
Thoughts on Verification: The Human Side of Best Practices (Part 2 of 3)

In Part 2, Alex and Jason cover how new challenges, such as power-aware features and analog modeling, can affect verification planning. In addition, they discuss the approach of risk assessment and how it fits into the planning process. Part 1 can be viewed here.

Alex Melikian: Let's move on to other facets coming into play in today’s silicon products. We’re seeing things like power management, mixed-signal, or analog mixed-signal design involved in making a product. How do you see these new areas affecting how a verification plan is put together?

Jason Sprott: Well, I guess now low power is a fairly well understood concept for verification, and we have power-aware simulation and the like. However, what people sometimes fail to understand is that low-power features in a design are a massive cross-cutting concern. They can slice across all aspects of functionality. So instead of having a piece of functionality that’s exercised only in one mode, it could need to be exercised in many modes for different power scenarios. And that can explode into a massive state space. So I think this is another area of ruthless prioritization, where you can’t possibly verify everything in all different modes. Usually, that’s just too onerous.

So you have to look at what really matters for the low power modes you’ll be using. So the question you have to ask yourself is: “Well, in real life, how would the software control the power modes?” You often have to work hand-in-glove with the software department to keep a tight rein on the verification requirements that end up in the product.
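The pruning Jason describes can be sketched roughly in code: take the full cross product of features and power modes, then keep only the combinations the software will actually drive. This is a minimal illustrative sketch; the mode and feature names, and the legal-scenario set, are invented for the example and not from any real design.

```python
from itertools import product

# Hypothetical power modes and features -- names are illustrative only.
POWER_MODES = ["active", "idle", "sleep", "deep_sleep"]
FEATURES = ["dma_transfer", "uart_rx", "timer_wakeup", "adc_sample"]

# Full cross product: every feature exercised in every power mode.
full_space = list(product(FEATURES, POWER_MODES))

# Scenarios the software actually drives: e.g. DMA only runs when
# active, the UART can receive in active or idle, and so on.
software_legal = {
    ("dma_transfer", "active"),
    ("uart_rx", "active"), ("uart_rx", "idle"),
    ("timer_wakeup", "sleep"), ("timer_wakeup", "deep_sleep"),
    ("adc_sample", "active"), ("adc_sample", "idle"),
}

prioritized = [combo for combo in full_space if combo in software_legal]

print(f"full space: {len(full_space)} combinations")    # 16
print(f"prioritized: {len(prioritized)} combinations")  # 7
```

Even in this toy case, the software-driven view cuts the space by more than half; on a real design the reduction, and the importance of agreeing it with the software team, is far larger.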

And it has to be well understood that if someone changes the software to do something else, it could have an impact by pushing into areas that haven’t been verified. I think this is an area that really, really needs to be tightly tied down.

A completely different area of verification and much less understood is analog verification. That can be the functional border between the analog and digital domains, or true analog functional verification.

We have to consider what level of accuracy and performance we build the analog models to. This is an area that will have an increasing effect on verification as we go forward. We haven’t really tied down the languages everybody should use for modeling and for doing the verification. And as much as we understand functional coverage in the digital domain, what does it mean for analog verification?

You have to really tie down the requirements of the crossing between the domains. Sometimes, analog isn’t that complicated in terms of the number of features, as compared to digital designs, but you have lots of discrete signals with independent timing. This can add up to a lot of combinations to verify. Not all combinations are valid, or possible. Understanding what, if anything, can be cut back in this domain is essential to making prioritization decisions.
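The combination problem on the domain crossing can be sketched the same way: enumerate the discrete control signals and prune combinations the design can never produce. The signal names and validity rules below are hypothetical, purely to show the shape of the analysis.

```python
from itertools import product

# Hypothetical discrete signals crossing the analog/digital boundary.
SIGNALS = ["enable", "reset_n", "trim_load", "bias_on"]

all_combos = list(product([0, 1], repeat=len(SIGNALS)))  # 2^4 = 16

def is_valid(combo):
    """Prune combinations that cannot occur in the (invented) design."""
    enable, reset_n, trim_load, bias_on = combo
    if reset_n == 0 and trim_load == 1:
        return False  # trim data cannot be loaded while in reset
    if enable == 1 and bias_on == 0:
        return False  # block cannot run without its bias current
    return True

valid = [c for c in all_combos if is_valid(c)]
print(f"{len(valid)} of {len(all_combos)} combinations need verifying")
```

Two simple legality rules already remove nearly half the space; the real work, as Jason says, is deciding which rules are genuinely safe to apply.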

I think one of the biggest things to come out of analog functional verification is a more considered approach to modeling. The accuracy, performance, and validation of the models against the circuits are going to play a bigger part in verification in general. All of these things are being demanded now of the analog verification team, whereas in the past, they weren’t given much consideration. There may not have even been an analog verification team. Chip-level simulations often ended up with models of the wrong accuracy (typically too accurate for the context), or were buggy and not validated.
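Validating a model against the circuit, to a stated accuracy requirement, can be as simple in principle as the following sketch. The sample data and the 50 mV tolerance are invented for illustration; in practice the comparison would run over circuit-level simulation results at agreed operating points.

```python
def validate_model(model_samples, circuit_samples, tolerance):
    """Return worst-case absolute error and whether it meets tolerance."""
    worst = max(abs(m - c) for m, c in zip(model_samples, circuit_samples))
    return worst, worst <= tolerance

# Illustrative data: behavioral model vs. circuit simulation (volts).
circuit = [0.00, 0.45, 0.88, 1.20, 1.19]
model   = [0.00, 0.44, 0.90, 1.18, 1.21]

worst, ok = validate_model(model, circuit, tolerance=0.05)
print(f"worst-case error {worst:.3f} V, within spec: {ok}")
```

The point is less the arithmetic than the discipline: the tolerance is a specified requirement, and the validation result is recorded evidence, exactly the proof Jason says chip teams have historically not asked for.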

Until recently, chip teams haven’t been asking for proof that analog models have been verified against the circuit, or specifying requirements for the models in terms of performance and accuracy. So yeah, this is a big area and it’s going to have quite an effect, I think, going forward.

AM: Interesting to see how ‘prioritization’ applies in the context of analog modeling, that is to say, how close to the real thing you really want a model to be, keeping in mind the implications for the schedule.

We’ve been talking about the many elements involved in the process of planning throughout this discussion. Let me bring up a question that would cover this theme from a different perspective. Would you agree that one approach that engineers have to take when they’re doing their planning is applying a risk assessment for each feature as opposed to thinking along the lines of step-by-step procedures?

JS: Risk assessment is certainly one aspect of planning. I think what you really have to aim for with risk assessment is better predictability of results. Risk and priority are two separate things. You can have something that’s a very high risk but very low priority, and vice versa. So I think risk assessments are part of the planning, but how far you want to go depends on your product requirements. Some designs can tolerate more risk than others. For example, high-reliability designs for automotive or medical typically require a much more detailed analysis. This is relevant to planning, as you have to ensure you don’t eliminate necessary work from the schedule. Sometimes this work is only highlighted by risk assessment.

I think you’ve got to decide how much risk assessment you do, and prioritize the risks, as well. But you do need to know about all the features in order to do these things. So you can’t just look at them in isolation and say, “Ah, yeah, we’re going to consider all the risks up front without knowing what the features are” because the risks are related to the features. So you need to do both.
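Jason's distinction between risk and priority, and his point that you need the full feature list first, can be sketched as a simple two-dimensional scoring pass over the features. The feature names and scores below are invented for illustration; any real scheme would define its scales and weighting from the product requirements.

```python
# Hypothetical per-feature scores on two independent axes (1 = low, 5 = high).
features = {
    # feature:             (risk, priority)
    "pcie_link_training":  (5, 5),   # high risk, high priority
    "legacy_uart":         (4, 1),   # high risk, low priority
    "gpio_mux":            (1, 5),   # low risk, high priority
    "debug_scan_chain":    (2, 2),
}

# Rank verification effort by risk and priority together; neither
# axis alone is enough, since the two are separate things.
ranked = sorted(features,
                key=lambda f: features[f][0] * features[f][1],
                reverse=True)

for name in ranked:
    risk, prio = features[name]
    print(f"{name:20s} risk={risk} priority={prio} score={risk * prio}")
```

The ranking only makes sense once every feature is in the table, which is Jason's closing point: risks attach to features, so risk assessment and feature enumeration have to be done together.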
