Archive for May, 2013

Vanessa Cooper’s “Getting Started With UVM” Now Available

Thursday, May 30th, 2013 by Alex Melikian

Verilab is pleased to announce the publication of “Getting Started with UVM: A Beginner’s Guide”, authored by our very own Vanessa Cooper. It is now available in paperback and will soon be released in Kindle e-book format.

This book is perfect for those learning, or wanting to learn, UVM who are looking for reference material that goes beyond the standard User’s Guide. For additional insight and tips on UVM from a beginner’s point of view, you can read more in this “Thoughts on Verification” interview with Vanessa.

Congratulations Vanessa!

Thoughts on Verification: The Human Side of Best Practices (Part 3 of 3)

Thursday, May 9th, 2013 by Alex Melikian

In Part 3, Alex and Jason explain the team practice of a “Deep Dive” and how valuable it can be in completing a project on time and on budget. They also discuss the value of post-mortem and peri-mortem meetings, and how development teams can fully benefit from them. Parts 1 and 2 can be viewed here and here, respectively.

Alex Melikian: The next topic I want to cover is coordination between teams. Earlier, we mentioned the involvement of software teams and other departments that have a stake in the product and its features. One procedure that comes up in the planning stages and involves coordination between teams is the “Deep Dive”. Can you describe it for those who are not familiar with this approach?

Jason Sprott: Well, there are various techniques you can use to gel teams together and get everyone in the mindset of more joined-up thinking. One good technique is to have the product, marketing, software, and the various design and verification teams work together to think about the end product. A “deep dive” is when you throw these guys together in a room for a while and let them tear through requirements and functionality. You give them some time to mull these points over in their heads beforehand, and hope they actually do it and don’t turn up to the meeting cold. The idea is to get as much input from the wider team as possible, and to have access to the people who know both the deep implementation and the customer-level requirements.

You start them off with what someone thinks are the most important features of the design and then work through from there, so that you have the people, the major stakeholders, for all the different components in the system.

You have them in the room, so there are even guys in there who could say, “Hey, wait a minute. That feature isn’t important at all. Yeah, let’s take a note to speak to the client about that, because the guy we’re going to sell this to might not care about it.” Another thing someone might say is, “Now, that could go into the next chip.” So at one end, there are people in the room who can look at priorities, who can look at features, who can look at the ways things are used. And at the other end of the spectrum, there are guys in the team who can assess the impact of a given feature.

So in this setup, someone might say, “Oh, yeah, we’ve got to have this feature in the design.” At the same time, someone from the verification team can respond with, “Fair enough, that will take you only ten minutes to implement in a piece of RTL, but it’s going to take two weeks to verify, is that what you want? Is it worth two weeks?” That’s the sort of interaction that takes place, and that you would like to happen.

The advantage of having the stakeholders for the different components of the product in one place is that you can often get to the bottom of decisions made for specific requirements that may require a lot of effort. And of course, these decisions are made before the project is carried out, so all teams go in with their eyes wide open. It’s also a good opportunity to discuss the interactions and expectations between components. It all sounds so trivial, but many costly project “surprises” happen as a result of not taking these things into account. When these “surprises” happen, the best case is that additional effort has to be spent, and the worst case is that they manifest as an issue in the final product.

AM: It’s strange: I think a lot of our readers, when told it’s good to spend half a day, or even a full day, planning for a chip, would probably find that excessive. However, if you compare that amount of time to the overall length of a project, you realize the time spent on that full day is well worth it. It can really pay off over the course of the project, because not only is there more coordination and understanding across all the stakeholders, but you can also avoid those very costly “surprises” later on.

JS: That’s right. You don’t want to spend too much time on it, though, because things change. You could approach this meeting by asking the participants to come up with the top five things they care about in this design. That would be a good start, and then if you have time left, you’ll cover more. It must be emphasized to the participants that they should come in prepared, with some of the top things they care about and why they care about them. That will definitely drive some discussion.

Also, the other thing you really need when you’re going into a deep dive situation is historical data. This can really help in making more sensible and informed decisions. If you don’t have anything, meaning it’s all just made up on the day and relies solely on the knowledge and skill of the people in the room, then decisions can be affected by anything from vague recollections, emotions, heated debate, and bullying, to even hunger, as in participants asking “how close is it to lunch?”

AM: That’s funny, but true. I’m glad you mentioned ‘historical data’, as one of the ways that data can be generated or archived is through postmortem meetings. So what are some of the things you think make a postmortem very valuable to a team? And what is the specific knowledge a team should be retaining or archiving when they’re doing a postmortem?

JS: I like postmortems, but in fact what I prefer is to do both peri-mortem and postmortem reviews. Some things you can only know at the end of the project, but it’s also possible to take regular retrospectives along the way. These can be used to capture more detail and to affect decisions within the scope of the current project, unlike a postmortem.

You can record many things, but it’s bad to focus only on the negative. It’s interesting to record how and why things went well, not just things that went wrong.

It’s always good to get to the root cause of things. For example, if stuff goes wrong because the code wasn’t in good shape, the root cause might not be the code itself. The root cause might be that we didn’t do very good peer reviews, or we did the peer reviews too late. During in-project retrospectives, it’s sometimes possible to see patterns of things going wrong, also known as anti-patterns, and fix them within the scope of the project.

In terms of the statistics that you want to archive, you should look at things like bug rates, the types of bugs you had, where the bugs were found, how long it took to turn around the bug fixes, simulation times, and development times. You also want to record the project conditions that might affect those metrics, such as the amount of change to the design, time pressure, or the skill level of the development team. For example, you might expect to get very different results from an experienced team than from a team that has never worked together, or is inexperienced. This is important data that can be used to weight the metrics recorded.

Since everyone always wants to know, “How long do you spend planning?”, it’s useful to record that accurately in the project records.

A postmortem should also always look at your reuse. What did you actually reuse from a previous project? And by that, I mean what did you use without modifying the code itself? Did you have to hack about with it?

And also, what did you output from a reuse point of view? What did you give back that other projects may be able to use? That’s very valuable information.

What I would say is at the end of the day, the one thing you should really care about is that you have a clear picture of what you keep for the next project and what you dump.

AM: I think you summarized it well. Postmortems, or peri-mortems, are very good ways of applying the old virtue of learning from your mistakes and experiences in our business.

I’d love to hear more, but sadly that’s all the time we have for this conversation. Thanks a lot, Jason, for your time and your input. I hope our readers have gained a better appreciation of the human side of best practices in verification. I look forward to the next time.

JS: Okay, thank you, Alex.

Thoughts on Verification: The Human Side of Best Practices (Part 2 of 3)

Thursday, May 2nd, 2013 by Alex Melikian

In Part 2, Alex and Jason cover how new challenges, such as power-aware features and analog modeling, can affect verification planning. In addition, they discuss the approach of risk assessment and how it fits into the planning process. Part 1 can be viewed here.

Alex Melikian: Let’s move on to other facets coming into play in today’s silicon products. We’re seeing things like power management, mixed-signal, or analog mixed-signal design involved in making a product. How do you see these new areas affecting how a verification plan is put together?

Jason Sprott: Well, I guess low power is now a fairly well understood concept for verification, and we have power-aware simulation and the like. However, what people sometimes fail to understand is that low-power features in a design are a massive crosscutting concern. They can slice across all aspects of functionality. So instead of having a piece of functionality that’s exercised only in one mode, it could need to be exercised in many modes for different power scenarios. And that can explode into a massive state space. So I think this is another area for ruthless prioritization, where you can’t possibly verify everything in all the different modes. Usually, that’s just too onerous.

So you have to look at what really matters for the low-power modes you’ll be using. I think the question you have to ask yourself is: “Well, in real life, how would the software control the power modes?” You often have to work hand-in-glove with the software department to keep a tight rein on the verification requirements that end up in the product.

And it has to be well understood that if someone changes the software to do something else, it could have an impact by pushing into areas that haven’t been verified. I think this is an area that really, really needs to be tightly tied down.
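To make this crosscutting concern concrete, consider a minimal SystemVerilog coverage sketch. The enum values, class name, and sampling hook below are hypothetical, invented purely to show how crossing power states with functional modes multiplies the coverage space, and how prioritization can show up as explicit exclusions of combinations the software will never produce.

// Hypothetical power and functional modes for a single block.
typedef enum {PWR_FULL, PWR_RETENTION, PWR_OFF}        pwr_mode_e;
typedef enum {MODE_IDLE, MODE_NORMAL, MODE_HIGH_SPEED} op_mode_e;

class power_cross_coverage;
  pwr_mode_e pwr_mode;
  op_mode_e  op_mode;

  covergroup power_cross_cg;
    cp_power : coverpoint pwr_mode;  // 3 power states
    cp_func  : coverpoint op_mode;   // 3 functional modes

    // The cross is where the state space explodes: 3 x 3 = 9 bins here, but
    // real designs have dozens of modes on each axis. "Ruthless prioritization"
    // appears as explicit exclusion of combinations software will never create.
    x_power_func : cross cp_power, cp_func {
      ignore_bins no_high_speed_in_retention =
        binsof(cp_power) intersect {PWR_RETENTION} &&
        binsof(cp_func)  intersect {MODE_HIGH_SPEED};
    }
  endgroup

  function new();
    power_cross_cg = new();
  endfunction

  // Testbench hook: sample whenever the monitored power or functional state changes.
  function void sample_state(pwr_mode_e p, op_mode_e m);
    pwr_mode = p;
    op_mode  = m;
    power_cross_cg.sample();
  endfunction
endclass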

A completely different, and much less understood, area of verification is analog verification. That can be the functional border between the analog and digital domains, or true analog functional verification.

We have to consider what level of accuracy and performance we build the analog models to. This is an area that will have an increasing effect on verification as we go forward. We haven’t really tied down the languages everybody should use for modeling and for doing the verification. And as much as we understand functional coverage in the digital domain, what does it mean for analog verification?

You have to really tie down the requirements of the crossing between the domains. Sometimes analog isn’t that complicated in terms of the number of features, as compared to digital designs, but you have lots of discrete signals with independent timing. This can add up to a lot of combinations to verify. Not all combinations are valid, or even possible. Understanding what, if anything, can be cut back in this domain is essential for making prioritization decisions.

I think one of the biggest things to come out of analog functional verification is a more considered approach to modeling. The accuracy, performance, and validation of the models against the circuits are going to play a bigger part in verification in general. All of these things are being demanded of analog verification teams now, whereas in the past they weren’t given much consideration. There may not even have been an analog verification team. Chip-level simulations often ended up with models of the wrong accuracy (typically too accurate for the context), or models that were buggy and not validated.

Until recently, chip teams haven’t been asking for proof that analog models have been verified against the circuit, or specifying requirements for the models in terms of performance and accuracy. So yeah, this is a big area, and it’s going to have quite an effect, I think, going forward.
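As an illustration of the accuracy trade-off, here is a deliberately simple real-number DAC model sketched in SystemVerilog. The module name, parameters, and the choice to ignore settling time, noise, and nonlinearity are assumptions made for the example; a model at this level of detail may be adequate for chip-level connectivity and sequencing checks, but it only earns a place in the flow once it has been validated against the circuit it stands in for.

// Hypothetical ideal DAC model: instantaneous, perfectly linear conversion.
// Good enough for connectivity and sequencing checks; too coarse for anything
// that depends on settling time, noise, or nonlinearity.
module dac_behav_model #(
  parameter int  WIDTH = 8,
  parameter real VREF  = 1.2   // full-scale reference voltage
) (
  input  logic             clk,
  input  logic [WIDTH-1:0] code,  // digital input code
  output real              vout   // modeled analog output
);
  always_ff @(posedge clk)
    vout <= VREF * real'(code) / ((2.0 ** WIDTH) - 1.0);
endmodule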

AM: It’s interesting to see how ‘prioritization’ is applied in the context of analog modeling, that is to say, how close to the real thing you really want the model to be, keeping in mind the implications for the schedule.

We’ve been talking about the many elements involved in the process of planning throughout this discussion. Let me bring up a question that covers this theme from a different perspective. Would you agree that one approach engineers have to take when doing their planning is applying a risk assessment to each feature, as opposed to thinking along the lines of step-by-step procedures?

JS: Risk assessment is certainly one aspect of planning. I think what you really have to aim for with risk assessment is better predictability of results. Risk and priority are two separate things. You can have something that’s very high risk but very low priority, and vice versa. So I think risk assessments are part of the planning, but how far you want to go depends on your product requirements. Some designs can tolerate more risk than others. For example, high-reliability designs for automotive or medical applications typically require a much more detailed analysis. This is relevant to planning, as you have to ensure you don’t eliminate necessary work from the schedule. Sometimes this work is only highlighted by risk assessment.

I think you’ve got to decide how much risk assessment you do, and prioritize the risks, as well. But you do need to know about all the features in order to do these things. So you can’t just look at them in isolation and say, “Ah, yeah, we’re going to consider all the risks up front without knowing what the features are” because the risks are related to the features. So you need to do both.
