Thoughts on Verification: The Human Side of Best Practices (Part 3 of 3)

In Part 3, Alex and Jason explain the team practice of a “Deep Dive” and how valuable it can be to completing a project on time and on budget. They also discuss the value of post-mortem and peri-mortem meetings, and how development teams can fully benefit from them. Parts 1 and 2 can be viewed respectively here and here.

Alex Melikian: The next topic I want to cover is coordination between teams. Earlier, we mentioned the involvement of software teams and other departments that have a stake in the product and its features. One procedure that comes up in the planning stages involving coordination between teams is the “Deep Dive”. Can you describe it for those who are not familiar with this approach?

Jason Sprott: Well, there are various techniques you can use to gel teams together and get everyone in the mindset of more joined-up thinking. One good technique is to have the product, marketing, software, various design, and verification teams work together to think about the end product. A “deep dive” is when you throw these guys together in a room for a while and let them tear through requirements and functionality. You give them some time to mull these points over beforehand and hope they actually do it and don’t turn up to the meeting cold. The idea is to get as much input from the wider team as possible, and to have access to the people who know both the deep implementation and the customer-level requirements.

You start them off with what someone thinks are the most important features of the design and then work through them, so that you have the people, the major stakeholders, for all the different components in the system.

You have them in the room, so there are even guys in there who could say, “Hey, wait a minute. That feature isn’t important at all. Yeah, let’s take a note to speak to the client about that, because the guy we’re going to sell this to might not care about it.” Another thing someone might say is, “Now, that could go into the next chip.” So at one end, there are people in the room who can look at priorities, features, and the ways things are used. And at the other end of the spectrum, there are guys on the team who can assess the impact of a given feature.

So in this setup, someone might say, “Oh, yeah, we’ve got to have this feature in the design.” At the same time, someone from the verification team can respond with, “Fair enough, that will take you only ten minutes to implement in a piece of RTL, but it’s going to take two weeks to verify. Is that what you want? Is it worth two weeks?” That’s the sort of interaction you would like to happen.

The advantage of having the stakeholders for the different components of the product in one place is that you can often get to the bottom of decisions made for specific requirements that may require a lot of effort. And of course, these decisions are made before the project is carried out, so all teams go in with their eyes wide open. It’s also a good opportunity to discuss the interactions and expectations between components. It all sounds so trivial, but many costly project “surprises” happen as a result of not taking these things into account. When these “surprises” happen, the best case is that additional effort has to be spent; the worst case is that they manifest as an issue in the final product.

AM: It’s strange: I think a lot of our readers, when told it’s good to spend half a day, or even a full day, planning for a chip, would probably find that excessive. However, if you compare that amount of time to the overall length of a project, you realize the time spent on that full day is well worth it. It can really pay off over the course of the project, because not only is there more coordination and understanding across all the stakeholders, but you can also avoid these very costly “surprises” in the future.

JS: That’s right. You don’t want to spend too much time, though, because things change. You could approach this meeting by asking the participants to come up with the top five things they care about in the design. That would be a good start, and then if you have time left, you’ll cover more. It must be emphasized to the participants that they come prepared, with some of the top things they care about and why they care about them. That will definitely drive some discussion.

The other thing you really need when you’re going into a deep dive is historical data. This can really help in making more sensible and informed decisions. If you don’t have anything, meaning it’s all just made up on the day and relies solely on the knowledge and skill of the people in the room, then decisions can be affected by anything from vague recollections, emotions, heated debate, and bullying, to even hunger, as in “how close is it to lunch?”

AM: That’s funny, but true. I’m glad you mentioned ‘historical data’, as one of the ways that data can be generated and archived is through postmortem meetings. So what are some of the things you think make a postmortem very valuable to a team? And what specific knowledge should a team retain or archive when doing a postmortem?

JS: I like postmortems, but what I actually prefer is to do both peri- and postmortem reviews. Some things you can only know at the end of the project, but it’s also possible to hold regular retrospectives along the way. These can be used to capture more detail and affect decisions within the scope of the current project, unlike a postmortem.

You can record many things, but it’s bad to focus only on the negative. It’s interesting to record how and why things went well, not just what went wrong.

It’s always good to get to the root cause of things. For example, if stuff goes wrong because the code wasn’t in good shape, the root cause might not be the code itself. The root cause might be that we didn’t do very good peer reviews, or we did them too late. During in-project retrospectives, it’s sometimes possible to see patterns of things going wrong, also known as anti-patterns, and fix them within the scope of the project.

In terms of the statistics you want to archive, you should look at things like bug rates, the types of bugs you had, where the bugs were found, how long it took to turn around a bug fix, simulation times, and development times. You also want to record the project conditions that might affect those metrics, such as the amount of change to the design, time pressure, or the skill level of the development team. For example, you might expect very different results from an experienced team than from a team that has never worked together, or is inexperienced. This is important data that can be used to weight the recorded metrics.

Since everyone always wants to know, “How long do you spend planning?”, it’s always useful to record that accurately in the project records.

Postmortems should also always look at reuse. What did you actually reuse from a previous project? And by that, I mean: what did you use without modifying the code itself? What did you have to hack about with?

Also, what did you output from a reuse point of view? What did you give back that other projects may be able to use? That’s very valuable information.

What I would say is, at the end of the day, the one thing you should really care about is having a clear picture of what you keep for the next project and what you dump.
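The kinds of records Jason describes lend themselves to a simple structured archive that can be compared across projects. As a minimal sketch in Python, assuming a hypothetical schema (all field names here are illustrative, not any standard format):

```python
from dataclasses import dataclass, field

@dataclass
class PostmortemRecord:
    """One project's archived metrics; fields are illustrative assumptions."""
    project: str
    bug_count: int                                         # total bugs found
    bugs_by_location: dict = field(default_factory=dict)   # where bugs were found
    avg_fix_turnaround_days: float = 0.0                   # bug-fix turnaround
    planning_days: float = 0.0                             # time spent planning
    team_experience: str = "unknown"                       # condition used to weight metrics
    reused_unmodified: list = field(default_factory=list)  # reused as-is
    reused_modified: list = field(default_factory=list)    # needed hacking about

# Example entry for a hypothetical project
record = PostmortemRecord(
    project="chip_a",
    bug_count=17,
    bugs_by_location={"rtl": 12, "testbench": 5},
    avg_fix_turnaround_days=1.5,
    planning_days=1.0,
    team_experience="experienced",
    reused_unmodified=["uart_agent"],
    reused_modified=["axi_scoreboard"],
)
print(record.bug_count)  # 17
```

Even a flat record like this gives the next deep dive something concrete to argue from, rather than relying on vague recollections in the room.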

AM: I think you summarized it well. Postmortems, or peri-mortems, are very good ways of applying the old virtue of learning from your mistakes and experiences in our business.

I’d love to hear more, but sadly that’s all the time we have for this conversation. Thanks a lot, Jason, for your time and your input. I hope our readers have gained a better appreciation of the human side of best practices in verification. I look forward to next time.

JS: Okay, thank you, Alex.
