Thoughts on Verification: The Verification Mindset (Part 1 of 2)

In this edition of ‘Thoughts on Verification’, Verilab consultants Alex Melikian and Jeff Montesano explore the ideas behind the “Verification Mind Games” paper Jeff co-authored and published at DVCon 2014. Jeff migrated to verification after many years in ASIC design. The inspiration for the paper comes from his exposure to the diverging philosophies and practices of ASIC/FPGA verification vis-à-vis design.

In part 1, Jeff and Alex discuss the main topics of the paper as well as Jeff’s experiences of transitioning from design to verification.


Alex Melikian: Hi everyone, welcome to another edition of ‘Thoughts on Verification’. I’m pleased to have my colleague Jeff Montesano here with me to discuss the paper he co-authored and published at this year’s DVCon entitled “Verification Mind Games: How to Think Like a Verifier”. Jeff, thanks for joining me on this edition of “Thoughts on Verification”.


Jeff Montesano: Thank you, Alex. Good to be here.


AM: I’m glad we have the chance to talk about this paper and explore the ideas behind it in more depth. For our readers who are not familiar with it, I highly recommend reading it, as the topic primarily focuses on the mindset of a verification engineer. At first glance, the topic may seem more suited to verification novices, and to a certain extent you could argue that position. However, I have seen some of the most experienced and skilled verification engineers, including myself, sometimes lose track of the verification mindset and realize our mistakes only later on in a project. I think a good way to start off is to bring up one of the initial statements in your paper, where you mention that when implementing verification tasks, there can be too much focus on ‘how’ something should be verified rather than ‘what’ should be verified. Can you elaborate on that statement?


JM: Sure. I think we’ve come to a point in the verification world where there’s a huge amount of emphasis on tools and methodologies. A good example of that is if you ever look at any job posting, pretty much all employers are going to ask is “Do you know UVM? Do you know SystemVerilog? Do you know some C? What simulator tools do you know?” If you can check off all those boxes then you can get the interview, and you might even get the job. However, knowing those things is pretty much just the first step. It’s what you do with those tools, and furthermore what you would do in the absence of those tools, that really defines verification excellence, in my opinion.


AM: That’s an interesting comment that will probably raise some eyebrows. What do you mean by that?


JM: What I mean is that there are good practices of verification that transcend the use of any verification tool or language. For example, there are obviously the basic things, like doing good verification planning and knowing how to write a good verification plan. That’s part of the ‘what’ question you mentioned earlier. But there are things that go deeper. You can be confronted with a design that uses a clock-data recovery scheme. I bring up this example in the paper. I’d say a huge percentage of even experienced verification engineers would take an approach to building a verification component for this design that is not ideal. They would implement a clock-data recovery algorithm in the verification component, whereas you could verify the design to the same extent, using a lot less effort and compute resources, by just instantiating a simple PLL circuit. In other words, we avoid the unnecessary complication of modeling clock-data recovery, while still achieving the goal of verifying that the design is operating within the correct parameters.

This is an example of how it doesn’t matter if you’re using UVM, or SystemVerilog, or constrained-random. If someone takes the approach of doing something more complex, like a clock-data recovery model instead of a simple PLL, they will miss the boat, because they could have accomplished their task much faster and just as effectively.
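To make Jeff’s example more concrete, here is a minimal SystemVerilog sketch of the simpler approach he describes: a small, locally generated sampling clock standing in for a full clock-data recovery model. The module and signal names and the 10 ns bit period are assumptions for illustration, not code from the paper.

```systemverilog
// Illustrative only -- not code from the paper. A tiny, locally
// generated sampling clock that stands in for a full clock-data
// recovery model: it aligns once to the first data transition and
// then free-runs at the nominal bit rate (10 ns here, an assumption).
module simple_pll #(parameter realtime BIT_PERIOD = 10ns)
  (input  logic serial_data,
   output logic recovered_clk);

  initial begin
    recovered_clk = 1'b0;
    @(serial_data);          // align to the first data transition
    #(BIT_PERIOD / 2);       // place the rising edge mid-bit
    forever begin
      recovered_clk = ~recovered_clk;
      #(BIT_PERIOD / 2);
    end
  end
endmodule

// A monitor can then simply sample the serial stream on the recovered
// clock, e.g.:
//   always @(posedge recovered_clk) rx_bits.push_back(serial_data);
```

A real bench might re-align periodically to tolerate frequency offset, but the point stands: the environment checks that data arrives at the specified rate without reproducing the DUT’s recovery algorithm.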


AM: Ah! I see your point. In this case, the ‘what’ we’re supposed to do as verification engineers is to verify that the design recovers the clock and data within the spec, as opposed to reproducing the clock-data recovery scheme itself. So from your experience, what is the most common error you see where someone has lost track of the mindset of a verification engineer, or in other words, is not thinking the way a verification engineer should?


JM: So just to give you and our readers a bit of background, I came from the design world. I was an ASIC designer for a number of years before moving to verification. And something I see a lot, and something I’ve even been guilty of, is snooping on signals from within the design under test. Snooping on design signals for reference can definitely speed things up, because your verification environment is instantly in perfect sync with the design. You can figure out what state it’s in, you can figure out what it’s doing, and your environment will stay synchronized with it.

However, you run the risk of missing some very important bugs, the most important ones being those associated with a design’s clock. If you’re directly using a design’s internal clock as a reference, you’re completely blind to any clocking issues. If the design’s clocking goes out of spec or has glitches, a verification component that snoops the DUT’s clock signal instead of independently verifying it will erroneously follow that clock and take in the data along with it. This is especially true in the case of RTL simulations.


AM: Hence the danger is that the verification component fails at its fundamental task and never flags an erroneous clock that is running at the wrong rate or glitching.


JM: That’s right. And so a guideline for this is that if you are forced to use an internal DUT signal as a reference, you must apply independent checks to that signal. It cannot be assumed to be correct, so it would be a mistake to rely on it blindly.
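As a rough sketch of that guideline, an independent checker could be bound onto the snooped clock so the environment never trusts it blindly. The instance path “dut”, the signal name “internal_clk”, the 10 ns nominal period and the 5% tolerance below are assumptions for illustration only, not from the paper.

```systemverilog
// Illustrative only -- an independent period check on a snooped clock.
// A glitch or a wrong frequency shows up as a period outside spec.
module clk_checker #(parameter realtime NOMINAL = 10ns,
                     parameter real     TOL     = 0.05)
  (input logic clk);

  realtime last_rise = -1;

  always @(posedge clk) begin
    if (last_rise >= 0) begin
      realtime period;
      period = $realtime - last_rise;
      if (period < NOMINAL * (1.0 - TOL) || period > NOMINAL * (1.0 + TOL))
        $error("clock period %0t is outside spec (nominal %0t)",
               period, NOMINAL);
    end
    last_rise = $realtime;
  end
endmodule

// Attached to the snooped signal without touching the design, e.g.:
//   bind dut clk_checker #(.NOMINAL(10ns)) u_clk_chk (.clk(internal_clk));
```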


AM: Backing up a little in our conversation, I’m glad you mentioned you came from a design background. I don’t want to take too much credit here, but I think I was the one who convinced you to make the jump over from design into verification.


JM: That’s right. You were.


AM: I think a lot of our readers can relate to this. We’re continuing to see a need for verification engineers, so to fill this demand we’re seeing some ASIC/FPGA engineers who were originally on the design side make the jump into verification. Those who make this jump have to deal with a lot of new things that are much more software-engineering oriented. They can find themselves in a world that is quite foreign to them in many ways.

For example, when someone makes the switch from design to verification, their day-to-day coding tasks no longer deal with modules or procedural statements, but rather with objects and inheritance structures. Furthermore, they may find that things change at the methodology level as well. They may have to get familiar with things like continuous integration, or even agile project management methodologies. I cannot emphasize enough that this transition from design to verification is not easy, and the challenge should not be underestimated.

What are your thoughts in regards to someone making that transition?


JM: Well, one big thing that always comes up is revision control. Let me explain: revision control has been around for a while, and the tools have gotten better with time. However, there are certain aspects of revision control that are very under-appreciated by a lot of people in the design and verification community. One of these is branching and merging, which, granted, was not easy to use for a time. I can recall pulling my hair out with some tools because the merge wouldn’t work out the way you wanted it to, and so you’d be reluctant to create branches. However, some of the more modern revision control tools we use today, Git for example, make branching and merging operations the most natural thing you can do. This creates so many opportunities to organize and co-develop your work in a cleaner, more seamless way.

Another thing is that, because verification languages have become object-oriented as you alluded to earlier, there are some aspects of a verification environment that are going to be very different from what you’d find in a Verilog module or VHDL entity. For example, you have the ability to employ polymorphism by using virtual functions. Now, I didn’t always know what virtual functions were for. At the start of my verification career I couldn’t have told you what a virtual function did, whereas today I consider them an indispensable part of my toolbox.
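For readers coming from the design side, here is a minimal SystemVerilog sketch of the polymorphism Jeff is referring to: a base class declares a virtual method, a derived class overrides it, and code holding a base-class handle picks up the derived behavior. The class names are made up for this illustration.

```systemverilog
// Illustrative only -- a minimal example of polymorphism via virtual methods.
class base_driver;
  virtual task drive_frame();
    $display("driving a plain frame");
  endtask
endclass

class error_injecting_driver extends base_driver;
  virtual task drive_frame();
    $display("driving a frame with an injected error");
  endtask
endclass

module tb;
  initial begin
    error_injecting_driver err_drv = new();
    base_driver            drv     = err_drv;  // upcast to a base handle
    drv.drive_frame();  // calls the derived override, not the base version
  end
endmodule
```

This is essentially the mechanism behind UVM factory overrides: the environment holds base-class handles, and a test swaps in derived behavior without the rest of the code knowing or caring.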


AM: Well, I’m happy to see you’ve made the transition and are adjusting quite well to the verification world. I can jokingly say you’re a “success story”, but of course this story is not exclusive to you. Quite frankly, anybody who is keen on new challenges and wants to learn new things can replicate your feat. I think we can both agree that taking the time to understand the verification mindset would be a good place to start.

(End of Part 1)

One Response to “Thoughts on Verification: The Verification Mindset (Part 1 of 2)”

  1. Tudor Timi Says:

    You really hit the nail on the head with your comment on version control, Jeff. Up to now, all of the projects I’ve worked on just check into the main branch and often break the build for someone else.
