
Thoughts on Verification: The Verification Mindset (Part 2 of 2)

Monday, October 20th, 2014 by Alex Melikian

In part 2, Verilab consultants Alex Melikian and Jeff Montesano continue their discussion on the topics covered in the “Verification Mind Games” paper Jeff co-authored and published at DVCon 2014. Part 1 can be viewed here.

Alex Melikian: So as your paper explains, and as we’ve determined in this conversation, one part of the verification mindset is determining ‘what’ has to be verified. However, there’s something else, something that involves taking a step back and asking of the DUT: “Can a particular scenario happen?” In your paper, you give the example of a design that implements the drawing of shapes, and a test scenario where this design is given the task of drawing a circle with a radius of zero. Describe this particular situation for our readers.

Jeff Montesano: Yeah, this was an interesting case. It was a design that was responsible for drawing circles, and took in an input which was the radius. The designers at the time specified the range of radii that this design needed to handle. When we asked the question, “Well, what happens if you input a radius of zero into it?”, the designers came back and said, “It’s invalid. You don’t need to test it. We didn’t design the thing to handle it, and there’s no point in testing it.” And while some verifiers might stop there and say, “Well, if the designer thinks it’s not valid, we shouldn’t test it”, we decided to go ahead with this case anyway.

In fact, we were able to show that in the broader system, there were circles of radius zero being generated all the time, and so the design genuinely had to handle it. When we finally ran a test with zero radius, it resulted in the design hanging. So as it turned out, it was very important that we brought this up, in spite of what the designers had suggested to us.

AM: This looks like a case where the verification engineer has to be something of an independent thinker, and consider all the possibilities that can be applied to the design, as opposed to following only what the design spec spells out as handled.

On the other hand, I can understand the counterargument to this. As always, time is limited, and the verification engineer must judge the verification requirements carefully. For example, they have to ask themselves: “Is it worth it to verify something that is invalid? Is this a case of garbage in, garbage out? Or is there a specific requirement stating that a given condition can never occur?” This all sounds easy, but it’s really not.

Oftentimes, the verification engineer is the first one to try out all the possibilities or combinations of how a design feature is used, especially when constrained-random stimulus is applied. This can lead to particular cases of stimulus, some of which are outside the intention of how the feature is supposed to be used. Then again, it may lead to a corner case that is genuinely valid but wasn’t accounted for in the design.

So there will be situations where the verification engineer may be in uncharted territory. It can be a fine line between an invalid case and a corner case nobody thought of. However, I think going back to that initial question will really help: “What am I verifying here?” or, in other words, “Am I doing something that is of value to verify the design?”

JM: Totally agree with everything you’ve said there. There is a fine line, and it takes experience to determine what is useful to go after, and what is a waste of time. I can recall a client I worked with in the past, who apparently had been burned by some type of internal issue in post-silicon, and so they wanted verification of the parity between interfaces of the different RTL blocks. Now, if an ASIC has issues that cause it to be unable to communicate properly within itself, it would never pass the scan test; it would just be sorted into the garbage bin. And so a verifier should never try to verify parity between sub-blocks, because it’s a waste of time. It turned out in this case we didn’t have a choice. But as expected, we never found a bug there.

AM: Moving on to another topic in your paper, you mention that testing a design in creative ways is a core competency of the verification mind. What are some of the creative ways that you’ve done verification?

JM: I can think of something that came up recently. I was verifying an interrupt handler, and it was a circuit of the ‘clear on write one’ type. I had a test running where I would generate the interrupt, read it, write a one, read it back, and make sure it got cleared to zero, and I thought I was done. However, the question arose: “Well, what would happen if you wrote a zero to those bits?” Now, at that point, I could’ve written a brand new test which would generate the interrupt, write a zero, give it a brand new name, do all that stuff.

But I thought of a more creative way to do it. What I was able to do is take the existing test class, make an extension of it, and add a virtual function that was defined to be empty in the base class [the original test] but defined to write zeros to all the interrupt registers in the derived class [the new test]. The original test was then modified to call this virtual function right after having provoked all of the interrupts, and the rest of the base test would proceed exactly as before.

So all the checks from the initial test would fail if any of those writes with zeros had had any effect. And so with very few lines of code, I was able to leverage what I’d already done, and use virtual functions in a creative way.
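
To sketch what that pattern could look like in System Verilog with UVM (a minimal sketch only; the class and helper names here are hypothetical stand-ins, not the actual project code):

    // Minimal sketch of the virtual-function hook pattern (all helper
    // and class names are hypothetical).
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class base_irq_test extends uvm_test;
      `uvm_component_utils(base_irq_test)

      function new(string name = "base_irq_test", uvm_component parent = null);
        super.new(name, parent);
      endfunction

      task provoke_all_interrupts();     endtask  // hypothetical stimulus
      task check_and_clear_interrupts(); endtask  // existing W1C checks

      // Hook: empty in the base test, overridden in derived tests.
      virtual task post_interrupt_hook(); endtask

      virtual task run_phase(uvm_phase phase);
        phase.raise_objection(this);
        provoke_all_interrupts();
        post_interrupt_hook();         // no-op in the base test
        check_and_clear_interrupts();  // base-test checks run unchanged
        phase.drop_objection(this);
      endtask
    endclass

    class write_zero_irq_test extends base_irq_test;
      `uvm_component_utils(write_zero_irq_test)

      function new(string name = "write_zero_irq_test", uvm_component parent = null);
        super.new(name, parent);
      endfunction

      // Write zeros to every interrupt register; if those writes have
      // any effect, the base test's existing checks will fail.
      virtual task post_interrupt_hook();
        // hypothetical: write 0 to all interrupt status registers here
      endtask
    endclass

The derived test inherits all of the base test’s stimulus and checking; the only new code is the override.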


AM: So you’re not only creative in what you were testing, but also creative in how you reuse the checks and the tests.


JM: You bet. Sure.


AM: So, following this theme of creativity, another area where I think creativity is important is “debug-ability”. What I mean by “debug-ability” is a measure of how easy it is to debug something using the test bench or VIP you’ve created. One example I’ve personally seen is the generation of HTML files to help visualize information or data processed in a simulation, making it much easier for verification engineers or designers to debug something. Tools like this become very useful for designs that implement serial data protocols.
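
As a rough illustration of such a utility (a sketch only, with hypothetical names, not a tool from any particular project), it can be as simple as a class that writes decoded transactions into an HTML table:

    // Hypothetical sketch: dump decoded transactions into an HTML table
    // that can be opened in any browser for post-simulation debug.
    class html_logger;
      int fd;

      function new(string path = "txn_log.html");
        fd = $fopen(path, "w");
        $fdisplay(fd, "<table border=\"1\"><tr><th>Time</th><th>Kind</th><th>Data</th></tr>");
      endfunction

      // Log one decoded transaction as a table row.
      function void log(string kind, logic [31:0] data);
        $fdisplay(fd, "<tr><td>%0t</td><td>%s</td><td>0x%08h</td></tr>",
                  $realtime, kind, data);
      endfunction

      function void close();
        $fdisplay(fd, "</table>");
        $fclose(fd);
      endfunction
    endclass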

I think it should be mentioned that making debugging easier is part of our job as verifiers. You touch on this in your paper, stating that prioritizing ‘debug-ability’ is important and that sufficient time should be allocated to develop tools like the one I mentioned. What about you? What have you seen?


JM: Again, I totally agree. Designers don’t really need to think about ‘debug-ability’ in their day-to-day work. They have other things to think about, right? They have to think about making their design meet the specification with sufficient performance, and with no bugs in it. Whereas when you’re writing code to do verification, debug-ability is right up there.

So one example of this is if you have bidirectional buses on a VIP. Now if you were to implement a single-bit bidirectional bus in your VIP, you could create an interface that has a single bit, and you can declare it as an “inout” in System Verilog, for example, and you’d be able to implement the protocol correctly. However, at the test bench level, if you had multiple instances of these, or one or more designs under test communicating with your VIP, you would never be able to tell who was driving and who was receiving, because your VIP only has one signal, and you’d probably have to resort to print statements at that point.

A better way to do it is to actually split out the bidirectional bus to a driver and a receiver at the VIP level, and that way, you can have visibility on whether the VIP is driving or receiving. In the paper, I give some code examples of how that can be done.
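
To give a feel for that approach (a simplified sketch with hypothetical signal names; the paper’s actual examples go further), the VIP’s interface can expose separate drive and receive signals and resolve them onto the shared wire:

    // Hypothetical sketch: a single-bit bidirectional pin split into
    // separate drive/receive signals inside the VIP's interface.
    interface bidir_if (inout wire pin);
      logic drive_en;   // 1 when the VIP is driving the bus
      logic drive_val;  // value the VIP drives
      wire  recv_val;   // value observed on the bus

      // Drive the shared wire only when enabled; otherwise tri-state it.
      assign pin      = drive_en ? drive_val : 1'bz;
      assign recv_val = pin;
    endinterface

With drive_en visible in the waveform viewer, it’s immediately clear at any moment whether the VIP is driving or receiving.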


AM: That’s a good tip; tri-state buses can be tricky at times. One last topic I’d like to touch upon from your paper is the statement that the verification mindset should focus on coverage, not test cases. Can you elaborate on that?


JM: Sure. So we now have these amazing tools like constrained-random verification. But it’s hard for us sometimes to break away from the tradition of writing a lot of tests. It turns out that the more tests we write, the less we’re making use of the power of these tools. Ideally, you would have one test case that randomizes a configuration object, you’d have a whole lot of self-checking in your environment, and you’d just rerun that same test case as many times as you can. Thousands of times. And through that mechanism, you’d turn up a whole lot of bugs, and make full use of the tools.
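
A minimal sketch of that “one test, many seeds” idea, assuming a UVM environment (the configuration fields shown are hypothetical):

    // Hypothetical sketch: one test that randomizes a configuration object;
    // the same test is simply rerun with different seeds.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class env_cfg extends uvm_object;
      `uvm_object_utils(env_cfg)
      rand int unsigned num_frames;       // hypothetical config fields
      rand bit          enable_flow_ctrl;
      constraint c_frames { num_frames inside {[1:100]}; }
      function new(string name = "env_cfg"); super.new(name); endfunction
    endclass

    class random_cfg_test extends uvm_test;
      `uvm_component_utils(random_cfg_test)
      env_cfg cfg;

      function new(string name = "random_cfg_test", uvm_component parent = null);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        cfg = env_cfg::type_id::create("cfg");
        if (!cfg.randomize())
          `uvm_fatal("CFG", "configuration randomization failed")
        // The self-checking environment picks up the randomized config;
        // each rerun with a new seed explores a different legal configuration.
        uvm_config_db#(env_cfg)::set(this, "*", "cfg", cfg);
      endfunction
    endclass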


AM: It sounds like you’re talking about an overuse of directed test cases instead of exploiting the constrained-random nature of the tools. I have seen this too. Usually at the start of a project there’s a lot of pressure to generate test cases that are simple and straightforward, and I would have to say it’s normal to expect this, because the initial version of the design only has a limited feature set.

However, it’s really important for verification engineers to keep in mind the ‘law of diminishing returns’, if I can borrow the term from the field of economics. What I mean by this is that there comes a point in the course of a project where directed test cases become decreasingly efficient, because each additional directed test has a diminishing chance of finding bugs. Eventually, the verification team has to let go of the directed approach, make the jump to the constrained-random approach, and go on from there. I think we agree that it’s the only way to efficiently find undiscovered bugs.


JM: Yeah. Just to add to the topic of coverage – there’s an important point that is sometimes missed, which is that coverage and checks need to be almost married together. The reason is that you don’t do checks without coverage, and you don’t do coverage without checks. Why? Because if you do coverage without checks, you might think you hit something when you weren’t even checking at the time. That’s a false positive, and it is in fact very dangerous. The opposite is doing checks without coverage. In that case, you’re randomizing things and doing a bunch of checks, but you don’t know which cases actually occurred, especially in the type of “one test” approach I’m describing here.
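
One way to keep the two married (again a sketch, with hypothetical transaction fields) is to sample coverage only at the point where the scoreboard actually performs a check:

    // Hypothetical sketch: sample coverage only at the point of a check,
    // so every coverage hit implies the scenario was actually checked.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class checked_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(checked_scoreboard)

      covergroup checked_cov with function sample(int unsigned len, bit flow_ctrl);
        cp_len : coverpoint len { bins small = {[1:10]}; bins large = {[11:100]}; }
        cp_fc  : coverpoint flow_ctrl;
      endgroup

      function new(string name, uvm_component parent);
        super.new(name, parent);
        checked_cov = new();
      endfunction

      // Called for every observed transaction (fields are hypothetical).
      function void check_txn(int unsigned actual_len, int unsigned expected_len,
                              bit flow_ctrl);
        if (actual_len != expected_len)
          `uvm_error("SB", "length mismatch")
        // Coverage is sampled here and nowhere else.
        checked_cov.sample(actual_len, flow_ctrl);
      endfunction
    endclass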


AM: Definitely. When constrained-random, coverage-driven verification is done, both the constrained-random and the coverage parts have to be developed together. Coverage is the only indicator you can use to convince someone not familiar with constrained-random that a situation is being exercised, and that there’s therefore no need to write a directed test. It definitely pays dividends to go the constrained-random way and collect coverage as soon as you can, as opposed to trying to hit each and every situation one at a time.

I think that’s a good note to end on. Jeff, thanks again for joining me on this edition of ‘Thoughts on Verification’.


JM: Thank you, Alex. Take care.

Thoughts on Verification: The Verification Mindset (Part 1 of 2)

Tuesday, October 7th, 2014 by Alex Melikian

In this edition of ‘Thoughts on Verification’, Verilab consultants Alex Melikian and Jeff Montesano explore the ideas behind the “Verification Mind Games” paper Jeff co-authored and published at DVCon 2014. Jeff Montesano migrated to verification after many years in ASIC design. The inspiration behind this paper comes from his exposure to the diverging philosophies and practices of ASIC/FPGA verification vis-à-vis design.

In part 1, Jeff and Alex discuss the main topics of the paper as well as Jeff’s experiences of transitioning from design to verification.


Alex Melikian: Hi everyone, welcome to another edition of ‘Thoughts on Verification’. I’m pleased to have my colleague Jeff Montesano here with me to discuss the paper he co-authored and published at this year’s DVCon entitled “Verification Mind Games: How to Think Like a Verifier”. Jeff, thanks for joining me on this edition of “Thoughts on Verification”.


Jeff Montesano: Thank you, Alex. Good to be here.


AM: I’m glad we have the chance to talk about this paper and explore the ideas behind it more deeply. For our readers who are not familiar with it, I highly recommend reading it, as it focuses primarily on the mindset of a verification engineer. At first glance, the topic may seem more suited to novices of verification, and to a certain extent you could argue that position. However, I have seen some of the most experienced and skilled verification engineers, myself included, sometimes lose track of the verification mindset and realize our mistakes only later in a project. I think a good way to start off is to bring up one of the initial statements in your paper, where you mention that when implementing verification tasks, there can be too much focus on ‘how’ something should be verified rather than ‘what’ should be verified. Can you elaborate on that statement?


JM: Sure. I think we’ve come to a point in the verification world where there’s a huge amount of emphasis on tools and methodologies. A good example of that is if you ever look at any job posting, pretty much all employers are going to ask is “Do you know UVM? Do you know System Verilog? Do you know some C? What simulator tools do you know?” If you can check off all those boxes then you can get the interview, and you might even get the job. However, knowing those things is pretty much just the first step. It’s what you do with those tools, and furthermore what you would do in the absence of those tools, that really defines verification excellence, in my opinion.


AM: That’s an interesting comment that will probably raise some eyebrows. What do you mean by that?


JM: What I mean is that there are good practices of verification that transcend the use of any verification tool or language. For example, there are obviously the basic things, like doing good verification planning and knowing how to write a good verification plan. That’s part of the ‘what’ question you mentioned earlier. But there are things that go deeper. You can be confronted with a design that uses a clock-data recovery scheme; I bring up this example in the paper. I’d say a huge percentage of even experienced verification engineers, when building a verification component to interact with this design, would take an approach that is not ideal. They would implement a clock-data recovery algorithm in the verification component, whereas you could verify the design to the same extent, using a lot less effort and compute resources, by just instantiating a simple PLL circuit. In other words, we avoid the unnecessary complication of modeling clock-data recovery, while still achieving the goal of verifying that the design is operating within the correct parameters.

This is an example of how it doesn’t matter whether you’re using UVM, or System Verilog, or constrained-random. If someone takes the approach of building something complex like a clock-data recovery model instead of using a simple PLL, they will miss the boat, because they could have accomplished their task much faster and just as effectively.
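
To make the idea concrete (a behavioral sketch only; the bit-period parameter is hypothetical and the paper’s actual example differs), the testbench can align a fixed-frequency sampling clock to the first data transition instead of modeling full clock-data recovery:

    // Hypothetical sketch: instead of modeling clock-data recovery, align a
    // fixed-frequency sampling clock to the first transition of the data.
    module simple_pll #(parameter realtime BIT_PERIOD = 1ns)
                       (input  logic data,
                        output logic recovered_clk);
      initial begin
        recovered_clk = 0;
        @(data);  // wait for the first data transition
        forever #(BIT_PERIOD / 2) recovered_clk = ~recovered_clk;
      end
    endmodule

The idea is that if this recovered clock samples the data correctly for the length of the run, the design is operating within its frequency parameters, with no CDR algorithm anywhere in the testbench.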


AM: Ah! I see your point. In this case, the ‘what’ we’re supposed to verify as verification engineers is that the data is recoverable within the spec, as opposed to reproducing the clock-data recovery scheme itself. So from your experience, what is the most common error you see where someone has lost track of the mindset of a verification engineer, or in other words, is not thinking the way a verification engineer should?


JM: So just to give you and our readers a bit of background, I came from the design world. I was an ASIC designer for a number of years before doing verification. And something I see a lot, and something I’ve even been guilty of, is snooping on signals from within the design under test. Snooping on design signals for reference can definitely speed things up, because your verification environment is instantly in perfect sync with the design. You can figure out what state it’s in, you can figure out what it’s doing, and it will remain synchronized with it.

However, you run the risk of missing some very important bugs, the most important ones being those associated with a design’s clock. If you’re directly using a design’s internal clock as a reference, you’re completely blind to any clocking issues. If the design’s clocking has issues, goes out of spec, or has glitches, a verification component that snoops the DUT’s clock signal instead of independently verifying it will erroneously follow that clock and take in the data along with it. This is especially true in the case of RTL simulations.


AM: Hence the danger would be that the verification component will fail at its fundamental task and never flag the erroneous clock for running at the wrong rate or having glitches.


JM: That’s right. And so a guideline for this is that if you are forced to use internal DUT signals as a reference, you must apply independent checks on those signals. A signal cannot be assumed to be correct, so it would be a mistake to rely on it blindly.
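
For instance (a minimal sketch; the expected period and tolerance values are hypothetical), a snooped clock can be independently checked for period violations before anything else trusts it:

    // Hypothetical sketch: independently check the period of a snooped
    // DUT clock rather than trusting it as a reference.
    module clk_period_checker #(parameter realtime EXP_PERIOD = 10ns,
                                parameter realtime TOLERANCE  = 0.1ns)
                               (input logic clk);
      realtime last_edge = -1;

      always @(posedge clk) begin
        if (last_edge >= 0) begin
          automatic realtime period = $realtime - last_edge;
          if (period < EXP_PERIOD - TOLERANCE || period > EXP_PERIOD + TOLERANCE)
            $error("Clock period %0t is outside spec (expected %0t)",
                   period, EXP_PERIOD);
        end
        last_edge = $realtime;
      end
    endmodule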


AM: Backing up a little in our conversation, I’m glad you mentioned you came from a design background. I don’t want to take too much credit here, but I think I was the one who convinced you to make the jump over from design into verification.


JM: That’s right. You were.


AM: I think a lot of our readers can relate to this. We’re continuing to see a need for verification engineers, so to fill this demand we’re seeing some ASIC/FPGA engineers who were originally on the design side make the jump into verification. Those who make this jump have to deal with a lot of new things that are much more software-engineering oriented. They can find themselves in a world that is quite foreign to them in many ways.

For example, when someone is making the switch from design to verification, their day-to-day coding tasks will no longer deal with modules or procedural statements, but rather with objects and inheritance structures. Furthermore, they may find that things change at the methodology level as well. They may have to get familiar with things like continuous integration, or even agile project management methodologies. I cannot emphasize enough that this transition from design to verification is not easy, and the challenge should not be underestimated.

What are your thoughts in regards to someone making that transition?


JM: Well, one big thing that always comes up is revision control. Let me explain: revision control is something that’s been around for a while, and the tools have gotten better with time. However, there are certain aspects of revision control that are very under-appreciated by a lot of people in the design and verification community. One of these is branching and merging, which, granted, was not easy to use for a time. I can recall pulling my hair out with some tools, because the merge wouldn’t work out the way you wanted it to, and so you’d be reluctant to create branches. However, some of the more modern revision control tools we use today, Git for example, make branching and merging operations the most natural thing you can do. This creates many opportunities to organize and co-develop your work in a cleaner, more seamless way.

Another thing is that because verification languages have become object-oriented, as you alluded to earlier, there are some aspects of a verification environment that are going to be very different from what you’d find in a Verilog module or VHDL entity. For example, you have the ability to employ polymorphism by using virtual functions. Now, I didn’t always know what use virtual functions had. I can recall that at the start of my verification career I couldn’t have told you what a virtual function did, whereas today I consider them an indispensable part of my toolbox.


AM: Well, I’m happy to see you’ve made the transition and are adjusting quite well to the verification world. I can jokingly say you’re a “success story”, but of course this story is not exclusive to you. Quite frankly, anybody who is keen on new challenges and wants to learn new things can replicate your feat. I think we can both agree that taking the time to understand the verification mindset would be a good place to start.

(End of Part 1)
