
Thoughts on Verification: An Interview with JL Gray (part 3 of 3)


In part 3, JL and Alex discuss methodologies outside of, but complementary to, HVL technologies, such as continuous integration. Typical mistakes and growing pains of adopting HVL methodologies are also reviewed. Finally, JL discusses his verification blog, along with the various discussions and debates it has generated.

Alex Melikian: Sometimes the verification work that we do isn’t just about coding and writing requirements, test benches and test cases; it usually involves a lot more than that.  For example, setting up a compute farm. Very often, verification engineers are involved with putting compute farms together, or at least giving feedback on how they should be assembled. The setup of adequate revision control or other EDA-related elements are other examples.  Is there any particular challenge related to verification work that a client didn’t expect you to take on, but recognized the importance of once completed?

JL Gray: Well, one of the things in that area is the introduction of continuous integration techniques. Backing up, I think the major problems that exist on projects, regardless of whether they are using constrained random verification or not, are the project planning and the methodologies employed to carry the planning out. For example, the division of labor between design and verification engineers is frequently sub-optimal. And engineers often make decisions for the purpose of guarding their turf that do not support the success of the project.  Another issue would be design engineers who make decisions without taking into account the impacts or consequences on verification. These are, I think, the biggest problems that are faced – nothing to do with whether you use constrained random test benches or not.


Continuous integration is a technique that helps teams improve collaboration. There’s a paper on the Verilab website about this that Gordon McGregor and I wrote. Basically, the idea is that every time a check-in occurs, you run a regression, and if the regression passes then you know, incrementally, that the head of your revision control system is always working.  This is a concept that a lot of people don’t understand. When we first presented this, folks couldn’t grasp the approach; they believed people should have been manually qualifying their code before they made their check-ins.

But, of course, they weren’t always qualifying their check-ins before they made them, or things would get stale, or something would change and there was an unexpected dependency on something else, or maybe somebody forgot to check in files.  So a huge benefit on the recent project I worked on was the introduction of this idea of continuous integration, which was again somewhat unrelated and orthogonal to a lot of what was going on.  But by suddenly having a major check of the quality of submitted work done automatically, we were able to really speed up the development process.
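The check-in-triggered regression flow JL describes can be sketched in a few lines. This is only an illustrative model, not anything from the Verilab paper or a real CI tool: the function name, the revision ids, and the pass/fail callback are all hypothetical stand-ins for a revision control hook and a regression launcher.

```python
# Minimal sketch of the continuous-integration idea: every check-in
# triggers a regression run, and the newest revision that passed becomes
# the known-good head. All names here are illustrative assumptions.

def continuous_integration(checkins, run_regression):
    """checkins: iterable of revision ids, oldest first.
    run_regression: callable(rev) -> bool, True if the regression passed.
    Returns the last known-good revision, or None if nothing ever passed."""
    last_good = None
    for rev in checkins:                  # each check-in kicks off a regression
        if run_regression(rev):
            last_good = rev               # head of revision control is working
        else:
            # A failing check-in is caught immediately, while the change
            # that broke things is still fresh in its author's mind.
            print(f"regression failed at {rev}; last good head is {last_good}")
    return last_good

if __name__ == "__main__":
    # Pretend revision "r3" introduced a breaking change.
    passing = lambda rev: rev != "r3"
    print(continuous_integration(["r1", "r2", "r3", "r4"], passing))  # → r4
```

In practice the callback would invoke the simulator on a regression test list, and the loop would be driven by a post-commit hook or a polling CI server rather than an in-memory list.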

AM: Once again, another concept that we would associate more with software development, like a build system, but applied to the hardware world.  I read that paper. That was a really cool one to read, actually.

JL: It’s bound to be a classic.

AM: You mentioned that you have done some coaching, and we talked a little bit about some common mistakes engineers make.  So what are some of the common mistakes that the engineers you’ve consulted with on verification make?  Are their mistakes more technical, or more philosophical in nature?

JL: That’s a huge, open-ended question.  The biggest mistakes, as I mentioned, are related to poor planning and communication between engineers on a project.  Other, more technical things are, for example, people creating test bench components that aren’t well suited for moving up and down the hierarchy.  So if you have checkers in your sub-module level environment, people will often forget to code them in a manner that allows them to be reused in the full-chip environment.  They may not even think that they would ever use the checkers somewhere else – it’s a massive, massive oversight.  I’ve seen this a lot.

You’d think in 2012 people would understand this, but there are still a lot of folks coming into verification who are just realizing that they need to do more, but don’t yet understand what that means.  So again, that’s an example. Another is writing checkers that don’t actually check anything.  They don’t even bother to write functional coverage to make sure the checkers are ever triggered, or they write functional coverage models that are meaningless. That’s related to poor planning – not reviewing, not going in and visually inspecting your source code for obvious errors, or even looking at waveforms just to make sure the test bench they are running is doing what it is supposed to do. I think these are some mistakes made by people who haven’t been burned yet. I’ve definitely seen a lot of these types of things.

AM: I can see what you’re talking about. So it’s not really the technical side that is lacking, but rather the approach taken. I guess to make an analogy, it’s like coaching those who are used to driving a Ford Pinto, and all of a sudden they’re given a Porsche 911 to be faster, but they’re only using this powerful machine in first gear, or the wrong gear, as opposed to taking advantage of the full functionality and power of it.

JL: I think that’s a separate thing – yes, you’ve got this environment and you don’t even realize that it’s got more power.  But some of what I was referring to is that if you write a checker that can’t be reused, I don’t know if that is a matter of not recognizing the power.  You knew you could write a checker; you just didn’t understand the implications of the way you wrote it.  Six months from now you’ll understand. You’ll be like, “Oh man, that’s a bummer.  I wish I had coded it differently, because now I realize I can’t use it.”  So that’s a technical mistake that can be prevented if you work with somebody who has done this before and has the experience to explain where the pitfalls are.

AM: Stepping a bit away from verification talk and a little bit more about your other work, you started a blog not too long ago about verification.  What was your motivation behind that?

JL: Well, I will say it actually was quite a while ago now, unfortunately.  It was back in 2005, so it’s been going on for almost seven years. The motivation was that at the time I had just joined Verilab – it had been less than a year.  I had a lot of opinions on verification, but wasn’t sure if anybody agreed with me on them.  I wanted to have a conversation with folks to see if I was the only one who held these opinions.  So the blog started out as a nice avenue for me to share my thoughts and get feedback as to whether or not people thought what I was saying made sense.  And it’s been a great opportunity to interact with folks pretty much all over the world.

I think I’ve run into folks from many different countries who have read the blog and had interesting things to say about stuff that I’ve written.  So it’s been quite an eye opener.

AM: Any epic debates?

JL: Oh, yeah, there’ve been some – I’m trying to think.  There was one – let’s see, Karen Bartleson and I had a debate about the VMM back in 2008.  It was quite a huge debate, actually, spurred on by some comments that she made at DVCon, and the story of this debate was made into a book by Ron Ploof, whom Karen and I both know.  He got stats from both of us on how our blog traffic went up while we were having these debates.  There were folks getting involved from different companies.  More recently – I think last summer – I wrote a post called “UVM and the Death of SystemVerilog,” describing how the UVM, as good as it is, highlights the deficiencies of SystemVerilog, and how folks should eventually realize that SystemVerilog needs to change or go away. That view generated quite a lot of feedback publicly, but also privately from folks who didn’t care to be associated with the comments they wanted to share.

AM: I imagine whenever you use the word ‘death’ with a marketable product you’re going to get some responses.  I guess that’s part of being in the business: if you’re going to move and shake things around, well, expect some people to move and shake you back a bit.

JL: Yes, that pretty much happens.  Marketing folks will often get grumpy about things that I write, but what can I do if it’s true?

AM: Well, I think we’ve covered pretty much everything we wanted to on this inaugural conversation. Thanks JL for your time.

JL: You’re welcome.
