Thoughts on Verification: An Interview with JL Gray (part 2 of 3)

In part 2 of 3 of this conversation, JL and Alex discuss the risks and rewards involved in adopting an HVL workflow, as well as the diverging perspectives of management and engineers within a company. They also discuss the state of HVL technology today and what might evolve from it next. Part 1 can be viewed here.

Alex Melikian: You bring up another hidden risk of not doing verification: negative cases that go unchecked in the design. This comes back once again to our initial point that, unfortunately, it sometimes takes a big and costly failure for some firms to realize they do need verification, rather than taking a proactive stance and adopting it before that failure happens.

JL Gray: There is an interesting tradeoff there, though. I think there’s a big disconnect between the way staff engineers view projects and the way senior management views them. I often find a miscommunication occurs there. The staff engineers are looking up and saying, “If we would only spend some money on this, it would save us so much pain,” or, “We wouldn’t have to pay for so many re-spins of a chip.” And what do they think of senior management? They’re fools; they don’t know what they’re doing.

But senior management may be looking down and saying, “Gosh, what these engineers don’t understand is that we don’t have enough money in the bank this month to buy that tool. So I don’t actually care if it takes six more months to get the project up to par, because I just don’t have any money this month to pay the bills.” Or management says, “Yeah, we tried this thing five times before,” or, “We’ve tried different techniques for improving it, and what always happens is that we find a flaw in the fab,” or, “Analog is the only problematic part.” In their minds, the digital part is somehow almost never the problem, or there are manufacturing issues, or some other issue that is far more important than verification.

And I do wish these two sides would talk more, because I think it would save a lot of confusion and a lot of headaches if each side really understood the motivations of the other.

AM: Right, clearing up the confusion between teams and management goes a long way. It’s difficult to break one’s preconceptions, and there are some who are very, very skeptical about the benefits of advanced verification. That’s natural in our business. I think the closest I ever came to decisively proving its value to a skeptic was on one particular project where the deadline suddenly moved up considerably in order to present at a trade show. Everyone was required to put in a lot of extra hours. I had to do something like 20 extra hours a week, which is a big effort, but nonetheless reasonable.

We got our designs completed and confidently verified before the deadline. The skeptical team member, who decided not to use the verification methodologies I had put in place, had to work almost 40 additional hours a week to meet his deadline. For nearly two months he had to scrap his weekends and essentially put his whole life on hold. All the while, his extra hours meant added costs, and there was no guarantee he was going to make it; or at least, I should say, less of a guarantee compared to our approach. Even after all this, I don’t believe he was convinced of the benefits of verification, but it didn’t matter, because the project manager, who was well aware of the mounting costs, was convinced.

JL: So what were you doing on that project that was different from what the other guy was doing?

AM: I was using a SystemVerilog test bench and he wasn’t. He was essentially doing most of his verification with a plain old, entirely VHDL-based environment: running directed test cases that looped back some traffic, and then spending some time, a lot of time, in the lab, debugging with a good old-fashioned scope, a logic analyzer, and late-evening office-delivered pizza.

JL: I’d be curious to hear your take: a lot of people I talk to get caught up on the term “SystemVerilog test bench” versus, say, Verilog or VHDL. What, in your mind, are the actual characteristics of a SystemVerilog test bench that make it more useful than one that is not? The language is one thing, but I think there are some other characteristics. It’s not just the language, right?

AM: That’s probably a good point to go over for readers not familiar with functional coverage-driven verification and modern hardware verification languages (HVLs). I think one of the standout features of a modern HVL test bench is the ability to apply constrainable randomization to test parameters. This is not the same thing as a completely randomized parameter: it means you can control the limits and the possibility space of how random a parameter can be. Another feature is functional coverage, which is not the same thing as code coverage. Functional coverage is a way of instrumenting and recording the generated parameter values in your test bench. This is needed because, once a parameter is generated randomly, how do you know whether you’ve actually generated all the relevant values? You need this capturing capability, the other side of the generation, to get full closure on what is happening in your test bench.
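As a rough illustration of the two mechanics Alex describes, here is a sketch in plain Python rather than an actual HVL (SystemVerilog and e provide constraints and coverage as built-in language features; the packet-length constraint and the coverage bins below are invented for illustration):

```python
import random

# Constrainable randomization: we shape the possibility space of a
# parameter instead of accepting any value at all.
def random_packet_length():
    # Hypothetical constraint: length in [16, 64] and a multiple of 4.
    return random.choice([n for n in range(16, 65) if n % 4 == 0])

# Functional coverage: record which interesting value ranges were
# actually generated, so we know when the stimulus has closure.
coverage_bins = {"small": False, "medium": False, "large": False}

def sample_coverage(length):
    if length <= 24:
        coverage_bins["small"] = True
    elif length <= 48:
        coverage_bins["medium"] = True
    else:
        coverage_bins["large"] = True

random.seed(0)  # fixed seed so a failing run can be reproduced
for _ in range(100):
    sample_coverage(random_packet_length())

# An unhit bin tells you exactly which cases your random generation
# never exercised, despite all the stimulus you threw at the design.
uncovered = [name for name, hit in coverage_bins.items() if not hit]
```

The key point is the pairing: constrained generation on one side, coverage sampling on the other, so randomness never silently skips a case you care about.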

I also think a modern HVL test bench must have elements of high-level programming capability, meaning the use of object-oriented programming (OOP) or aspect-oriented programming (AOP), concepts we associate more with software engineering than with hardware. This means you’re no longer dealing with 1’s and 0’s, but rather with objects, models, and transactions. Things like this make it much easier to build a test bench that is reusable, quick to develop, and very powerful.
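A minimal sketch, again in Python rather than an HVL, of what dealing in transactions rather than 1’s and 0’s looks like (the `Transaction` class and its fields are hypothetical, invented for illustration):

```python
# A transaction models one abstract unit of stimulus; the test bench
# drives these objects instead of wiggling individual signal bits.
class Transaction:
    def __init__(self, addr, data):
        self.addr = addr
        self.data = data

    def describe(self):
        return f"write addr=0x{self.addr:x} data=0x{self.data:x}"

# Reuse through extension: an error-injecting variant overrides only
# what differs, leaving the base transaction and test bench untouched.
class CorruptedTransaction(Transaction):
    def describe(self):
        return super().describe() + " (bad parity)"

t = CorruptedTransaction(0x1000, 0xBEEF)
```

This is the reuse argument in miniature: a new corner case becomes a small subclass, not a rewrite of the stimulus code.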

JL: It’s interesting, and I’m probably already on record saying this, but it amazes me that hardware engineers think OOP techniques are state of the art. For example, Specman uses constraint programming and AOP, and there are lots of existing scripting languages with some of these interesting characteristics. I am amazed that we’ve not managed to move much beyond OOP, which was probably state of the art a decade or two ago; we’re still living in the distant past.

AM: This actually makes a nice segue to my next subject. It seems functional coverage and the OOP/AOP paradigms in many of the HVLs we’ve seen have revolutionized ASIC/FPGA development in the last ten years …

JL: Actually, it was 20 years ago that some of the Specman stuff was initially developed. I believe in ’92 or ’93 were the very early days of it.

AM: Hmm, … you’re right.

JL: I mean, it took some time for it to become mainstream. I think Vera came out in ’96 or ’98? So it’s actually been around for a while; it wasn’t just invented. By 2000, Vera and e were both starting to be used more widely. That was 12 years ago, so 20 years ago isn’t far off.

AM: No one has a crystal ball, but if you had one or if you would venture to guess, what do you think are the technologies that will revolutionize verification in the next ten years?

JL: That’s a very good question. I think people are going to find that the lines they’re being fed by some of the current industry thought leaders, about the best techniques and the tools they should buy, are wrong. I think they’re going to find there are better ways to do things. In the past, companies did their own EDA; they started to outsource their EDA needs in the late ’80s and early ’90s.

I think for things to really start improving, companies are going to have to take back some of that activity themselves, but that’s not enough. I think they’re going to have to start working collaboratively across companies, maybe on open source projects, or in some sort of open source way to create tools that aren’t restricted by the politics of the EDA industry. And then of course you have the advent of Amazon, cloud computing, and these massive server farms. I could go out today and, if I had some money, purchase 10,000 or 50,000 servers and use them for the weekend.

However, there’s no cost-effective way to do that right now, and I know some of the EDA vendors are coming out with their own solutions. But unfortunately they’re still charging by the seat, and I just think that’s the wrong model. So people are going to start taking advantage of cloud computing and realize, “Hey, I could have 10,000 servers working on this for the next hour,” instead of, “I’ve got five servers to last me the project.” So I think the scale is going to increase. I think people are going to come up with some interesting new technologies, and probably some new languages or ways to work with SystemVerilog. Constrained random is not the be-all and end-all. I think there’s a lot of work still to do.

And then of course some of the industry pundits will say the future is high-level modeling. People are going to move from doing design work in Verilog, SystemVerilog, and VHDL to C or SystemC or a much higher level, in which case the verification problem will just move up the chain, and we’ll be verifying at a higher level as well.
