Thoughts on Verification: An Interview with JL Gray (part 1)

Verilab is pleased to introduce “Conversations About Verification”, a monthly publication featuring discussions on VLSI verification topics. In this inaugural edition, Verilab consultant Alex Melikian discusses first experiences with, and the adoption of, modern verification technologies with JL Gray, Vice President and General Manager, North America of Verilab.

In part 1, JL and Alex discuss their first experiences with advanced verification languages and methodologies. They also discuss why and how ASIC/FPGA development centers adopt and integrate Hardware Verification Languages (HVLs) and related methodologies into their workflow, as well as some of the reasons others hesitate to make the adoption.

Alex Melikian: Hi JL, thanks for participating in our inaugural conversation. Before we get things started maybe we should introduce ourselves to our readers. Since you’re the guest, go ahead first.

JL Gray: Sure. I’m JL Gray; at Verilab I head up the North American operations. I also work with clients on coaching and consulting in the areas of verification planning, SystemVerilog, UVM, and other related verification activities. I’m also involved with the Accellera Verification IP Technical Subcommittee as a representative for Verilab. People can read more about me through my bio.

AM: OK. I myself have been doing verification for about ten years now. I started out with Specman in the FPGA/ASIC department of a telecom company. The department was investigating the new verification languages that were emerging, and they assigned me to a team doing research with Specman. That was my first taste of verification. It wasn’t as if I had planned to specialize in verification when I graduated from university; the EDA industry had only just started to introduce concepts like functional coverage and the verification paradigm into ASIC/FPGA development. But I found it to be a really powerful concept. It was a good balance between object-oriented software programming and low-level HDL design. I found that really cool, and I stuck with it. I have since moved on to learning and applying SystemC and SystemVerilog as well. What was your first experience with verification?

JL: My first experience with verification was on a project for a 10-gigabit Ethernet NIC, where I was asked, first of all, to evaluate the best ways of doing some sort of code coverage. I actually spent several months evaluating code coverage tools, and then the manager said, “instead of this code-coverage stuff, why don’t we spend our money on Specman.” So he sent me and another one of the engineers to some training. We came back and tried to integrate Specman into our environment, which at the time was a C-based random generator that spit out directed tests.

We integrated some e methodologies into the existing flow, and it took off on subsequent projects. We started to expand how much of the test bench we were building in e. I was with Intel at the time, so there were other groups starting to use e; we collaborated in that area and eventually moved on from there.

AM: What was your first impression when you initially began working with Specman e, your first verification language?

JL: Well, e didn’t really make sense at the time when I started, but nothing would have made sense. The C test bench didn’t make sense, the Verilog test bench didn’t make sense. As a brand new engineer right out of school, it was all a learning experience for me. But it’s been interesting, because I started my verification career using e, which I think is not a normal thing for people to do, and it really affected the way I think about what is normal in a testbench. So I see certain ways of doing things now, and I compare them to what I had done originally. And I think that is really what most people do.

For example, they may have done a Verilog test bench or a C test bench first, or maybe nowadays they may have actually done a SystemVerilog test bench first. I believe this initial experience will impact how they view verification. So for me, starting off with e gave me a different experience compared to what a lot of folks have had.

AM: You meet with hardware design firms and companies all the time, and not all of them have adopted Specman, SystemVerilog, or any other hardware verification language (HVL) in their methodology. What do you think is the biggest impediment firms face in adopting modern verification? Perhaps you can relate this to your experience: what was the moment or situation where your boss asked you to experiment with verification?

JL: Well, it’s interesting because engineers frequently fall back on what they’ve done before. For example, the manager of a team may have used a Verilog test bench, or scripts, or visual inspection and didn’t need to have such complex tools as what we have today. They didn’t need to spend so much effort verifying chips because the same level of complexity wasn’t there. So chips are growing in complexity, but senior engineers and managers are still stuck on what worked for them in the past. They become naturally skeptical of new things.

They’re skeptical when people promise, “hey, we’ll get much better results for you if we do X, Y, Z.” Imagine a manager’s reaction to an engineer who proposes to hire more verification engineers and spend money on licenses, fancy new tools, or training, with the caveat that none of this will really pay off on the first project. Rather, without the right kind of help, it will take until the second project before you see the full schedule benefit of the investment. So there’s a lot of skepticism, I think, from a project management perspective when you try to adopt these kinds of new techniques.

Designers are also skeptical. In fact, I would say they are the most skeptical. Again, they’re used to running Verilog or VHDL code, they feel like they have a handle on things and they don’t think that these newfangled approaches are really useful. And it’s hard to prove it until they’ve seen it, but it’s hard for them to see it unless they have a chance to do it. And they don’t want to do it. So it’s definitely a challenge getting folks to adopt some of the new verification techniques. Usually I think you have to experience some sort of failure or some sort of problem where you weren’t able to achieve the results that you were hoping for.

Unfortunately, I think that’s also the case that something bad has to happen in order for something good to come out of it.

AM: Do you think that is the usual case of how verification eventually gets adopted into a company? Do they have to learn the hard way?

JL: Well, it would be nice if they didn’t. I have to say not every company is like this. Some companies I’ve worked with actually brought us in before they had a failure. They just suspect that something could go wrong. This usually comes up with projects that are much more sensitive to repercussions, where the consequences of a potential problem involve a safety or security issue. It seems to be less of a concern for consumer products. So some folks will have a very strong motivation for this without having a failure, but again, I think it sometimes, unfortunately, takes a failure to make progress.

AM: Yeah, I guess it’s part of human nature to fall back on what you know best, and as long as everything is going okay, you’ll just repeat the same methodology. Unfortunately, the caveat with this approach is that when a methodology is no longer effective or has become obsolete, and engineers realize this in the middle of a project, it’s too late to make the change without severely impacting or jeopardizing the project. For some companies, it’s only after such a situation that they finally make the jump to more advanced methodologies.

What are some of the biggest myths or misunderstandings about verification and the work involved that you’ve seen out there in the industry?

JL: Well, a lot of big companies have the desire to take the necessary steps to do this heavy verification. Smaller companies, startups, or companies where chip design is not what they normally do tend to feel that they don’t have the resources to spend on big compute farms or fancy methodologies. So they decide it’s not worth the trouble, or if they do it, they’ll do it half-heartedly. They won’t fully adopt everything. They’ll cut back. They’ll say, “well, I only need a couple of licenses for my simulator because this is not a complex thing that I’m working on.”

Or, “we don’t really need to do all of the functional coverage and constrained random because the types of products we’re developing are just variants of stuff that was developed in the industry maybe ten years ago.” What they don’t realize is that the people who developed those things ten years ago were the ones who came up with constrained random. Some of the early folks who adopted Specman or Vera were companies building network switches or other kinds of networking devices. They needed that extra power in order to adequately verify their designs.
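For readers unfamiliar with the two techniques mentioned here, the following is a minimal, hypothetical SystemVerilog sketch of how constrained random stimulus and functional coverage fit together. All names (eth_pkt, len, kind) are invented for illustration and are not taken from the interview or any particular product.

```systemverilog
// Hypothetical sketch: a constrained-random packet class with a covergroup.
class eth_pkt;
  rand bit [10:0] len;                                   // frame length
  rand enum {UNICAST, MULTICAST, BROADCAST} kind;        // destination type

  // Constrain lengths to legal (non-jumbo) Ethernet frame sizes
  constraint legal_len { len inside {[64:1518]}; }

  // Functional coverage: which interesting cases have we actually hit?
  covergroup cg;
    coverpoint len  { bins small = {[64:128]};
                      bins large = {[1024:1518]}; }
    coverpoint kind;
  endgroup

  function new(); cg = new(); endfunction
endclass

module tb;
  initial begin
    eth_pkt p = new();
    repeat (100) begin
      assert (p.randomize());   // solver picks values satisfying constraints
      p.cg.sample();            // record coverage for this randomized packet
      // ...drive p into the DUT here...
    end
    $display("functional coverage = %0.1f%%", p.cg.get_coverage());
  end
endmodule
```

The point of the pattern is exactly the “extra power” JL mentions: the constraint solver generates many legal-but-varied frames automatically, while the covergroup measures which of the interesting cases the random stimulus has actually exercised.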

So it also comes up in the case of folks working on FPGAs.

They’re thinking, “hey, you know, we’ve got a small team working on these FPGAs, and it’s not such a big deal if there’s a problem because we’ll just fix it, resynthesize the design, and reprogram the FPGA.” Unfortunately, it’s actually very difficult to debug things in the FPGA due to the lack of visibility. Being able to run stuff in a robust test bench is helpful, so people eventually start to see the tradeoffs and try to adopt some of these new techniques.

AM: Right. I believe that’s one of the big reasons verification hasn’t fully penetrated the market yet, especially with FPGA development teams. In my experience, as I’ve been involved more with FPGA than ASIC projects, there’s very often a lot of reluctance to adopt verification strategies because the costs are obvious while the benefits are not. For example, let’s compare this to a factory production line that manufactures widgets. Let’s say there’s an old machine in the production line, and this machine is not working as fast as it could be and is causing a bottleneck in the system. Well, it’s obvious to everyone what the problem is and what the solution should be.

Replace the machine or invest in a newer, more efficient one, and off you go! Your production immediately improves, and the improvement is very easy to measure and notice. Whereas in verification, the cost/benefit analysis is not that obvious. As you mentioned, there are some initial costs of adopting verification, which are easily measurable, but the benefits, and the risks of not doing verification, are not easily measurable. For example, today’s FPGAs keep getting bigger in logic density, and the demands of using that available density and complexity are increasing. One of the non-obvious risks of debugging a large, complex FPGA in the lab is that not only is it increasingly difficult to debug, but once you’ve found your bug, the cycles to resynthesize and check the applied fix are also getting longer, adding to the overall cost of the project.

It could take a couple of hours out of your day just to go back into the lab with a new FPGA load. Of course, there’s no guarantee, as there could well be another bug somewhere else, or worse, something that once worked is now broken due to the change. Once again, you’ll have to repeat this long, painstaking process of producing a fix, waiting for synthesis, and returning to the lab.

JL: People like that, Alex, because then they have a chance to read the web and catch up on the news and Facebook, so I think it’s a strong motivation not to do these different techniques because then you wouldn’t have enough time for your daily reading.

AM: I’m not so sure that their managers or the shareholders of their companies would like that, though.

JL: You never know, the shareholders are probably checking Facebook too. One interesting thing related to that: you make a good point about the failures. The results of not doing something are not obvious. Engineers may look enviously at salespeople and wonder why these guys are so highly compensated when they’re not really doing that much work. The interesting thing is that you can directly map a salesperson’s work to gains or losses in company revenue. A salesperson may make a sale now, or three weeks earlier, or three weeks later. The impact of that is immediately visible.

Just like you said with the factory: if the machine is not working, it’s immediately obvious where you can save time or money. When a salesperson makes a sale, it’s immediately obvious he’s just brought half a million dollars into the company. What’s not obvious is that two years in the future a bug will be found that causes a ten million dollar recall, or a hundred million dollar recall. Well, you didn’t know that was going to happen, and how could you show that if you’d done the verification effort, the bug would not have escaped? It’s very difficult to prove that. You can’t prove a negative. It’s like saying: “Well, we did a good job verifying the chip, because nothing happened.”

What does that even mean? It’s hard to convince somebody with such a statement. Verification is kind of like life insurance, right? If you’re lucky, you never need to use it. You’re doing all of this verification work, and hopefully nothing bad will happen. You’re being safe. You’re doing things to make yourself safe. You hire a lawyer to help you prepare a will instead of doing it yourself. You don’t know that it’s worth it; you’ll only find out once it’s too late.
