
Thoughts on Verification: Verification Languages of Today and Tomorrow (Part 2 of 3)

In Part 2, Alex Melikian and Jonathan Bromley discuss the upcoming additions to the SystemVerilog LRM, as well as their approaches to handling new elements or constructs of a language. Part 1 can be viewed here.

Alex Melikian: You’ve been following the developments of SystemVerilog 2012 very closely. Can you tell us about some of the new language additions that we should be looking out for in this upcoming version of SystemVerilog?

Jonathan Bromley: Yes. I’ve been involved in that more than any normal, reasonable person should expect to be. I’ve been serving as a member of the IEEE committee that works on the testbench features of SystemVerilog for the past seven years. I think there’s some very exciting stuff coming up in SystemVerilog 2012. It was deliberately set up as a relatively fast-track project. Normally, the revision cycle for IEEE standards is five years, but SystemVerilog 2012 comes only two and a half years after the 2009 standard. So it’s really fast-tracked. And it was very carefully focused on a small number of new features. So there’s not a huge list of big-ticket items. But there are a couple of things in the verification world that I think are really important.

The first one is a big extension to the flexibility of the coverage definition system. You can now define your coverpoints and your cross coverpoints in a much more sophisticated, much more algorithmic way than was possible before. There’s a big bunch of stuff that came out there, which looks really exciting. And I get the impression that the vendors are going to rally behind these new items very quickly.
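As one concrete example of this more algorithmic style, IEEE 1800-2012 added a `with` clause that filters coverpoint bins by an expression rather than by explicit value lists. A minimal sketch (the signal and bin names here are illustrative, not from the interview):

```systemverilog
module cov_example (input logic clk, input logic [7:0] addr);

  covergroup cg @(posedge clk);
    cp_addr: coverpoint addr {
      // 'with' clause (new in IEEE 1800-2012): keep only the values
      // in the range that satisfy the expression -- here, addresses
      // aligned to a 4-byte boundary. One bin is created per value.
      bins aligned[] = {[0:255]} with (item % 4 == 0);
    }
  endgroup

  cg cg_inst = new();

endmodule
```

Before 2012, achieving the same effect typically meant enumerating the qualifying values by hand or generating the covergroup source with a script.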

There’s also an interesting new feature in the object oriented programming world that System Verilog calls interface classes. People who come from a Java background might know it as interfaces. People who come from a C++ background might call it multiple inheritance, but we’ve got to be a bit careful there because it isn’t multiple inheritance; it’s interface inheritance, which is slightly different. That’s the other big ticket item, and I guess we haven’t got the time to talk about it in detail here. But it’s certainly well worth following up, and I think it may well have a big impact on the major verification methodologies over the next handful of years. But I can’t comment on when vendors will implement it.
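To make the Java analogy concrete, an interface class declares a contract of pure virtual methods with no implementation and no data members, and an ordinary class can implement several of them while still extending only one base class. A short sketch of the 1800-2012 syntax (class and method names are invented for illustration):

```systemverilog
// Interface class (IEEE 1800-2012): a pure contract -- prototypes only.
interface class comparable;
  pure virtual function int compare_to(comparable other);
endclass

// A class 'implements' the interface class and must provide virtual
// implementations of every pure virtual method. It could implement
// several interface classes at once -- that is interface inheritance,
// not C++-style multiple inheritance of implementation.
class packet implements comparable;
  int len;

  virtual function int compare_to(comparable other);
    packet p;
    if ($cast(p, other))        // dynamic downcast to the concrete type
      return len - p.len;       // negative/zero/positive ordering result
    return 0;                   // not a packet: treat as equal here
  endfunction
endclass
```

Code can then be written against `comparable` handles without knowing the concrete class, which is the pattern Java programmers will recognize from `interface`.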

AM: But you do feel it’s inevitable that these changes you’re talking about will eventually be incorporated into the SystemVerilog tools?

JB: Absolutely right. It’s inevitable. I guess in principle, if it’s in the standards document, then powerful users can lobby and make reasonable demands for the tools to support it. It has to be said that there’s a handful of small things in the SystemVerilog standard that have been there since the very beginning but have still not been implemented by any vendor because they’re actually quite badly designed features. And the vendors have correctly put their foot down and said, “we are not going to do that.” So the fact that a feature is in the standard doesn’t guarantee its implementation. But I think in the case of the new SystemVerilog-2012 features, the big vendors are heavily enough represented on the standards committee that everybody understands that these can and will be done.

AM: So what are some of the ‘smell’ tests, or litmus tests if you prefer, that you apply to new features that come out in a language to see if they are properly supported and have practical value?

JB: I guess it goes a bit beyond ‘are they supported?’ When you’re looking at these things and you see a new feature, the first thing you ask yourself is “can I imagine myself using that?” And we’re geeks, right? We enjoy doing this stuff. So the answer is usually “yes, hey, I’m going to try that. It’s cool. It’s a new feature. Let’s try it.” But the reality is that you then start thinking… you start to try to work out: what would I do with this feature? How would this feature make my verification activities better and more productive? And sometimes, it doesn’t make a big difference.

Interestingly, I initially was very excited about the interface inheritance feature that’s coming into SystemVerilog soon. It seemed like just the kind of thing that I really wanted to be able to use. Thinking about it more carefully and trying to map it onto what I expect to be doing in verification over the next few years, it’s not so clear to me now that interface inheritance really helps solve the problems that I’m going to have to solve. I guess we’ll have to wait and see.

But then, of course, it comes back to the other thing you were asking. Is a feature ready for prime time use yet? Dare I try using it myself? Well, to a certain extent, I’m in a privileged position, right? I work for a small consulting company. We can try all kinds of stuff, and that is exactly what we should be doing: to experiment with new features and find out when they’re ready for use. Great. But then, we also have clients. And those clients are likely to be a little bit more conservative. They have real work to do. They have products to get out the door. And they can’t risk that by sticking their necks on the block trying out new features that might not work.

AM: So besides being sort of the crash test dummies for our clients for the new language features and constructs, what are some of the things that we do to convince them that these new elements are ready for use and can be beneficial for them?

JB: Well, I think we can point to our experience, to places where we’ve been able to show a client: hey, look, you ought to be using this language feature because it’s just going to make your life better, and here’s how. So that’s absolutely something we should be doing and can do. But of course, you can’t do that with confidence until - we come back to your previous question - until we know that the vendors have implemented the feature reliably, and I can write a piece of code that uses that feature, and it works reliably across all the major vendor simulators. There’s no point in promoting a feature that works in simulator X but doesn’t work in simulator Y, because we’ve got to be ready to work with clients using any of the major verification tools.

So there’s a series of maturation processes you have to go through. And maybe, you have to use your own skill and judgment about the individual client, too. Some clients are very gung-ho about these things and really want to try stuff out, and are ready to take a risk, and have a fallback position if the risk doesn’t pay out. But other clients are much more conservative and need to be absolutely sure.

AM: That sounds familiar. We’ve recently seen some interesting debate about how programmers can be ‘conservative’ or ‘liberal’ in their approach. It’s understandable that some stick to the devil that they know. There are good reasons for that too, as change always involves some level of risk. As we know and accept, some are more risk averse than others.

JB: Right. And sometimes, changes are just a simple point feature. For example, in SystemVerilog 2012, there’s a new constraint allowing you to specify that a bunch of variables must all take distinct values, with no duplicates among them. It’s a useful new randomization constraint. And that’s a snap, a no brainer. As soon as it’s available for use, we’ll use it.
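The feature being described is the `unique` constraint added in IEEE 1800-2012. A minimal sketch (the class and field names are illustrative):

```systemverilog
class addr_set;
  rand bit [7:0] a, b, c;

  // 'unique' constraint (IEEE 1800-2012): on every randomize() call,
  // a, b and c must all be assigned different values.
  constraint no_dups { unique {a, b, c}; }
endclass
```

Before this, the same intent needed a set of pairwise constraints (`a != b; a != c; b != c;`), which scales badly as the number of variables grows, so this is exactly the kind of “snap, no brainer” point feature Jonathan describes.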

But there are other bigger, more structural things where you really have to think about whether the people that are going to be using this are really comfortable with what they’re doing and are actually productive with it, because throwing a new piece of technology of any kind, even if it’s as simple as a new language feature, throwing that at people who aren’t ready to use it is just counterproductive. It doesn’t help them get their work done. They may well be more productive using techniques they’ve used in the past.

AM: That’s a good point, something that is new doesn’t necessarily mean it suits your needs better.

JB: Absolutely. New doesn’t necessarily mean better. And even if it really is better in some absolute sense, it’s not necessarily better for any specific person. They have to be ready for it, enthusiastic about it, and prepared for it by training or reading or experimentation or whatever is their chosen way of ramping up.

AM: I agree. I believe engineering teams should allow themselves time and accept the potential, maybe even likely outcome of failure whenever they are experimenting with something new. One of my pet peeves with the way engineering teams are sometimes managed is that they are given absolutely no room for experimentation. I think this is a tactical mistake in management. Experimentation should be a separate process, done independently from the processes of delivering a bug-free product. As you mentioned before, something new always involves risk. So, unless you try and explore, you never know. I believe the right attitude is to allocate time and resources for the team to experiment with new things, but without any expectations for the outcome. Even if the conclusion of the experiment turns out to be a complete bust, it’s never really a complete waste of time or resources, because the team gains the factual knowledge and experience of dealing with what is new, as well as the chance to properly evaluate it.

One Response to “Thoughts on Verification: Verification Languages of Today and Tomorrow (Part 2 of 3)”

  1. Chris Higgs Says:

    I think Jonathan has alluded to some of the problems with the standardisation process that are worth highlighting. Even if a new standard can be “fast-tracked” within 3 years, there’s still a lag before vendors implement and even then it’s hit-and-miss whether their implementation will have intricacies that require you to write vendor-aware code anyway. Just look at VHDL 2008! The length of time from conception to usability really hampers progress and stifles innovation.

    Compared to the software industry we seem to be stuck in a bygone age and our productivity suffers as a result.
