
Archive for the ‘Methodology’ Category

IEEE Std.1800-2017 for SystemVerilog: What Changed?

Sunday, February 25th, 2018 by Paul Marriott

Thoughts on the updated standard, by Principal Consultant Jonathan Bromley

A new revision

On Thursday 22nd February 2018, the latest revision of the IEEE standard for the SystemVerilog language was published as IEEE Std.1800-2017 (yeah, I know that’s so last year, but you can’t fight the way these things work). Thanks to the generosity of Accellera www.accellera.com and its member companies, the full standard document – the language reference manual, or LRM – is available free of charge through the GetIEEE program at http://ieeexplore.ieee.org/document/8299595/. You’ll need an IEEE login to download it, but you can get one for free by following the links on that page.

How can I figure out what’s different?

Within hours of publication, colleagues were asking me the reasonable question “what’s new?” In principle you shouldn’t need to ask. The SystemVerilog standards development process is highly transparent. Anyone can read the LRM, and anyone can follow the progress of committee discussion by watching the Mantis bug tracker https://accellera.mantishub.io. In practice, though, I’ve saved you a load of trouble by slogging my way through all the issues that made the cut into 1800-2017 and creating the summary of changes that you’ll find later in this post.

How did we get to where we are today?

SystemVerilog first saw public light of day as an Accellera standard way back in 2003. Vendors rallied behind it, users were enthusiastic, and Accellera wisely passed the standard into the care of the IEEE. The first gold-plated, fully-official IEEE SystemVerilog standard appeared in 2005. There were significant revisions in 2009 and 2012, each adding important new features and functionality to an already large and rich language. Spurred on by the development and rapid adoption of the Universal Verification Methodology, commercial implementations of SystemVerilog became increasingly mature so that everyone could use the language with confidence (and, of course, with caution to avoid a few things that didn’t enjoy perfect support from all the available tools).

So, what happened since 1800-2012?

How can you have a SystemVerilog revision with no new features? Everyone has pet features that they would like to see in SystemVerilog. A ton of them got added in the 2009 and 2012 revisions, including several that I use routinely.

For 2017, though, the remit was clear: no new features. Boy, did we have to bite our tongues in the committee discussions (and no, I’m not allowed to tell you anything about what happened in them). The focus? Corrections, clarifications and improvements of LRM text – great news for anyone who tries to write code that will work reliably on any commercially available tools.

C’mon, spill the beans: How many changes?

As far as I can tell, 108 distinct Mantis issues made the cut and were fully resolved in time for incorporation into 1800-2017 by the editor. This is a good moment for a hat-tip to the tireless Shalom Bresticker, who served as LRM editor for this revision. His encyclopaedic knowledge of SystemVerilog, razor-sharp attention to detail, and diligent curation of the Mantis issue tracker made a huge contribution to the project’s success.

Just the words

Of those 108 issues, 69 were purely editorial or wordsmithing changes, improving LRM text or internal consistency without any technical controversy.

Whoops, we missed a few things in the VPI

There were three changes to the VPI header file vpi_user.h to fix some minor oversights.

Clarifications to provide a solid base for vendors and users

Thirty issues were minor clarifications that are probably of interest only to the most dedicated and obsessive LRM wonk. Stuff like typesetting of the BNF syntax rules in Annex A, a tightening-up of the strict definition of property vacuity, and improvements or corrections of a few code examples. However, some of these clarifications are worth a closer look; the corresponding Mantis items have the details.

But that was just the small stuff. What about the big-ticket items?

Of the 108 changes, just five by my reckoning were significant changes of definition. None of these are new language features. They’re just cleanups of areas of the standard that were too sloppy or just plain wrong. Some of those problem areas had led to incompatible divergence between different vendors’ implementations. Some were wrinkles in the language that were effectively un-implementable or too error-prone, and needed to be ironed out. Here they are, one by one:

  1. Issue 343: modport declarations in generate blocks

    In the early days of SystemVerilog, a few brave engineers tried to use interfaces to Do Interesting Things in RTL design. Yes, you guessed it – I’m guilty, along with a few others. One of the things we thought was cool: representing a set of connections to an interface by using a modport, which could then be instantiated more than once in the interface. So you define a modport to represent – let’s say – a slave device’s connection to a bus fabric. And then you instantiate an array of those modports, so that an array of slaves can connect to them.

    Oh my, were we wrong. Brave, but wrong.

    A modport isn’t a thing you can instantiate.

    If you ever thought that using modports like this was a good idea, then read the Mantis ticket and weep. It isn’t. And you’re not allowed to do it any more. Modports are no longer allowed to appear inside a generate block.

    There are other, better ways to get the same result that will make good material for a future blog post.
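To make the now-illegal pattern concrete, here’s a hedged sketch (interface and signal names are my own invention, not from the ticket): the disallowed modport-in-generate idea is shown commented out, alongside one legal alternative – an array of interface instances, each providing its own modport view.

```systemverilog
// Hypothetical bus fabric; all names here are illustrative.
interface bus_if;
  logic       req, gnt;
  logic [7:0] data;

  // The pattern 1800-2017 (Mantis 343) disallows: a modport declared
  // inside a generate block, in an attempt to "instantiate" it per slave.
  // for (genvar i = 0; i < 4; i++) begin : slaves
  //   modport slave (input req, data, output gnt);  // no longer legal
  // end

  // A single modport declaration is still fine, of course.
  modport slave (input req, data, output gnt);
endinterface

// One legal alternative: an array of interface instances, so each
// slave device connects to the slave modport of its own instance.
module fabric_top;
  bus_if bus[4] ();  // array of interface instances
  // slave_dev s0 (.port(bus[0].slave));  // hypothetical slave hookup
endmodule
```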

  2. Issue 2488: calling virtual methods from a class’s constructor

    Wise programmers know that it’s a bad idea to call a virtual method of any class from the class’s constructor. Different object-oriented languages deal with this situation in different ways, and it’s tricky. Unfortunately it was never properly defined in SystemVerilog – until now. Thanks to that lack of definition, different simulators behaved in different, incompatible ways. The required behaviour is now clearly defined, although it may take a while before tools converge on that behaviour.

    Wise programmers will continue to avoid calling virtual methods from the constructor. The effects are gnarly and far from intuitive.
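A minimal sketch of the hazard (class names are hypothetical; the comments describe the risk, not any one tool’s behaviour):

```systemverilog
class base;
  function new();
    // This is the risky pattern: a virtual call from the constructor.
    // The derived part of the object is not yet fully constructed, so
    // which override runs, and what state it sees, was historically
    // tool-dependent -- exactly the gap Mantis 2488 closes.
    void'(init());
  endfunction
  virtual function int init();
    return 0;
  endfunction
endclass

class derived extends base;
  int seed = 42;
  virtual function int init();
    return seed;  // may observe seed before its initializer has run
  endfunction
endclass
```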

  3. Issues 4939 and 5540: randomization of enums

    These two corrections deal with some interesting issues about randomization of enum variables. The enum literals define a set of possible values. Should that be treated as a constraint on the enum? What happens if the enum is a member of a packed struct? Once again these are questions that weren’t properly answered, and simulators had begun to diverge. There’s now a clear definition of how it all works. Check your favourite simulator to see how it stacks up against the new definition.
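Here’s a small sketch of the kind of question involved (the names are mine, and since the post doesn’t spell out the resolution, the comments only restate the questions rather than the answers):

```systemverilog
// An enum whose literals don't cover the whole 3-bit value space.
typedef enum logic [2:0] {RED = 3'd0, GREEN = 3'd3, BLUE = 3'd5} colour_e;

class pkt;
  rand colour_e c;  // do the three literals act as an implicit constraint,
                    // or can the solver produce any 3-bit value?
endclass

module tb;
  initial begin
    pkt p = new();
    void'(p.randomize());
    $display("c = %s", p.c.name());
  end
endmodule
```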

  4. Issue 5183: syntax of pragma expressions

    This fixes some problems in the definition of “protected envelopes”, SystemVerilog’s mechanism for delivering encrypted source code. It’s likely to be of interest mainly to IP vendors.

  5. Issue 5217: operator overloading removed

    Yes, you read it correctly. The operator overloading feature, which has never been implemented by any tool that I know about, has been removed from the LRM. The feature was never properly defined, and there were too many difficulties with the definition for it to be retained.

    This isn’t the first time a feature has been completely deleted from SystemVerilog, but it’s probably the most significant.

So Long, And Thanks For All The Syntax

Thanks for reading this roundup of the changes in SystemVerilog for the 2017 revision. That revision also marks the end of my own involvement with SystemVerilog standardization, as I stand down from the standardization process.

I’ve been honoured (with a U, me being a Brit – apologies to anyone west of Iceland who doesn’t like the spelling) to serve on SystemVerilog standards working groups for nearly 14 years. I don’t use the word “honour” lightly. It’s been a huge privilege to work alongside the exceptionally smart and dedicated people who, supported by their employers, have given time and expertise to make SystemVerilog better for the whole EDA community – an enormous effort in which I’ve made a few tiny contributions. It’s been an amazing journey, engaging with the development of a programming language that is almost synonymous with digital hardware design and verification. It’s introduced me to an astonishing group of talented, enthusiastic, generous-spirited experts from vendor and user companies. Many of those people – you know who you are – have taught me a huge amount, and I’m deeply grateful.

Any errors in this summary are mine alone; if you find any, please get in touch at jonathan.bromley@verilab.com and I’ll be happy to correct them and acknowledge your contribution.

25 February 2018

DAC 2008 Presentations Now Posted

Wednesday, July 30th, 2008 by JL Gray

Just a quick FYI… both David Robinson and I have posted our DAC presentations on Verification Planning and SystemVerilog Interoperability on the Verilab website. Please check them out and let us know if you have any questions or comments!

Response to Mentor CDC Whitepaper

Saturday, March 22nd, 2008 by Kevin Johnston

There was a recent surge of discussions about asynchronous clock domain crossings and metastability handling in Verilab email: Two people asked Mark Litterick essentially the same question just hours apart, and then a day later Jason Sprott noticed a Mentor CDC Verification paper that referenced Mark’s “Pragmatic Simulation-Based Verification of Clock Domain Crossing Signals and Jitter using SystemVerilog Assertions” paper (Best Paper at DVCon 2006).

One particular statement in the Mentor paper caught my eye: "this model can still generate false errors: the waveforms show that input sequence A, B, C, D, E, F can result in output sequence A, B, E, E, E, where two consecutive inputs, C and D, are skipped". And this statement bothered me: I had spent a long time figuring out Mark’s model some while back, and while it was not at all intuitive to me, I did convince myself that it could never generate a simulated output sequence that was impossible in real hardware. So if the Mentor paper was correct, then I had missed something about Mark’s model, and I’ll be honest, I didn’t relish going back and studying it again.

Obviously I was just going to have to find a mistake in the Mentor paper instead. And to my considerable relief, I did. In fact, I found two:

  1. The schematic (Fig 8, p.9) of Mark’s synchronizer model is missing a small but important feature.
  2. The waveform (Fig 9, p.9) of data signal values input to the model is a somewhat misleading representation of an async input.


DFT Digest: Secure Design-For-Test

Saturday, December 1st, 2007 by JL Gray

Folks interested in DFT would do well to head over to DFT Digest. In his latest post, John Ford ponders the potential for hackers to learn information about the inner workings of a device via a side channel attack using scan chains. The topic reminds me of a presentation I attended at this year’s DATE conference in Nice. The presenter was discussing security issues and described how she wrapped her passport in aluminum foil to prevent would-be hackers from scanning info out of the embedded RFID chip.

Separately, John is compiling a list of DFT related links. If you’ve got some good ones to share, head on over to his DFT Bookcase and/or his DFT Forum and let him know!

Aspect-Oriented Programming with the e Verification Language

Wednesday, August 29th, 2007 by admin

Used well, the Aspect-Oriented (AO) features of the e verification language can save you scarce project time and give you a solution that can absorb change. The trick, of course, is using AO well.


Checks or Functional Coverage (Part II)?

Tuesday, July 24th, 2007 by David Robinson

[NOTE: This entry was written in response to some comments posted to my previous entry "Checks or Functional Coverage?". It was only meant to be a couple of lines, but got a bit out of hand :-) ]

In my previous entry "Checks or Functional Coverage?", I made the point that checkers were more important than functional coverage, and that you had to get the checkers done first. Some of the replies said "I agree, but…" and then went on to say that we needed both. I completely agree; the ideal testbench will have checkers and functional coverage. My message here isn’t "do checks and forget about functional coverage". It’s "do the checks for the important requirements before the functional coverage for the important requirements, but do both of these before starting work on the less important requirements, because you might not end up with the time to do it all". That’s not very snappy though, so let’s go with "do the checks first".


Checks or Functional Coverage?

Monday, July 9th, 2007 by David Robinson

31 July 2007 - Fixed a typo.

Why does no one mention checkers any more? All I ever seem to hear is “functional coverage”, “functional coverage”, and more “functional coverage”. It appears that the entire verification industry is in the midst of a functional coverage love-in that, while it might be good for tool sales, isn’t very good for some verification teams.

The historical reasons for this are clear - EDA vendors had to sell new tools, so they went on a functional coverage marketing campaign. They had nothing really new to add to checking, but they sure had those fancy constraint solvers with functional coverage engines to sell. And slowly but surely, functional coverage took centre stage in everyone’s minds.

But it has gone too far. Functional coverage has become such a central pillar of verification that we’ve encountered teams who can tell us in gory detail what they have covered, but can’t tell us what they have checked. In one case, they hadn’t actually checked anything, although they did have 100% functional coverage (which turned out to be wrong anyway).

A quick look at the SystemVerilog LRM suggests that the checking requirement seems to have escaped the language designers as well. Sure, SVA is wonderful, but assertions only go so far towards checking a design (and not really that far when you think about it). What about support for all those higher level checks? Where are the language constructs for checking behaviour and reporting errors in the testbench part of SV? “if()” and “$display()”? Is that really it? That’s not what I was expecting from a language that has been designed for verification.
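To be fair, those two bare constructs can carry you a surprisingly long way. A minimal sketch of the kind of hand-rolled higher-level check David is talking about (all names hypothetical):

```systemverilog
// A scoreboard-style check built from nothing but a queue, if(), and
// $error() -- the "is that really it?" toolkit.
module tb_check;
  logic [7:0] expected_q[$];  // expected values, pushed by the stimulus side

  // Hypothetical checker task, called whenever the DUT produces an output.
  task automatic check_output(logic [7:0] actual);
    if (expected_q.size() == 0)
      $error("unexpected output 0x%0h", actual);
    else begin
      logic [7:0] exp = expected_q.pop_front();
      if (actual !== exp)
        $error("mismatch: expected 0x%0h, got 0x%0h", exp, actual);
    end
  endtask
endmodule
```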

The functional coverage mantra is so engrained in the verification industry’s psyche that even non-tool vendors are preaching it. Let me quote from Janick Bergeron’s [1] "Writing Testbenches using SystemVerilog":

"Start with functional coverage… Thus, it is important to implement functional coverage models and collect functional coverage measurements right from the start".

He is not alone - I just happened to have his book to hand. Surely it should be something like “Start with checks. Who cares what the functional coverage is if you don’t have any checks? Who cares what the functional coverage says when your implementation metric is only sitting at 10% (e.g. only 10% of testbench code written)?”.

Experienced guys like Janick know that the checks have to be in place, but even the mention of checkers has faded so far into the background that some verification engineers don’t seem to know about them at all.

So what should you really do when writing your testbench?

  • Select your most important verification requirements. Pick the ones you absolutely have to get done
  • Write some stimuli for them
  • Write some checkers and check them
  • Once you get close to finishing the implementation that you have planned, put the functional coverage in
  • Repeat, but for your less important verification requirements
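The ordering in that list can be sketched in code. Here the checker exists and is wired in first; the covergroup is layered on later, sampling the same transactions (names and bins are hypothetical):

```systemverilog
// Checks first: this class is written during the implementation phase.
class alu_checker;
  function void check(int a, int b, int actual_sum);
    if (actual_sum !== a + b)
      $error("ALU add failed: %0d + %0d -> %0d", a, b, actual_sum);
  endfunction
endclass

// Coverage later: added once the planned implementation is nearly done,
// as you move toward the closure phase.
class alu_coverage;
  int a, b;
  covergroup cg;
    coverpoint a { bins small = {[0:15]}; bins big = {[16:255]}; }
    coverpoint b { bins small = {[0:15]}; bins big = {[16:255]}; }
  endgroup
  function new();
    cg = new();
  endfunction
endclass
```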

The point where you start concerning yourself with functional coverage is the point where you start going from the implementation phase (typing in the testbench code) to the closure phase (running tests and debugging). Now sure, I know they overlap quite a lot, but the point is that you get the checks in first because they are important. Functional coverage is a metric - passing checks are a necessity.

Look at it this way - if you had to run a testbench that had checks but no functional coverage, or a testbench that had functional coverage but no checks, which would be better?

Checks - no question about it.

So functional coverage might be the icing on the cake, but it will never be the cake. Checkers are the cake. You have to get the checks in first.

Cheers
David

[1] Ok, he works for Synopsys, but his testbench books are neutral and generic. Buy yourself a copy - you won’t regret it.

Cadence uRM and Verification Planning

Wednesday, April 18th, 2007 by JL Gray

Tuesday afternoon I attended the Cadence/Doulos solutions workshop entitled “Adopting a Plan-to-Closure Methodology across Design Teams and Verification Teams”. The session was presented by Hamilton Carter from Cadence, co-author of the soon-to-be-released book “Metric Driven Design Verification”, and Dave Long from Doulos. Hamilton focused much of his portion of the session on verification planning and functional coverage. I’m sure much of the information from his talk will be covered in his book, but there were a few things that stood out.

Hamilton stressed the importance of planning sessions and the idea of creating a prioritized set of metrics. He also highlighted the value of the verification planning document (vPlan). I asked him later in the presentation if it was possible to put too much emphasis on the vPlan, to the point where it was being held up to the exclusion of other sets of metrics that should be used together with the vPlan to get an accurate picture of where the project is going (think bug count, number of recently changed lines of code, real progress in completing assigned tasks, etc). According to Hamilton, the Cadence methodology doesn’t take these things into account yet, but he did mention that tools such as Enterprise Manager may at some point be integrated with LSF and Clearcase to the point where you could automatically extract such information.

Next up was Dave Long. Dave’s description of uRM was the first time I’ve seen any details about how the methodology has been applied to SystemVerilog, and my first impression is that the results aren’t good (yet). First of all, Incisive does not yet support class-based test environments, only module-based ones. That may change soon, but seems to be a current limitation. Second, sequences, one of the more widely used features of the eRM (the predecessor to uRM focused on the e language), seem basically useless when implemented in SystemVerilog. The implementation relies on creating a driver with one task corresponding to each of what would have originally been an individual “when subtype” of a sequence. The first thing I would do if I was stuck using that feature would be to throw it away and code a more customizable solution (perhaps using factories?). The problems with the feature would be especially severe when dealing with verification IP. Currently in ‘e’ it is possible to override default sequences and add new ones very easily. With this new approach the best possible outcome would be for a user to extend the original driver and hope it was possible to instantiate it in place of the base class in the verification IP.

One other item of note - if I understood correctly there have been no announced improvements to Cadence SystemVerilog support or the uRM. There may be some smaller announcements in the near future, but it doesn’t appear that anything major will be revealed for the next several months at least.
