
Archive for the ‘Methodology’ Category

DAC 2008 Presentations Now Posted

Wednesday, July 30th, 2008 by JL Gray

Just a quick FYI… both David Robinson and I have posted our DAC presentations on Verification Planning and SystemVerilog Interoperability on the Verilab website. Please check them out and let us know if you have any questions or comments!

Response to Mentor CDC Whitepaper

Saturday, March 22nd, 2008 by Kevin Johnston

There was a recent surge of discussion about asynchronous clock domain crossings and metastability handling in Verilab email: two people asked Mark Litterick essentially the same question just hours apart, and then a day later Jason Sprott noticed a Mentor CDC verification paper that referenced Mark's paper "Pragmatic Simulation-Based Verification of Clock Domain Crossing Signals and Jitter using SystemVerilog Assertions" (Best Paper at DVCon 2006).

One particular statement in the Mentor paper caught my eye: "this model can still generate false errors: the waveforms show that input sequence A, B, C, D, E, F can result in output sequence A, B, E, E, E, where two consecutive inputs, C and D, are skipped". And this statement bothered me: I had spent a long time figuring out Mark's model a while back, and while it was not at all intuitive to me, I did convince myself that it could never generate a simulated output sequence that was impossible in real hardware. So if the Mentor paper was correct, then I had missed something about Mark's model, and, I'll be honest, I didn't relish going back and studying it again.

Obviously I was just going to have to find a mistake in the Mentor paper instead. And to my considerable relief, I did. In fact, I found two:

  1. The schematic (Fig 8, p.9) of Mark’s synchronizer model is missing a small but important feature.
  2. The waveform (Fig 9, p.9) of data signal values input to the model is a somewhat misleading representation of an async input.
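For readers who haven't studied the paper, a rough idea of the kind of model being discussed may help. The following is a minimal sketch I wrote for this post, not the model from Mark's paper (and it deliberately glosses over the details the two papers disagree about): a two-stage synchronizer whose first stage resolves randomly whenever the input has just changed, mimicking the ambiguous capture window of a real flip-flop.

// Minimal sketch only - NOT the model from either paper. When the
// input differs from what stage 1 last captured, the new value
// arrived somewhere in the previous cycle, so the capture could
// plausibly resolve to either the old or the new value.
module sync2_jitter_model (
  input  logic clk,   // destination-domain clock
  input  logic d,     // asynchronous data input
  output logic q      // synchronized output
);
  logic stage1;

  always_ff @(posedge clk) begin
    if (d !== stage1)
      stage1 <= ($urandom_range(1) ? d : stage1);  // random resolution
    else
      stage1 <= d;
    q <= stage1;  // second stage is deterministic
  end
endmodule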

(more…)

DFT Digest: Secure Design-For-Test

Saturday, December 1st, 2007 by JL Gray

Folks interested in DFT would do well to head over to DFT Digest. In his latest post, John Ford ponders the potential for hackers to learn about the inner workings of a device via a side-channel attack using scan chains. The topic reminds me of a presentation I attended at this year's DATE conference in Nice. The presenter was discussing security issues and described how she wrapped her passport in aluminum foil to prevent would-be hackers from scanning info out of the embedded RFID chip.

Separately, John is compiling a list of DFT-related links. If you've got some good ones to share, head on over to his DFT Bookcase or his DFT Forum and let him know!

Aspect-Oriented Programming with the e Verification Language

Wednesday, August 29th, 2007 by admin

Used well, the Aspect-Oriented (AO) features of the e verification language can save you scarce project time and give you a solution that can absorb change. The trick, of course, is using AO well.

(more…)

Checks or Functional Coverage? (Part II)

Tuesday, July 24th, 2007 by David Robinson

[NOTE: This entry was written in response to some comments posted to my previous entry "Checks or Functional Coverage?". It was only meant to be a couple of lines, but got a bit out of hand :-) ]

In my previous entry "Checks or Functional Coverage?", I made the point that checkers were more important than functional coverage, and that you had to get the checkers done first. Some of the replies said "I agree, but…" and then went on to say that we needed both. I completely agree; the ideal testbench will have checkers and functional coverage. My message here isn't "do checks and forget about functional coverage". It's "do the checks for the important requirements before the functional coverage for those requirements, and do both before starting work on the less important requirements, because you might not end up with the time to do it all". That's not very snappy though, so let's go with "do the checks first".

(more…)

Checks or Functional Coverage?

Monday, July 9th, 2007 by David Robinson

31 July 2007 - Fixed a typo.

Why does no one mention checkers any more? All I ever seem to hear is “functional coverage”, “functional coverage”, and more “functional coverage”. It appears that the entire verification industry is in the midst of a functional coverage love-in that, while it might be good for tool sales, isn’t very good for some verification teams.

The historical reasons for this are clear - EDA vendors had to sell new tools, so they went on a functional coverage marketing campaign. They had nothing really new to add to checking, but they sure had those fancy constraint solvers with functional coverage engines to sell. And slowly but surely, functional coverage took centre stage in everyone’s minds.

But it has gone too far. Functional coverage has become such a central pillar of verification that we’ve encountered teams who can tell us in gory detail what they have covered, but can’t tell us what they have checked. In one case, they hadn’t actually checked anything, although they did have 100% functional coverage (which turned out to be wrong anyway).

A quick look at the SystemVerilog LRM suggests that the checking requirement escaped the language designers as well. Sure, SVA is wonderful, but assertions only go so far towards checking a design (and not really that far when you think about it). What about support for all those higher-level checks? Where are the language constructs for checking behaviour and reporting errors in the testbench part of SV? “if()” and “$display()”? Is that really it? That’s not what I was expecting from a language designed for verification.
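To make the complaint concrete, here's a small sketch (the signal and class names are hypothetical, invented for this example): SVA copes nicely with the cycle-level temporal rule, but the higher-level data check gets no language support beyond exactly the if()/$display() idiom mentioned above.

module check_styles (input logic clk, req, gnt);
  // The low-level temporal rule: SVA handles this well.
  assert property (@(posedge clk) req |-> ##[1:3] gnt)
    else $display("ERROR: gnt did not follow req within 3 cycles");
endmodule

// The higher-level check: plain procedural code and hand-rolled
// error reporting are all the testbench side of SV gives you.
class packet_checker;
  function void check_payload(int expected, int actual);
    if (actual !== expected)
      $display("ERROR: payload mismatch - expected %0d, got %0d",
               expected, actual);
  endfunction
endclass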

The functional coverage mantra is so engrained in the verification industry’s psyche that even non-tool vendors are preaching it. Let me quote from Janick Bergeron’s [1] "Writing Testbenches using SystemVerilog":

"Start with functional coverage …Thus, it is important to implement functional coverage models and collect functional coverage measurements right from the start".

He is not alone - I just happened to have his book to hand. Surely it should be something like “Start with checks. Who cares what the functional coverage is if you don’t have any checks? Who cares what the functional coverage says when your implementation metric is only sitting at 10% (e.g. only 10% of testbench code written)?”.

Experienced guys like Janick know that the checks have to be in place, but even the mention of checkers has faded so far into the background that some verification engineers don’t seem to know about them at all.

So what should you really do when writing your testbench?

  • Select your most important verification requirements. Pick the ones you absolutely have to get done
  • Write some stimuli for them
  • Write some checkers and check them
  • Once you get close to finishing the implementation that you have planned, put the functional coverage in
  • Repeat, but for your less important verification requirements

The point where you start concerning yourself with functional coverage is the point where you start going from the implementation phase (typing in the testbench code) to the closure phase (running tests and debugging). Now sure, I know they overlap quite a lot, but the point is that you get the checks in first because they are important. Functional coverage is a metric - passing checks are a necessity.
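As a sketch of that ordering (a hypothetical FIFO monitor; the names are mine, not from any particular methodology): the checker exists from the first day of implementation, and the covergroup gets bolted on as you head towards closure.

class fifo_monitor;
  int unsigned expected_q[$];  // filled by the stimulus side
  int unsigned last_read;      // sampled by the covergroup

  // Added late, as implementation winds down: measure what the
  // (already checked) tests have actually exercised.
  covergroup read_cg;
    coverpoint last_read { bins low = {[0:127]}; bins high = {[128:255]}; }
  endgroup

  function new();
    read_cg = new();
  endfunction

  // Written first: a test that checks nothing proves nothing.
  function void check_read(int unsigned actual);
    int unsigned expected = expected_q.pop_front();
    if (actual !== expected)
      $display("ERROR: FIFO read %0d, expected %0d", actual, expected);
    last_read = actual;
    read_cg.sample();
  endfunction
endclass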

Look at it this way - if you had to run a testbench that had checks but no functional coverage, or a testbench that had functional coverage but no checks, which would be better?

Checks - no question about it.

So functional coverage might be the icing on the cake, but it will never be the cake. Checkers are the cake. You have to get the checks in first.

Cheers
David

[1] Ok, he works for Synopsys, but his testbench books are neutral and generic. Buy yourself a copy - you won’t regret it

Cadence uRM and Verification Planning

Wednesday, April 18th, 2007 by JL Gray

Tuesday afternoon I attended the Cadence/Doulos solutions workshop entitled “Adopting a Plan-to-Closure Methodology across Design Teams and Verification Teams”. The session was presented by Hamilton Carter from Cadence, co-author of the soon-to-be-released book “Metric Driven Design Verification”, and Dave Long from Doulos. Hamilton focused much of his portion of the session on verification planning and functional coverage. I’m sure much of the information from his talk will be covered in his book, but a few things stood out.

Hamilton stressed the importance of planning sessions and the idea of creating a prioritized set of metrics. He also highlighted the value of the verification planning document (vPlan). Later in the presentation I asked him whether it was possible to put too much emphasis on the vPlan, to the point where it is held up to the exclusion of other metrics that should be used alongside it to get an accurate picture of where the project is going (think bug counts, number of recently changed lines of code, real progress in completing assigned tasks, etc.). According to Hamilton, the Cadence methodology doesn’t take these things into account yet, but he did mention that tools such as Enterprise Manager may at some point be integrated with LSF and ClearCase so that you could automatically extract such information.

Next up was Dave Long. Dave’s description of uRM was the first time I’ve seen any details about how the methodology has been applied to SystemVerilog, and my first impression is that the results aren’t good (yet). First of all, Incisive does not yet support class-based test environments, only module-based ones. That may change soon, but it is a current limitation. Second, sequences, one of the more widely used features of eRM (the predecessor to uRM, focused on the e language), seem basically useless as implemented in SystemVerilog. The implementation relies on creating a driver with one task for each of what would originally have been an individual “when subtype” of a sequence. The first thing I would do if I were stuck using that feature would be to throw it away and code a more customizable solution (perhaps using factories?). The problems with the feature would be especially severe when dealing with verification IP. Currently in ‘e’ it is possible to override default sequences and add new ones very easily. With this new approach, the best possible outcome would be for a user to extend the original driver and hope it was possible to instantiate it in place of the base class in the verification IP.
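For what it's worth, the factory-style alternative I have in mind would look something like this sketch (all names hypothetical and heavily simplified - this is not uRM code): sequences become polymorphic objects handed out by an overridable registry, so a VIP user can substitute a new sequence without touching the driver at all.

// Simplified sketch of the factory idea - not Cadence's implementation.
virtual class base_seq;
  pure virtual task body();  // each sequence defines its own behaviour
endclass

class reset_seq extends base_seq;
  virtual task body();
    // drive the reset transaction(s) here
  endtask
endclass

// A registry of named sequence prototypes. The driver asks for a
// sequence by name instead of hard-coding one task per subtype;
// users override an entry to swap in their own sequence.
class seq_factory;
  static base_seq registry[string];

  static function void register(string name, base_seq proto);
    registry[name] = proto;
  endfunction

  static function base_seq create(string name);
    return registry.exists(name) ? registry[name] : null;
  endfunction
endclass

A user would then call something like seq_factory::register("reset", my_seq) from the testcase, and the VIP's driver would pick up the replacement automatically.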

One other item of note - if I understood correctly, there have been no announced improvements to Cadence’s SystemVerilog support or to uRM. There may be some smaller announcements in the near future, but it doesn’t appear that anything major will be revealed for at least the next several months.
