
Checks or Functional Coverage?

31 July 2007 - Fixed a typo.

Why does no one mention checkers any more? All I ever seem to hear is “functional coverage”, “functional coverage”, and more “functional coverage”. It appears that the entire verification industry is in the midst of a functional coverage love-in that, while it might be good for tool sales, isn’t very good for some verification teams.

The historical reasons for this are clear - EDA vendors had to sell new tools, so they went on a functional coverage marketing campaign. They had nothing really new to add to checking, but they sure had those fancy constraint solvers with functional coverage engines to sell. And slowly but surely, functional coverage took centre stage in everyone’s minds.

But it has gone too far. Functional coverage has become such a central pillar of verification that we’ve encountered teams who can tell us in gory detail what they have covered, but can’t tell us what they have checked. In one case, they hadn’t actually checked anything, although they did have 100% functional coverage (which turned out to be wrong anyway).

A quick look at the SystemVerilog LRM suggests that the checking requirement seems to have escaped the language designers as well. Sure, SVA is wonderful, but assertions only go so far towards checking a design (and not really that far when you think about it). What about support for all those higher level checks? Where are the language constructs for checking behaviour and reporting errors in the testbench part of SV? “if()” and “$display()”? Is that really it? That’s not what I was expecting from a language that has been designed for verification.
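To make the point concrete, here is a minimal sketch of the kind of higher-level check a testbench typically needs - an in-order scoreboard compare - built, as the LRM leaves you to do it, from little more than `if()` and the severity tasks. The `packet` class and its `compare()` method are assumptions for illustration, not anything the language provides:

```systemverilog
// Hypothetical scoreboard fragment. The testbench side of SV gives you
// little beyond if() and the severity tasks ($error, $fatal) to build this.
class packet_scoreboard;
  packet expected_q[$];  // queue of expected packets, filled by a reference model

  function void check_packet(packet actual);
    packet expected;
    if (expected_q.size() == 0) begin
      $error("Scoreboard: unexpected packet %p", actual);
      return;
    end
    expected = expected_q.pop_front();
    if (!actual.compare(expected))
      $error("Scoreboard: mismatch\n  expected: %p\n  actual:   %p",
             expected, actual);
  endfunction
endclass
```

Nothing clever - but it is exactly this sort of check, not a coverage point, that actually catches a broken design.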

The functional coverage mantra is so ingrained in the verification industry’s psyche that even non-tool vendors are preaching it. Let me quote from Janick Bergeron’s [1] "Writing Testbenches using SystemVerilog":

"Start with functional coverage …Thus, it is important to implement functional coverage models and collect functional coverage measurements right from the start".

He is not alone - I just happened to have his book to hand. Surely it should be something like “Start with checks. Who cares what the functional coverage is if you don’t have any checks? Who cares what the functional coverage says when your implementation metric is only sitting at 10% (e.g. only 10% of testbench code written)?”.

Experienced guys like Janick know that the checks have to be in place, but even the mention of checkers has faded so far into the background that some verification engineers don’t seem to know about them at all.

So what should you really do when writing your testbench?

  • Select your most important verification requirements. Pick the ones you absolutely have to get done
  • Write some stimuli for them
  • Write some checkers and check them
  • Once you get close to finishing the implementation that you have planned, put the functional coverage in
  • Repeat, but for your less important verification requirements
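As a deliberately simplified sketch of that ordering for, say, a FIFO: the check goes in during implementation, and the functional coverage follows once the checks are in place. All signal names here are assumptions:

```systemverilog
// First: write the checker. A write attempted while the FIFO is full
// (with no concurrent read) must leave it full, i.e. be ignored.
property p_write_ignored_when_full;
  @(posedge clk) disable iff (!rst_n)
    (fifo_full && wr_en && !rd_en) |=> fifo_full;
endproperty
a_write_ignored: assert property (p_write_ignored_when_full)
  else $error("FIFO accepted a write while full");

// Later: only once the check is in place, add the functional coverage
// that confirms the interesting case was actually exercised.
c_write_when_full: cover property (@(posedge clk) fifo_full && wr_en);
```

The assertion is worth something on day one; the cover property only tells you whether the assertion ever had a chance to fire.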

The point where you start concerning yourself with functional coverage is the point where you start going from the implementation phase (typing in the testbench code) to the closure phase (running tests and debugging). Now sure, I know they overlap quite a lot, but the point is that you get the checks in first because they are important. Functional coverage is a metric - passing checks are a necessity.

Look at it this way - if you had to run a testbench that had checks but no functional coverage, or a testbench that had functional coverage but no checks, which would be better?

Checks - no question about it.

So functional coverage might be the icing on the cake, but it will never be the cake. Checkers are the cake. You have to get the checks in first.


[1] Ok, he works for Synopsys, but his testbench books are neutral and generic. Buy yourself a copy - you won’t regret it

6 Responses to “Checks or Functional Coverage?”

  1. Sean W. Smith Says:

    Good point…

    I think that everyone understands checkers, so functional coverage has become the new hot topic. Clearly checkers are the backbone of verification, but coverage is almost as important, so we know whether we generated useful traffic to check. In terms of language support, even E and VERA lack a lot of useful infrastructure there, which is augmented by class libraries like eRM and RVM that provide these basic functions. I see many users going beyond these classes and adding other hooks to deal with test phases and my favorite challenge, OIR (online insertion or removal), aka hot plugging. OIR presents unique challenges when crafting checkers…


  2. Rahul V Shah Says:

    Hi David,
    This is a great article - very inspiring, and it makes a lot of sense. I also had some concerns about functional coverage, in terms of the way it is marketed. I am pasting a link to some thoughts I wrote on the same topic.



  3. Janick Bergeron Says:

    Interesting and important discussion - and thanks for the book plug!

    I think you are correct that there isn’t much talk about checks nowadays because there is little new in that area. Functional coverage makes for much better slideware :-)

    > if you had to run a testbench that had checks but no
    > functional coverage, or a testbench that had functional
    > coverage but no checks, which would be better?

    Actually, neither.

    Without some form of coverage, how do you know that you have actually exercised and confirmed that the checked functionality was correct? For example, you may have a check that a FIFO, when full, stops accepting write cycles, but if you never fill the FIFO, the check is useless.

    Note that a directed testcase is another form of functional coverage. If you code a test explicitly (e.g. fill the FIFO then do another write request), it is similar to (but less reliable than) implementing the equivalent functional coverage point (e.g. one that measures whether a write cycle was attempted on a full FIFO).
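    The coverage point described above might be sketched as follows (the signal names are assumptions, not from any particular design):

```systemverilog
// Did any simulation ever attempt a write cycle on a full FIFO?
// Unlike a directed test, this measures the condition wherever it occurs.
covergroup fifo_full_cg @(posedge clk);
  write_when_full: coverpoint {wr_en, fifo_full} {
    bins attempted = {2'b11};  // write attempted while full
  }
endgroup

fifo_full_cg cg = new();
```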

  4. sergio Says:

    Hi David

    thanks for putting this in clear words.
    And I totally agree with you: coverage is important, but checkers are by far more important.

    Better to write a check for a FIFO and hope that it is triggered, rather than having a functional coverage point on “FIFO full” and hoping that you also have a check in place to actually detect wrong behavior.
    At least in theory …
    In practice, we must admit that detecting a missing checker in a testbench code review is difficult but possible, whereas manual detection of missing stimuli is simply impossible in most cases.

    Now I want to plant 2 seeds for further discussions:
    1. A big part of a testbench, including checkers, is effectively (re)modeling the verified functionality (you need that to compute your expected behavior), and part is dealing with modeling the environment (e.g., constraints for stimulus generation).
    Functional coverage and code coverage help to detect issues with insufficient stimulus generation.
    What helps you in detecting issues with insufficient or wrong checkers ?

    2. We all know that code coverage is not enough and that functional coverage gives more relevant information on what functionality has been actually exercised.
    But look at this from the technology point of view. A tool for collecting code coverage needs to be quite smart, with powerful technology to instrument RTL code.
    On the other hand, when it comes to functional coverage, the smart part is really about understanding what the important coverage points are and then coding them up.
    What is the tool doing? It seems to me not much more than collecting data and providing a nice GUI to display greens and reds.


  5. Manmohan Singh Says:

    Hi David,

    Nice & insightful comments.
    I like it when you said
    “So functional coverage might be the icing on the cake, but it will never be the cake”.
    I completely agree with you that checkers are an integral part of verification, and of course with functional coverage added on, the life of the verification engineer has got slightly better :-)


  6. Hamilton Says:

    Hi David,

    Thanks for starting this interesting conversation!

    I’ve posted more thoughts on how coverage and checks can play off of and reuse one another at:


