
Thoughts On Verification: Keeping Up With Specman (part 2 of 2)

In Part 2, Alex and Thorsten continue discussing the latest developments with Specman and the ‘e’ language, along with practical use cases. They focus on coverage driven distribution and how anonymous methods can be applied. Part 1 of the conversation can be viewed here.

Alex Melikian: Okay. Moving on to another topic, let’s talk about something introduced at a recent conference covering Specman: the notion of coverage driven distribution. This has been in the works for some time now. It’s not 100% complete yet, but the Specman features supporting coverage driven distribution are becoming available piece by piece. Before we get into that, once again for the readers who are not familiar with it, can you explain the concepts behind coverage driven distribution?

Thorsten Dworzak: Yes. The typical problem with coverage closure in the projects we usually work on as verification engineers is that you have this S curve of coverage completeness. You start slowly and then easily ramp up your coverage numbers, or your metrics, to a high value like 80 or 90 percent.

And then in the upper part of the S curve it slows down, because you have corner cases that are hard to reach, and so on. It takes a lot of time to fill the remaining coverage gaps over the last mile. So the people behind Specman have done some thinking about this. One of the ideas that has been around in the community for some time is to look at your coverage results and feed them back into the constraint solver.

But this is a different approach. Here you look at the actual coverage implementation and derive your constraints from it, or rather let your actual coverage implementation guide your constraints. To give an example, you have an AXI bus and an AXI transaction comprising an address, a strobe value, a direction, and so on. In your transaction-based coverage you have defined certain corner cases, like hitting address zero, address 0xFFFF and so on. Whenever Specman is able to link this coverage group to the actual transaction – which is not easy, and I’ll come to that later – it can guide the constraint solver to create a higher probability for these corner cases in the stimulus.
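To make the AXI example concrete, here is a minimal e sketch of transaction coverage that singles out such corner-case addresses. The struct, field and event names are hypothetical, chosen just for illustration:

```e
<'
struct axi_tx_s {
   addr : uint(bits: 16);
   dir  : [READ, WRITE];

   event tx_done;

   -- Transaction-based coverage with explicit corner-case buckets;
   -- coverage driven distribution would aim to raise the probability
   -- of generated transactions landing in the named buckets.
   cover tx_done is {
      item addr using ranges = {
         range([0x0000..0x0000], "addr_zero");
         range([0xFFFF..0xFFFF], "addr_max");
         range([0x0001..0xFFFE], "other_addr")
      };
      item dir;
   };
};
'>
```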

AM: It may be important to clarify that the language construct for this coverage distribution guidance is different from the language constructs for randomization constraints. Is that an accurate statement?

TD: Yes, yes. In the past, the two aspects, coverage and stimulus generation, were completely separated. This is also the case in SystemVerilog and, as far as I know, Vera. But this is an attempt to bring the two together.

And of course you can imagine that, from a tool point of view, or just from a conceptual point of view, it’s very hard if you have, let’s say, a monitor with a transaction while your stimulus generation is in a totally different part of the testbench. Then the tool cannot make a link between the coverage of the transaction and its generation. The other thing, of course, is that this concept has to somehow work backwards. The earlier example of an AXI address is quite easy: if you want to see interesting addresses, you just generate them on the input side. But what if you have a CRC code and you want to cover some interesting CRC values?

Of course you cannot work back to the input in a straightforward manner. It’s more or less impossible, because the output of the CRC is the result of an encoding engine. So you cannot feed back from the coverage of the outputs to the constraints on the inputs; in this case it’s impossible.

So Specman is working on issues like this – incrementally improving all this as well as releasing these features gradually so that the user base can try them out and comment. Which I think is kind of a good approach, even if the CDD is not yet perfected.

AM: So it is intentional for these features to be released step by step. I agree this makes sense. When something changes a core aspect of a methodology we’re all accustomed to, it’s best to bring it in carefully to ensure better adoption by the community.

TD: Indeed.

AM: CDD does sound like something quite significant in terms of how someone would try to get coverage closure. As you said, for verification engineers it’s all about getting coverage closure as quickly, efficiently and accurately as possible. Hence, I always try to think strategically about how we can use these tools in a clever manner to achieve our goals. CDD sounds like something you could definitely use strategically, in a way slightly different from how we’re used to collecting and reaching our coverage goals. Are there some strategies you’ve thought of for using CDD?

TD: Yes. You always have these different domains of your DUT’s features; let’s say you have an interface with a certain protocol. There you often have a direct link between stimulus generation and the output – I mentioned the address example earlier. For this, I think it should be a huge help in getting coverage closure faster. That is one area. A domain where it’s less useful would be a big data path, where you want to cover the output but it’s hard to feed back to the stimuli.

AM: Ah, so for the moment, CDD is best suited to quickly getting coverage closure on DUT features with a well-defined “input to output” functional relationship?

TD: Yes. For example, in system-level verification you often have the problem that you cannot cover all protocol scenarios – often because long run times discourage you from generating all kinds of stimuli. So I think it would really speed up the process if you could use CDD in this area.

AM: Interesting and valuable insight. OK, moving on to another subject: something you’ve contributed recently in a blog posting, as well as presented. You’ve come up with a method to implement anonymous methods in Specman. What was your motivation behind this, or what drove you to do this with Specman? I should mention to our readers that anonymous methods are something that already exists in languages like Ruby, Python and even, I believe, C++. You’ve found a clever way to do this in Specman.

TD: Yes. To be honest, Alex, I had some time on my hands and was thinking about a clever way to apply the language extensibility features that e has, so that I could do something like what Perl can do with anonymous methods. And the single example – where all this came out of – was actually a command line calculator. Perl is an interpreted language, so you can just pass an arithmetic expression to the “eval()” function and get the result.

So that’s basically code execution and code creation on the fly. The code exists in a string, and you can construct the string during run time. Specman has something similar. It’s a command – funnily enough, it’s called “specman” too – that can evaluate some e code on the fly. So you can call specman() with the string argument “3+5” and it prints the result. Because, as you might know, Specman has aspects of both compiled and interpreted languages. The idea was to use this ability to execute code to implement anonymous methods, and to make it generally available for the verification engineer.
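A minimal sketch of this on-the-fly evaluation, using the predefined specman() routine, which executes a command given as a string (the surrounding method is hypothetical):

```e
<'
extend sys {
   run() is also {
      var expr : string = "3+5";
      -- Build the command string at run time and hand it to the
      -- interpreter; "print" evaluates the expression and echoes it.
      specman(append("print ", expr));
   };
};
'>
```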

One of the major obstacles to this is that the specman() command itself doesn’t have a scope; it doesn’t inherit the scope of the struct or class you call it from. So you cannot pass anything in or out of it, which doesn’t make it very useful for anonymous methods that you want to pass parameters into and get results back from. And that was really the intellectual motivation behind it.
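The scope limitation can be seen in a small hypothetical sketch:

```e
<'
extend sys {
   demo() is {
      var x : int = 5;
      specman("print 3+5");   -- fine: the expression is self-contained
      -- specman("print x");  -- would fail: the local variable x is
      --                         not visible inside the specman() scope
   };
};
'>
```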

AM: So out of this motivation, you’ve created a library for the user community. We invite our readers to go deeper into the details of this library, but conceptually, can you give us a summary of what the library constructs are and how they can be used to implement anonymous methods?

TD: Yes, anonymous methods are essentially about handling anonymous code. This library provides a struct that encapsulates such a piece of code. An instance of it is called a “Proc” object, named after the similar class in the Ruby language. It contains, of course, the anonymous code itself and some methods to access it. For convenience, this interface has been wrapped in e macros. The e macros are part of the language extensibility features, and they allow you to create a nice syntax for working with these “Proc” objects.

So the two main macros are, of course, one to create a “Proc” object, which means that during the runtime of your code you can create such an object containing some arbitrary code. You can either do this from a string, as I mentioned in the earlier example, or embed it in your source code. The second macro is one that allows you to call your anonymous method; it is responsible for passing the parameters in and out of the “Proc” object.
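The core idea can be approximated in plain e without the macro layer. This is a conceptual sketch only – the real vlab_util macros and “Proc” interface may look quite different:

```e
<'
-- Conceptual only: a struct that holds a piece of code as a string
-- and evaluates it on demand via specman().
struct my_proc_s {
   code : string;  -- the anonymous body, held as an e code string

   call() is {
      specman(code);  -- evaluate the stored code on the fly
   };
};

extend sys {
   run() is also {
      var p : my_proc_s = new;
      p.code = "print 40+2";
      p.call();
   };
};
'>
```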

AM: At this point, readers and potential users are wondering how they can include this library in their test bench environment, and where can they get it from?

TD: Yes, well, Verilab has a public Bitbucket repository.

There you find the vlab_util library. It also contains other functionality that has been presented at earlier conferences, so it holds a lot of useful macros that extend the language with features we were missing. It’s also worth looking at for the syntax and the code examples themselves.

AM: Great, I’m sure all of that will be appreciated.

I’m trying to think what would be a typical scenario where someone would use these anonymous methods. One scenario at the top of my head would be a stream of data that has to be encoded with some sort of encryption method.

However, I would not know the encryption method ahead of time; it could even be chosen on the fly. So with anonymous methods I would be able to define and encapsulate encryption methods and, with the use of anonymous functions, ‘bind’ the right encryption method at run time to the objects holding the anonymous method placeholders. Does that sound like a good application?

TD: Yeah, that’s a good example, because you have these “Proc” objects which encapsulate the code, so you can do everything with them. You can construct them, you can pass them around, and you can attach them to data. So this way – in your example – you could really pass an encryption method along, or later create new methods and select them based on some other property.

And this would all be feasible during run time.
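A hypothetical sketch of the binding idea – attaching a code-holding struct to a data item so the behavior can be selected at run time (the names are illustrative, not the actual vlab_util API):

```e
<'
struct cipher_proc_s {
   code : string;  -- e code implementing one encryption flavor

   apply() is {
      specman(code);  -- run the currently bound encryption code
   };
};

struct packet_s {
   data   : list of byte;
   cipher : cipher_proc_s;  -- bound at run time, travels with the data
};
'>
```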

AM: Could you pass them to, let’s say, transaction items?

TD: Yeah. The “Proc” objects are normal structs or classes. So you can just pass them around as pointers or copy them.

AM: Cool. Great ideas for our readers on tools they can use to develop their future test benches. All right! I think that’s all the time that we have for this edition. Thorsten, thanks a lot for your time and letting us pick the Specman portion of your brain. I hope to see you back on another edition of Thoughts on Verification.

TD: Yeah, thank you Alex for giving me this opportunity! Hope we do this again soon.

