Thoughts On Verification: Keeping Up With Specman (part 1 of 2)
In this edition of “Thoughts On Verification”, Verilab consultant Alex Melikian interviews fellow consultant Thorsten Dworzak about recently released features of Specman and the ‘e’ language. With nearly 15 years of verification experience, Thorsten has worked extensively with Specman and ‘e’, and regularly participates in conferences covering related subjects and tools.
In Part 1, Alex goes over new features from Specman as Thorsten weighs in on which ones he finds the most practical in his experience. In addition, they discuss in detail the language and tool’s support for employing the “Test Driven Development” methodology.
Alex Melikian: Hi everyone, once again, Alex Melikian here back for another edition of Thoughts on Verification. We’ve covered many topics on these blogs but have yet to do one focusing on Specman and the ‘e’ language. To do so, I’m very pleased to have a long time Verilab colleague of mine, Thorsten Dworzak with me. Like me, Thorsten has been in the verification business for some time now, and is one of the most experienced users of Specman I personally know. Actually, Thorsten, I should let you introduce yourself to our readers. Talk about how you got into verification, what your background is and how long you’ve been working with Specman and ‘e’.
Thorsten Dworzak: Yes, so first of all Alex, thank you for this opportunity. I’m a big fan of your series and okay, let’s dive right into it. I’ve been doing design and verification of ASICs since 1997 in the industrial, embedded, consumer, and automotive sectors - so almost all there is.
And I’ve always been doing, like, both: design and verification, say 50 percent of each, and started using Specman around 2000. That was even before they had a reuse methodology; they didn’t even have things like sequences, drivers, and monitors. Later on I was still active in both domains, but then I saw that the design domain was getting less exciting: basically plugging IPs together, somebody writing a bit of glue logic, and the bulk of it being generated by in-house or commercial tools.
So I decided to move to verification full time and then I had the great opportunity to join Verilab in 2010.
AM: Of course your scope of knowledge in verification extends to areas outside of Specman. But since you’ve been working with it since the year 2000, I’m happy to have a chance to cover subjects focusing on it with you. That year is a particular one for me, as I started working with Specman around that time, and I’ve felt that was the era when it and other constrained-random, coverage-driven verification tools really took off.
It’s been a couple of years since I’ve last worked with Specman. However, you’ve been following it very closely. What are some of the recent developments in Specman that you think users of this tool and the ‘e’ language should be paying attention to?
TD: Right, so of course the main features that came on very early were the constrained-random simulation approach, together with functional coverage, and also the ‘eRM’ reuse methodology.
So over the last ten or fifteen years, a lot of improvements have been added and a lot of them were motivated by user input, of course. The people behind Specman organize user meetings, like all the other vendors do as well, and there they collect feedback from the user base about what features should be implemented next or what should be improved and so on.
AM: So what are the features that you’ve seen in the last year that you think have a lot of potential or are good features for people to adopt if they haven’t yet?
TD: A good example is the coverage API. It hadn’t been changed for years, and was very clumsy and inflexible compared to the rest of the language. So a lot of people began to generate coverage files up front, and this was a source of complaints for a long time. About two years ago they made a major improvement in this area, especially given that SystemVerilog had an even better coverage API by that time.
So Specman had to catch up, and some major improvements were made - I really encourage people to use them. It’s a lot more flexible, particularly the ‘per unit instance’ coverage. So to construct your buckets, or bins as they say, you can directly reference fields of the unit that the coverage group is embedded in. That was a major step in my opinion.
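To give readers a feel for the idea, here is a rough sketch in ‘e’ of a coverage group embedded in a unit whose buckets refer to a field of that same unit. The unit, field, and event names are invented for illustration, and the exact per-unit-instance coverage syntax may differ between Specman versions - treat this as a sketch of the concept, not a copy-paste recipe:

```e
<'
unit port_monitor_u {
    // Instance-specific configuration: each instance of this unit
    // can have a different maximum packet length
    max_len : uint;

    pkt_len : uint;
    event pkt_done;

    // Hypothetical per-unit-instance coverage group: the bucket
    // ranges reference max_len, a field of the enclosing unit
    cover pkt_done using per_unit_instance is {
        item pkt_len using ranges = {
            range([0..max_len / 2], "short");
            range([max_len / 2 + 1..max_len], "long");
        };
    };
};
'>
```

The point being made in the interview is that the ranges above depend on `max_len`, so two instances of `port_monitor_u` configured differently get differently shaped buckets - something that was awkward to express before this API improvement.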
And the other area of major improvement was the messaging infrastructure. In the early years, all the messaging was handled by a logger unit that was instantiated somewhere in the instance hierarchy and it was always a bit hard to control. Now, Specman has introduced a message manager which lives on the top of the environment and that can be controlled directly, or by means of the instance hierarchy. It’s way more intuitive to use.
Along with this, they introduced structured debug messages so instead of just printing your message to the screen or to a file, you can now extend all your units and your data types with structured messages like “message begin” and “message end”. This can then be recorded in a database and later be accessed by other tools for analysis.
AM: Hmmm, all these messaging features sound handy, especially when making a test bench or DUT more ‘debug-able’. I’m sure I’m not alone when I say that I’ve had to jump through hoops to get the messaging to be more flexible, understandable and better structured in a test bench. Can you think of one more?
TD: Yes, the constraints solver. So in the beginning the solver was called PGen, and over the years there were only minor improvements. What Specman decided to do was to redesign the solver from scratch.
And so now they have this new constraints solver, IntelliGen, which is a lot better than the old one. More powerful, better performance, better features and so on. It’s already the default now so people don’t have to worry about switching to it.
AM: Interesting. The last time I worked with it a few years ago, PGen was still in use. Good to see they’ve set the new solver as the default.
It’s equally good to see that they’re taking feedback from the users and implementing it, slowly but surely, back into the tool. That’s always a benefit for the users.
TD: Yes, of course. I remember at one of these user conferences, they took a vote on a certain feature, where the question was “should we go for more backward compatibility or more versatility?” It was kind of nice that they asked for opinions.
AM: A democratic approach with the user base. Refreshing.
TD: Yes, right.
AM: So something else that was released about a year ago was the concept of unit testing, or what some may refer to as “TDD” or “Test Driven Development”. Now, before we go further, for the sake of some of our readers, I should define what TDD is. If I could sum it up, it’s a development methodology from the software engineering world that isolates small, testable pieces of software within an application. These pieces are plugged into blocks of testing code, which determine whether the isolated pieces of application code behave correctly. This way, a development team develops the application and testing code in parallel. It would not be unusual for a team to develop the testing code before the actual application code, even though initial runs of the application would produce many test failures. However, as the team progresses by adding code to implement application features, those failures are progressively resolved. The goal of course is to complete the application code and run it without any errors being triggered from any of the test code blocks.
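The idea described above can be sketched in plain ‘e’, independently of any particular framework. The `frame` struct and its `checksum()` method below are invented for illustration; the test code calls the method directly and flags a failure, which is the essence of a unit test - it would fail until `checksum()` is implemented correctly:

```e
<'
// The piece of "application" code under test: a timeless method
// computing an XOR checksum over a list of bytes
struct frame {
    payload : list of byte;

    checksum() : byte is {
        for each (b) in payload {
            result = result ^ b;  // result is implicitly initialized to 0
        };
    };
};

extend sys {
    // A hand-written unit test: in a TDD flow this is written before
    // (or alongside) the implementation, and fails until it is correct
    run() is also {
        var f : frame = new;
        f.payload = {0x12; 0x34; 0x26};  // 0x12 ^ 0x34 ^ 0x26 == 0x00
        if f.checksum() != 0x00 then {
            error("checksum unit test failed");
        };
    };
};
'>
```

A real framework such as eUnit adds the machinery around this - test discovery, harness generation, and reporting - but the check itself is no more than what is shown here.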
The concept may sound simple enough, but there’s actually a lot of support needed to make the whole process efficient. It looks like Specman has added some support for a team wishing to employ this methodology, specifically with unit testing. What can you tell our readers about this?
TD: Yes. First of all, let me clarify something about this general topic. What you described is unit testing, whereas “Test Driven Development” is the methodology above it. Unit testing is really taking - as you said - parts of your test environment and applying a test to them, whereas Test Driven Development is the philosophy of developing application features along with feature-testing code.
I guess this is something that hasn’t really been put into practice in the verification domain. When talking to verification engineers, you have to be careful that they understand these nuances, because their daily work has to do with testing, and sometimes they wonder what you are talking about if you mention unit testing. And they say: ooh, don’t we do this already? They may believe they are already doing it, but it’s not the case. TDD is really testing of the actual code that comprises your test environment.
AM: Right, a lot of the nomenclature and terminology is the same, so it’s easy for someone in verification to misunderstand TDD and what role it would play with test bench code. So tell me about Specman and the unit testing. What does it provide?
TD: Yes, so Specman has released a package called eUnit, which provides facilities for unit testing. I believe they derived this from the so-called xUnit class of unit testing frameworks. Early on, somebody started with a framework for unit testing in Smalltalk. Soon after, a lot of other languages adopted the same concept and named it similarly, with the language name plus “Unit”. That’s why it’s called PyUnit or CUnit and so on in other languages, and I guess that’s why the Specman package is called eUnit and uses a lot of the same terminology.
So what’s it about? As I mentioned, it provides facilities for unit testing which, for example, can parse a unit and automatically build some kind of harness around it. This allows you to drive all the inputs of the unit’s interfaces, or of the unit’s methods, and extract the outputs of the unit. In addition, there’s a framework that allows you to write tests; templates are generated by the package, and there’s also a facility to run these tests - you can run a single test, or all the tests, and so on.
So there are some limitations to this eUnit package; namely, it cannot handle timing. This sounds like a big limitation, because a lot of what we do in a test environment has to do with synchronizing to protocols, assembling frames and packets, and so on. But if you structure your environment correctly, you try to put most of the complex stuff into timeless methods, and those you can then test with the eUnit package.
AM: By timeless methods, do you mean functions executing in zero time like at the transaction boundary?
TD: Yes, but it can be anything really. It’s just something that doesn’t trigger on a clock, let’s say, and doesn’t consume time otherwise.
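The structuring Thorsten describes can be sketched in ‘e’ roughly as follows. The unit, method, and event names are invented for illustration, and the “encoding” is deliberately trivial - the point is only the split between a timeless method (all the testable logic, no simulation time) and a time-consuming method, or TCM, which does nothing but the clock synchronization:

```e
<'
unit packet_driver_u {
    clk_rise : event;  // hypothetical sampling event for the TCM

    // Timeless method: holds the complex, testable logic. It consumes
    // no simulation time, so it can be exercised by a unit test
    // without a simulator
    encode(data : list of byte) : list of bit is {
        for each (b) in data {
            result.add(b % 2);  // illustrative transformation only
        };
    };

    // Time-consuming method (TCM): reduced to pure protocol timing,
    // delegating all the interesting work to encode()
    drive(data : list of byte) @clk_rise is {
        var bits : list of bit = encode(data);
        for each (b) in bits {
            // drive b onto the interface here, one bit per clock
            wait cycle;
        };
    };
};
'>
```

With this split, a regression over `encode()` needs nothing but the ‘e’ code itself, while `drive()` stays thin enough that there is little left in it to break.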
A good example is a client of ours who had a very complex scoreboard that supported all kinds of protocols on either end. It had been reused a lot over the years and was getting better and better. Then the problem arose that if you added a feature, you never knew whether it would break the existing functionality or not. So you have to have the means to regress every change you make to this scoreboard, and unit testing is ideal in this case. You won’t need a simulator license, and you won’t need to go through the trouble of reviving an old project - one you may not even have access to anymore - just to test existing functionality. So this is an ideal use case for unit testing, or any form of TDD.
(end of part 1)