
Archive for February, 2016

Verilab at DVCon 2016

Thursday, February 25th, 2016 by Paul Marriott

Come and join us at DVCon 2016 in San Jose, CA, from February 29th to March 3rd, 2016.

Verilab’s Vanessa Cooper is this year’s Panel Chair and this is what she has to say about the line-up for 2016:

We had a number of excellent panel submissions to consider this year, and selected two that I think are of particular importance and address issues our audience is concerned with right now. Both panels will be held on Wednesday, March 2.

The first panel, “Redefining ESL,” will be moderated by Brian Bailey. The panelists will attempt to define ESL verification, from tools to flows. As they discuss, “How or when can all the disparate pieces be brought together, or is that even necessary?” there will be plenty of angles to consider.

The second panel, “Emulation + Static Verification Will Replace Simulation,” will be moderated by Jim Hogan of Vista Ventures. The panelists will discuss where they see the verification paradigm of the future heading, and where that leaves RTL simulation. It promises to be a lively discussion!

By bringing together two distinct groups of experts, I think both panels will offer attendees engaging discussions and varied points of view. We look forward to seeing you at DVCon U.S.!

Mark Litterick will be presenting his paper, entitled “Full Flow Clock Domain Crossing - From Source to Si”, in the Design and Modeling Approaches session at 9 am on Tuesday, March 1st. This is the paper’s abstract:

Functional verification of clock domain crossing (CDC) signals is normally concluded on a register-transfer level (RTL) representation of the design. However, physical design implementation during the back-end pre-silicon stages of the flow, which turns the RTL into an optimized gate-level representation, can interfere with synchronizer operation or compromise the effectiveness of the synchronizers by eroding the mean time between failures (MTBF). This paper aims to enhance cross-discipline awareness by providing a comprehensive explanation of the problems that can arise in the physical implementation stages, including a detailed analysis of timing intent for common synchronizer circuits.
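As background, and as a first-order model that is standard in the CDC literature rather than quoted from the paper, the MTBF of a synchronizer flip-flop is commonly estimated as

\[ \mathrm{MTBF} = \frac{e^{\,t_r/\tau}}{T_W \cdot f_{clk} \cdot f_{data}} \]

where \(t_r\) is the resolution time available before the capturing clock edge, \(\tau\) and \(T_W\) are the flip-flop’s metastability time constant and window, and \(f_{clk}\) and \(f_{data}\) are the clock and data-toggle rates. Because the dependence on \(t_r\) is exponential, back-end optimizations that consume resolution-time slack can erode MTBF dramatically.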

Thoughts On Verification: Keeping Up With Specman (part 2 of 2)

Monday, February 22nd, 2016 by Alex Melikian

In Part 2, Alex and Thorsten continue discussing the latest developments with Specman and the ‘e’ language, along with practical use cases. They focus on coverage-driven distribution and how anonymous methods can be applied. Part 1 of the conversation can be viewed here.


Alex Melikian: Okay. Moving on to another topic, let’s talk about something introduced at a recent conference covering Specman: the notion of coverage-driven distribution. It has been in the works for some time now, and while it’s not 100% complete yet, the Specman features supporting coverage-driven distribution are becoming available piece by piece. Before we get into that, once again for the readers who are not familiar with it, can you explain the concepts behind coverage-driven distribution?


Thorsten Dworzak: Yes. The typical problem for coverage closure in the projects we usually work on as verification engineers is that you have this S curve of coverage completeness. You start slowly, and then you easily ramp up your coverage numbers, or your metrics, to a high value like 80 or 90 percent.

And then in the upper part of the S curve it slows down, because you have corner cases that are hard to reach, and so on. It takes a lot of time to fill the gaps in the coverage over the last mile. So the people behind Specman have done some thinking about this. Of course, one of the ideas that has been around in the community for some time is to look at your coverage results and feed them back into the constraint solver.
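To make that feedback idea concrete, here is a minimal ‘e’ sketch of doing it by hand; the struct, field, and weights are purely illustrative and not part of any Specman feature:

<'
// Purely illustrative: manually biasing generation toward
// under-covered corner values with a weighted soft select
// constraint, after inspecting coverage reports by hand.
struct bus_item_s {
    addr : uint(bits: 16);

    // Hand-picked weights; coverage-driven distribution aims to
    // automate exactly this tuning loop.
    keep soft addr == select {
        10 : 0x0000;   // corner case: lowest address
        10 : 0xFFFF;   // corner case: highest address
        80 : others;   // the rest of the address space
    };
};
'>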

But this is a different approach. Here you look at the actual coverage implementation and derive your constraints from it; in other words, you let your actual coverage implementation guide your constraints. To give an example, you have an AXI bus and an AXI transaction comprising an address, a strobe value, a direction, and so on. In your transaction-based coverage you have defined certain corner cases, like hitting address zero, address 0xFFFF, and so on. Whenever Specman is able to link this cover group to the actual transaction – which is not easy, and I’ll come to that later – it can guide the constraint solver to create a higher probability for these corner cases in the stimulus.
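As a rough sketch of the kind of transaction and corner-case coverage Thorsten describes, in ‘e’ and with invented names and buckets:

<'
// Illustrative AXI-like transaction with transaction-based coverage.
struct axi_trans_s {
    addr      : uint(bits: 16);
    direction : [READ, WRITE];
    strobe    : uint(bits: 4);

    event trans_done;  // the environment would emit this on completion

    // Coverage-driven distribution has to link buckets like these
    // back to the generated fields in order to bias the solver.
    cover trans_done is {
        item addr using ranges = {
            range([0x0000..0x0000], "addr_zero");   // corner case
            range([0xFFFF..0xFFFF], "addr_max");    // corner case
            range([0x0001..0xFFFE], "addr_other");
        };
        item direction;
    };
};
'>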

Thoughts On Verification: Keeping Up With Specman (part 1 of 2)

Thursday, February 4th, 2016 by Alex Melikian

In this edition of “Thoughts On Verification”, Verilab consultant Alex Melikian interviews fellow consultant Thorsten Dworzak about recently released features of Specman and the ‘e’ language. With nearly 15 years of verification experience, Thorsten has worked extensively with Specman and ‘e’, and he regularly participates in conferences covering related subjects and tools.

In Part 1, Alex goes over new features of Specman as Thorsten weighs in on which ones he finds the most practical in his experience. In addition, they discuss in detail the language and tool’s support for the “Test Driven Development” methodology.

Alex Melikian: Hi everyone, once again, Alex Melikian here, back for another edition of Thoughts on Verification. We’ve covered many topics on this blog, but have yet to do one focusing on Specman and the ‘e’ language. To do so, I’m very pleased to have a long-time Verilab colleague of mine, Thorsten Dworzak, with me. Like me, Thorsten has been in the verification business for some time now, and he is one of the most experienced users of Specman I personally know. Actually, Thorsten, I should let you introduce yourself to our readers. Talk about how you got into verification, what your background is, and how long you’ve been working with Specman and ‘e’.


Thorsten Dworzak: Yes, so first of all, Alex, thank you for this opportunity. I’m a big fan of your series, so let’s dive right into it. I’ve been doing design and verification of ASICs since 1997, in the industrial, embedded, consumer, and automotive sectors - so almost all there is.

And I’ve always done both design and verification, say 50 percent of each, and I started using Specman around 2000. That was even before they had a reuse methodology; they didn’t even have things like sequences, drivers, and monitors. Later on I was still active in both domains, but I saw that the design domain was getting less exciting: basically plugging IPs together, with somebody writing a bit of glue logic and the bulk of it being generated by in-house or commercial tools.

So I decided to move to verification full time and then I had the great opportunity to join Verilab in 2010.


AM: Of course, your scope of knowledge in verification extends to areas outside of Specman. But since you’ve been working with it since the year 2000, I’m happy to have the chance to cover subjects focusing on it with you. That year is significant for me, as I started working with Specman around that time, and I feel that was the era when it and other constrained-random, coverage-driven verification tools really took off.

It’s been a couple of years since I last worked with Specman. However, you’ve been following it very closely. What are some of the recent developments in Specman that you think users of the tool and the ‘e’ language should be paying attention to?
