
Verilab at DVCon 2016

February 25, 2016 by Paul Marriott

Come and join us at DVCon 2016 in San Jose, CA, from February 29th to March 3rd, 2016.

Verilab’s Vanessa Cooper is this year’s Panel Chair and this is what she has to say about the line-up for 2016:

We had a number of excellent panel submissions to consider this year, and selected two that I think are of particular importance and address issues our audience is concerned with right now. Both panels will be held on Wednesday, March 2.

The first panel, “Redefining ESL,” will be moderated by Brian Bailey. The panelists will attempt to define ESL verification, from tools to flows. As they discuss, “How or when can all the disparate pieces be brought together, or is that even necessary?”, there will be plenty of angles to consider.

The second panel, “Emulation + Static Verification Will Replace Simulation,” will be moderated by Jim Hogan of Vista Ventures. The panelists will discuss what they see as the verification paradigm of the future and where it leaves RTL simulation. It promises to be a lively discussion!

Bringing together two distinct groups of experts, I think attendees will be pleased by the different discussions and varying points of view offered by both panels. We look forward to seeing you at DVCon U.S.!

Mark Litterick will be presenting his paper, entitled “Full Flow Clock Domain Crossing - From Source to Si”, in the Design and Modeling Approaches session at 9 am on Tuesday, March 1st. This is the paper’s abstract:

Functional verification of clock domain crossing (CDC) signals is normally concluded on a register-transfer level (RTL) representation of the design. However, physical design implementation during the back-end pre-silicon stages of the flow, which turns the RTL into an optimized gate-level representation, can interfere with synchronizer operation or compromise the effectiveness of the synchronizers by eroding the mean time between failures (MTBF). This paper aims to enhance cross-discipline awareness by providing a comprehensive explanation of the problems that can arise in the physical implementation stages including a detailed analysis of timing intent for common synchronizer circuits.
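
[Editor’s note: for readers less familiar with the metastability arithmetic behind that last point, the MTBF of a single synchronizer stage is commonly estimated as

MTBF = e^(t_MET / tau) / (T_W * f_clk * f_data)

where t_MET is the slack available for metastability resolution, tau is the flip-flop’s resolution time constant, T_W is its metastability window, and f_clk and f_data are the clock and data-toggle rates. Because t_MET sits in the exponent, back-end changes that quietly consume that slack – extra buffering or routing delay between synchronizer flops, for example – can erode MTBF by orders of magnitude.]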

Thoughts On Verification: Keeping Up With Specman (part 2 of 2)

February 22, 2016 by Alex Melikian

In Part 2, Alex and Thorsten continue discussing the latest developments with Specman and the ‘e’ language, along with practical use cases. They focus on coverage driven distribution and how anonymous methods can be applied. Part 1 of the conversation can be viewed here.


Alex Melikian: Okay. Moving on to another topic, let’s talk about something introduced at a recent conference covering Specman: the notion of coverage driven distribution. This has been in the works for some time now. It’s not 100% complete yet, but it looks like the Specman features supporting coverage driven distribution are becoming available piece by piece. Before we get into that, once again for the readers who are not familiar with it, can you explain the concepts behind coverage driven distribution?


Thorsten Dworzak: Yes. So the typical problem for coverage closure, in the projects we usually work on as verification engineers, is that you have this S-curve of coverage completeness. You start slowly and then you easily ramp up your coverage numbers, or your metrics, to a high value like 80 or 90 percent.

And then in the upper part of the S-curve it slows down, because you have corner cases that are hard to reach, and so on. It takes a lot of time to fill the gaps in the coverage over the last mile. So the people behind Specman have done some thinking about it. And of course one of the ideas that has been around in the community for some time is that you look at your coverage results and feed them back into the constraint solver.

But this is a different approach. Here you look at the actual coverage implementation and derive your constraints from it – you let your actual coverage implementation guide your constraints. To give an example, you have an AXI bus and an AXI transaction comprising an address, a strobe value, a direction, and so on. And in your transaction-based coverage you have defined certain corner cases, like hitting address zero, address 0xFFFF, and so on. Whenever Specman is able to link this coverage group to the actual transaction – which is not easy, and I’ll come to that later – it can guide the constraint solver to create a higher probability for these corner cases in the stimulus.
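
[Editor’s note: to make Thorsten’s example concrete, here is a minimal e sketch of such a transaction with corner-case coverage, plus the hand-written weighted constraint that coverage driven distribution aims to derive automatically. The type names, field widths, and weights are invented for illustration:]

<'
type axi_direction_t: [READ, WRITE];

struct axi_trans {
    addr      : uint(bits: 16);
    strobe    : uint(bits: 4);
    direction : axi_direction_t;

    event done;

    -- corner cases captured in the coverage model
    cover done is {
        item addr using ranges = {
            range([0x0000], "addr_zero");
            range([0xFFFF], "addr_max");
            range([0x0001..0xFFFE], "addr_mid");
        };
        item direction;
        cross addr, direction;
    };

    -- today the same bias must be coded by hand as a weighted soft
    -- constraint; coverage driven distribution would infer it from
    -- the cover group above
    keep soft addr == select {
        10: 0x0000;
        10: 0xFFFF;
        80: others;
    };
};
'>
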
Read the rest of this entry »

Thoughts On Verification: Keeping Up With Specman (part 1 of 2)

February 4, 2016 by Alex Melikian

In this edition of “Thoughts On Verification”, Verilab consultant Alex Melikian interviews fellow consultant Thorsten Dworzak about recently released features of Specman and the ‘e’ language. With nearly 15 years of verification experience, Thorsten has worked extensively with Specman and ‘e’, and regularly participates in conferences covering related subjects and tools.

In Part 1, Alex goes over new features of Specman as Thorsten weighs in on which ones he finds most practical in his experience. In addition, they discuss in detail the language and tool’s support for employing the “Test Driven Development” methodology.

Alex Melikian: Hi everyone, once again, Alex Melikian here back for another edition of Thoughts on Verification. We’ve covered many topics on these blogs but have yet to do one focusing on Specman and the ‘e’ language. To do so, I’m very pleased to have a long-time Verilab colleague of mine, Thorsten Dworzak, with me. Like me, Thorsten has been in the verification business for some time now, and is one of the most experienced users of Specman I personally know. Actually, Thorsten, I should let you introduce yourself to our readers. Talk about how you got into verification, what your background is, and how long you’ve been working with Specman and ‘e’.


Thorsten Dworzak: Yes, so first of all Alex, thank you for this opportunity. I’m a big fan of your series, so let’s dive right into it. I’ve been doing design and verification of ASICs since 1997, in the industrial, embedded, consumer, and automotive industries – so almost all there is.

And I’ve always done both, design and verification – say, 50 percent of each – and started using Specman around 2000. That was even before they had a reuse methodology; they didn’t even have things like sequences, drivers, and monitors. Later on I was still active in both domains, but then I saw that the design domain was getting less exciting: basically plugging IPs together, with somebody writing a bit of glue logic and the bulk of it being generated by in-house or commercial tools.

So I decided to move to verification full time and then I had the great opportunity to join Verilab in 2010.


AM: Of course your scope of knowledge in verification extends to areas outside of Specman. But since you’ve been working with it since the year 2000, I’m happy to have a chance to cover subjects focusing on it with you. That year is memorable for me, as I started working with Specman around that time, and I feel that was the era when it and other constrained-random, coverage-driven verification tools really took off.

It’s been a couple of years since I last worked with Specman. However, you’ve been following it very closely. What are some of the recent developments in Specman that you think users of this tool and the ‘e’ language should be paying attention to?
Read the rest of this entry »

Verilab at MTV 2015

December 3, 2015 by Paul Marriott

Verilab will be at the Microprocessor Test and Verification conference, being held in Austin, TX, on December 3rd and 4th, 2015.

On Thursday December 3rd, Kevin Johnston will be presenting a paper co-authored with Jonathan Bromley, entitled “Is Your Testing N-Wise or Unwise? Pairwise and N-wise Patterns in SystemVerilog for Efficient Test Configuration and Stimulus”, in the Session A: Test Generation Techniques slot.

On Friday December 4th, Jeff Montesano will be presenting a paper co-authored with Mark Litterick, entitled “Mastering Reactive Slaves in UVM”, in the Session G: Methodology Innovations slot.

Papers and presentations for both can be downloaded from the Papers and Presentations section of our website.

DVCon-EU 2015 Wrap Up

November 16, 2015 by Paul Marriott

Congratulations to Jonathan Bromley and Kevin Johnston for winning the “Best Paper” award for their presentation entitled “Is Your Testing N-wise or Unwise? Pairwise and N-wise Patterns in SystemVerilog for Efficient Test Configuration and Stimulus”.

The full paper is available for download: N-wise paper (PDF)
The presentation is also available for download (complete with speaker notes): N-wise presentation (PDF)

Mark Litterick, Jason Sprott and Jonathan Bromley gave a tutorial entitled “Advanced UVM Tutorial - Taking Reuse To The Next Level”. More details of this and other tutorials and workshops, with a full portfolio description, are available on our “Training and Workshops” page. Contact info@verilab.com for more information.

Verilab at DVCon-Europe 2015

November 10, 2015 by Paul Marriott

Mark Litterick, Jason Sprott and Jonathan Bromley will be presenting their “Advanced UVM Tutorial - Taking Reuse To The Next Level” in two sessions on Day 1 (Wednesday 11th Nov).

Full details of the tutorial are in this abstract: verilab_dvcon_eu2015_abstract

Jonathan Bromley will be presenting a paper he co-authored with Kevin Johnston on Day 2 (Thursday 12th Nov) in Session TA1: Advanced Verification & Validation – 1 (Forum 1):

TA1.1: Is Your Testing N-wise or Unwise? Pairwise and N-wise Patterns in SystemVerilog for Efficient Test Configuration and Stimulus

Abstract:
Pairwise, and more generally N-wise, pattern generation has long been known as an efficient and effective way to construct test stimulus and configurations for software testing. It is also highly applicable to digital design verification, where it can dramatically reduce the number and length of tests that need to be run in order to exercise a design under test adequately. Unfortunately, readily available tools for N-wise pattern generation do not fit conveniently into a standard hardware verification flow. This paper reviews the background to N-wise testing, and presents a new open-source SystemVerilog package that leverages the language’s constrained randomization features to offer flexible and convenient N-wise generation in a pure SystemVerilog environment.
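
[Editor’s note: to give a feel for the reduction involved, consider a hypothetical configuration with ten parameters of four values each – an example invented here, not taken from the paper. Exhaustive testing requires 4^10 = 1,048,576 combinations, while every pairwise combination of values can typically be covered by only a few dozen well-chosen configurations.]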

A freely downloadable SystemVerilog code package, together with the paper and presentation describing it, will be available after the conference is over.

Verilab at SNUG Canada 2015

September 30, 2015 by Paul Marriott

Verilab will be participating in SNUG Canada on Thursday 1st October.

Bryan Morris will re-present his paper entitled “RESSL UVM Sequences to the Mat”, co-authored with Jeff McNeal and winner of the 2014 SNUG SV Technical Committee’s Best Paper award, in the A1 User Session - Testbench Techniques with UVM.

Alex Melikian will present his paper, co-authored with Hilmar Van Der Kooij and entitled “Replacing Hardcoded Register Values with Hardcore Abstraction” in the same session.

Bryan and Alex will be available to discuss their papers at the SNUG Pub following the end of the afternoon’s technical sessions.

[Updated: Alex and Hilmar’s paper and slides can be found on our resources page.]

Verilab at SNUG Austin 2015

September 17, 2015 by Paul Marriott

Verilab will be at the Designer Community Expo where we will be raffling an Amazon Echo.

On Friday September 18th, Kevin Johnston will be presenting a paper co-authored with Jonathan Bromley, entitled “Is Your Testing N-Wise or Unwise? Pairwise and N-wise Patterns in SystemVerilog for Efficient Test Configuration and Stimulus”, in the FA3 Verification - Improving Test Generation session.

During the FB4 User & Tutorial Session - UVM Agents, Verdi Debug, Jeff Montesano will be presenting a paper co-authored with Mark Litterick, entitled “Mastering Reactive Slaves in UVM”, which won a Technical Committee Award.

Papers and presentations for both can be downloaded from the Papers and Presentations section of our website.

Thoughts on Verification: Doing Our Work in Regulated Industries

August 18, 2015 by Alex Melikian

In this edition of “Thoughts on Verification”, Verilab consultant Jeff Montesano interviews fellow consultant Jeff Vance on verification in regulated industries. Jeff Vance has extensive verification experience in the regulated nuclear equipment industry. The discussion explains the role of regulators and how they can affect verification processes as well as interactions within the team. They also discuss the challenges involved and how innovation manifests in such an industry.

Jeff Montesano: Hi, everyone. Welcome to another edition of Thoughts on Verification. I’m pleased to have my colleague, Jeff Vance, here with me to discuss his experience working in regulated industries and how it impacts verification. Jeff, thanks for joining me.

Jeff Vance: Thanks. Happy to be here.

JM: So let’s talk a little bit about what you think are the primary differences between working in regulated industries, such as nuclear and military, versus unregulated industries, where you’re making commercial products that might be going into cell phones and things like that.

JV: Yes. My experience is mostly in the nuclear industry, working on safety critical systems for the automation of nuclear power plants. There are a lot of differences working in that domain compared to most non-regulated industries. The biggest difference is you have a regulator such as the Nuclear Regulatory Commission (NRC) who has to approve the work you’re doing. So there’s a huge change to priorities. There’s a change to the daily work that you do, the mindset of the people and how the work is done. Ultimately, it’s not enough just to design your product and catch all your bugs. You have to prove to a regulator that you designed the correct thing, that it does what it’s supposed to do, and that you followed the correct process.

JM: I see, I believe we’ve covered something like this before with the aerospace industry. So you said there’s a difference in priorities, can you give me an example of what types of priorities would be different?

JV: I think the biggest difference is that you must define a process and prove that you followed it. That’s how you prove that the design has no defects. So even if you designed the perfect product and the verification team found all the bugs, there will still be an audit. They’re going to challenge you, and you’re going to have to prove that everything you did is correct. The primary way to do this is to define a process that the regulator agrees is good and create a lot of documentation that demonstrates you followed it. If you can prove that you followed that process throughout the entire life cycle of the product, that demonstrates to an auditor that your design is correct and can be used.

Read the rest of this entry »

Extending the e Language with Anonymous Methods

July 13, 2015 by Paul Marriott

Many programming languages, such as Python, Perl, and Ruby, support anonymous methods, typically through classes or other constructs representing a block of code. These are useful as return values constructed by higher-order methods or as arguments passed to them.

The e language has code blocks in, for example, list pseudo-methods and macro definitions, but they are defined statically and cannot be referenced, unlike in the aforementioned languages. Using reflection, template structs, and define-as-computed macros, we implemented anonymous-method functionality in the e language, modeled after the corresponding Ruby feature.
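
[Editor’s note: to illustrate the limitation, here is a minimal e sketch of a built-in list pseudo-method. The code block passed to all() is bound where it is written, via the implicit it argument, and cannot be stored in a variable or handed to another method – this is the gap the anonymous-methods implementation fills. The names are invented for illustration:]

<'
extend sys {
    run() is also {
        var nums: list of int = {1; 2; 3; 4; 5};
        -- 'it % 2 == 0' is a statically bound code block with the
        -- implicit argument 'it'; it cannot be referenced or reused
        var evens: list of int = nums.all(it % 2 == 0);
        print evens;
    };
};
'>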

The implementation is licensed under Apache 2.0 and available in the vlab_util package.

The full article, written by our consultant Thorsten Dworzak, is published on the Cadence blog here:

http://community.cadence.com/cadence_blogs_8/b/fv/archive/2015/07/10/extending-the-e-language-with-anonymous-methods

See http://www.verilab.com/resources/other-downloads/ for download information.
