
Archive for the ‘Interview’ Category

DVCon 2017: “SV Jinxed Half My Career” Panel Preview

Tuesday, February 7th, 2017 by Alex Melikian

Verilab is proud to have senior consultant Jonathan Bromley host the “SystemVerilog Jinxed Half My Career” panel at DVCon 2017, on Wednesday March 1st. Jonathan continues to serve on the SystemVerilog IEEE committee and is the author of numerous papers, including the recently published “Slicing Through the UVM’s Red Tape”. We took a moment with Jonathan to preview what this panel will cover and what those planning or thinking of attending should expect.


The title is "SystemVerilog Jinxed Half My Career: Where do we go from here?", which signals this panel will focus on areas of improvement. What are the areas of frustration in SystemVerilog that you feel need improvement?

It would be easy to give a “where do I start?” response, and it’s not difficult to come up with a laundry list of desirable SystemVerilog improvements and nit-picky complaints. But this is DVCon, and our very knowledgeable and sophisticated audience deserves better. We have five extraordinarily experienced panelists and I hope we can venture beyond details of the languages and tools we have today, and think creatively about what we can and should hope for in the mid-term future. Many languages have been used successfully to create advanced testbenches - ‘e’, C++, Python, Vlang - but there’s no question that SystemVerilog remains dominant. Why is that? What sort of code will verification engineers be writing in five, ten years’ time?

Thoughts On Verification: Keeping Up With Specman (part 2 of 2)

Monday, February 22nd, 2016 by Alex Melikian

In Part 2, Alex and Thorsten continue discussing the latest developments with Specman and the ‘e’ language, along with practical use cases. They focus on coverage driven distribution and how anonymous methods can be applied. Part 1 of the conversation can be viewed here.


Alex Melikian: Okay. Moving on to another topic, let’s talk about something introduced at a recent conference covering Specman: the notion of coverage driven distribution. This is something that has been in the works for some time now. It’s not 100% complete yet, but it looks like Specman features supporting coverage driven distribution are becoming available piece by piece. Before we get into that, once again for the readers who are not familiar with it, can you explain the concepts behind coverage driven distribution?


Thorsten Dworzak: Yes. So the typical problem for coverage closure in the projects we usually work on as verification engineers is that you have this S curve of coverage completeness. You start slowly and then you easily ramp up your coverage numbers or your metrics to a high number like 80 or 90 percent.

And then in the upper part of the S curve it slows down because you have corner cases that are hard to reach, etc., etc. And it takes a lot of time to fill the gaps in the coverage over the last mile. So people at Specman have done some thinking about it. And of course one of the ideas that has been around in the community for some time is that you look at your coverage results and feed them back into the constraints solver.

But this is a different approach. Here you look at the actual coverage implementation and derive your constraints from it, or you let your actual coverage implementation guide your constraints. To give an example, you have an AXI bus and an AXI transaction comprising an address, a strobe value, a direction, and so on. And in your transaction-based coverage you have defined certain corner cases like hitting address zero, address 0xFFFF and so on. And whenever Specman is able to link this coverage group to the actual transaction (which is not easy, and I’ll come to that later), it can guide the constraints solver to create a higher probability for these corner cases in the stimulus.

Thoughts On Verification: Keeping Up With Specman (part 1 of 2)

Thursday, February 4th, 2016 by Alex Melikian

In this edition of “Thoughts On Verification”, Verilab consultant Alex Melikian interviews fellow consultant Thorsten Dworzak about recently released features of Specman and the ‘e’ language. With nearly 15 years of verification experience, Thorsten has worked extensively with Specman and ‘e’, and regularly participates in conferences covering related subjects and tools.

In Part 1, Alex goes over new features of Specman as Thorsten weighs in on which he feels are the most practical based on his experience. In addition, they discuss in detail the language and tool’s support for employing a “Test Driven Development” methodology.

Alex Melikian: Hi everyone, once again, Alex Melikian here back for another edition of Thoughts on Verification. We’ve covered many topics on these blogs but have yet to do one focusing on Specman and the ‘e’ language. To do so, I’m very pleased to have a long-time Verilab colleague of mine, Thorsten Dworzak, with me. Like me, Thorsten has been in the verification business for some time now, and is one of the most experienced users of Specman I personally know. Actually, Thorsten, I should let you introduce yourself to our readers. Talk about how you got into verification, what your background is and how long you’ve been working with Specman and ‘e’.


Thorsten Dworzak: Yes, so first of all Alex, thank you for this opportunity. I’m a big fan of your series and okay, let’s dive right into it. I’ve been doing design and verification of ASICs since 1997 in the industrial, embedded, consumer, and automotive industries - so almost all there is.

And I’ve always done both, design and verification, say 50 percent of each, and I started using Specman around 2000. That was even before they had a reuse methodology, and they didn’t even have things like sequences, drivers, and monitors. Later on I was still active in both domains, but then I saw that the design domain was getting less exciting: basically plugging IPs together, somebody writing a bit of glue logic, and the bulk of it being generated by in-house or commercial tools.

So I decided to move to verification full time and then I had the great opportunity to join Verilab in 2010.


AM: Of course your scope of knowledge in verification extends to areas outside of Specman. But since you’ve been working with it since the year 2000, I’m happy to have a chance to cover subjects focusing on it with you. That year is a memorable one for me, as I started working with Specman around that time, and I feel that was the era when it and other constrained-random, coverage-driven verification tools really took off.

It’s been a couple of years since I last worked with Specman. However, you’ve been following it very closely. What are some of the recent developments in Specman that you think users of this tool and the ‘e’ language should be paying attention to?

Thoughts on Verification: Doing Our Work in Regulated Industries

Tuesday, August 18th, 2015 by Alex Melikian

In this edition of “Thoughts on Verification”, Verilab consultant Jeff Montesano interviews fellow consultant Jeff Vance on verification in regulated industries. Jeff Vance has extensive verification experience in the regulated nuclear equipment industry. The discussion explains the role of regulators and how they can affect verification processes, as well as interactions within the team. They also discuss the challenges involved and how innovation manifests in such an industry.

Jeff Montesano: Hi, everyone. Welcome to another edition of Thoughts on Verification. I’m pleased to have my colleague, Jeff Vance, here with me to discuss his experience working in regulated industries and how it impacts verification. Jeff, thanks for joining me.

Jeff Vance: Thanks. Happy to be here.

JM: So let’s talk a little bit about what you think are the primary differences between working in regulated industries, such as nuclear and military, versus unregulated industries, where you’re making commercial products that might be going into cell phones and things like that.

JV: Yes. My experience is mostly in the nuclear industry, working on safety critical systems for the automation of nuclear power plants. There are a lot of differences working in that domain compared to most non-regulated industries. The biggest difference is you have a regulator such as the Nuclear Regulatory Commission (NRC) who has to approve the work you’re doing. So there’s a huge change to priorities. There’s a change to the daily work that you do, the mindset of the people and how the work is done. Ultimately, it’s not enough just to design your product and catch all your bugs. You have to prove to a regulator that you designed the correct thing, that it does what it’s supposed to do, and that you followed the correct process.

JM: I see. I believe we’ve covered something like this before with the aerospace industry. So you said there’s a difference in priorities; can you give me an example of what types of priorities would be different?

JV: I think the biggest difference is that you must define a process and prove that you followed it. That’s how you prove that the design has no defects. So even if you designed the perfect product and the verification team found all the bugs, there will still be an audit. They’re going to challenge you, and you’re going to have to prove that everything you did is correct. The primary way to do this is to define a process that the regulator agrees is good and create a lot of documentation that demonstrates you followed it. If you can prove that you followed that process throughout the entire life cycle of the product, that demonstrates to an auditor that your design is correct and can be used.


Thoughts on Verification: svlib - Including the Batteries for SystemVerilog (part 2)

Thursday, June 18th, 2015 by Alex Melikian

In part 2, Alex and Jonathan continue covering features in svlib. In addition, they cover how someone in the verification community can quickly ramp up, request support for problems, and get involved with svlib. Part 1 can be viewed here.

Alex Melikian: Another feature set offered by svlib, in addition to what we’ve talked about so far, is its functions related to the operating system. Once again, standard SystemVerilog already offers some functionality for interacting with the OS, but svlib takes it a level further. Can you tell us what svlib can do in this respect that SystemVerilog does not?

Jonathan Bromley: I can’t imagine a general-purpose programming language that doesn’t have library or built-in features for figuring out the time of day, discovering the values of environment variables, exploring the filesystem’s directory structure and so forth. We should expect that to be available, without fuss, in SystemVerilog - but, frustratingly, it is not. That’s precisely what svlib’s OS interaction features are intended to offer. There’s a collection of related sets of features - they’re distinct, but in practice you’ll probably use them together.

First, there’s a set of tools (implemented in the Pathname class) that allow you to manipulate Linux file and directory names in a convenient and robust way. If you try to do that as a simple string-processing task, you’ll typically get some nasty surprises with things like doubled-up path separators (slashes); the Pathname class copes with all of that. Next, there’s a slew of functions for inquiring about the existence and state of files, mostly based on the “stat” system call that you may be familiar with. You can check whether a file exists, decide whether it’s a directory or a softlink, find out whether you have write permission on it, and examine its creation and modification datestamps. You can also find what files exist at any given location, using the same “glob” syntax (star and question-mark wildcards) that’s familiar to anyone who has used the “ls” command.
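
As a rough sketch of the kind of testbench code this enables: the package, class, and function names below are assumptions paraphrased from Jonathan's description, not the verified svlib API, so the library's documentation should be consulted for the exact calls.

    // Illustrative sketch only: package, class, and function names are assumed,
    // paraphrased from the description above rather than taken from the svlib docs.
    import svlib_pkg::*;                          // assumed package name

    module tb_os_sketch;
      initial begin
        string logs[$];

        // Robust path handling instead of ad-hoc string concatenation
        // (the Pathname class is mentioned above; this usage is illustrative).
        Pathname results_dir = Pathname::create("/home/user//sim/results");  // assumed constructor

        // stat-style inquiries about a file (function names assumed)
        if (file_exists(results_dir.str()) && file_is_directory(results_dir.str()))
          $display("results dir present, last modified %0d", file_mtime(results_dir.str()));

        // Shell-style glob listing, like "ls *.log" (function name assumed)
        logs = file_glob("/home/user/sim/results/*.log");
        foreach (logs[i]) $display("found log file: %s", logs[i]);
      end
    endmodule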

Thoughts on Verification: svlib - Including the Batteries for SystemVerilog (part 1)

Wednesday, June 10th, 2015 by Alex Melikian

In this edition of “Thoughts on Verification”, Verilab consultant Alex Melikian interviews colleague Jonathan Bromley, lead author of the svlib library. svlib is a free open source utility functions library for SystemVerilog, available on Verilab’s website.

In part 1, Alex and Jonathan begin the discussion by covering why svlib was created, features it offers, as well as some of the internal details of this open source library.

Alex Melikian: Hi Jonathan! It’s great to have you with us. I believe you’re our first ‘repeat’ interviewee on these ‘Thoughts on Verification’ series, so I should really be saying ‘welcome back’.

Jonathan Bromley: Thanks, Alex! I’m sure you could have found someone more exciting from among our amazing colleagues, but it’s a pleasure to be back in the hot seat.

AM: We’re here to discuss svlib, an ongoing project you’ve been working on for some time now. Let’s begin by introducing it for the unfamiliar: how would you best describe svlib?

JB: It’s partly a “pet project” that I’ve been thinking about for some years, and partly a response to genuine needs that I’ve encountered - not only in my own verification work with SystemVerilog, but also when observing what our clients ask for, and what I heard from students back when I was delivering training classes before joining Verilab. It’s a package of general-purpose library functions covering string manipulation, regular-expression processing, file access and operating-system interface stuff such as wall-clock time and directory exploration, and a bunch of other utility functions. Almost all are things that - frustratingly - aren’t available in standard out-of-the-box SystemVerilog, but exist in just about any general-purpose language. With svlib added to your toolkit, SystemVerilog starts to look a lot more like a competent all-round programming language.

Most of the non-trivial functionality in svlib is implemented using SystemVerilog’s fantastic C-language interfaces, the DPI and VPI. For many years, folk who are expert in both SystemVerilog and C/C++ have used those interfaces to implement their own additional functionality. The contribution of svlib, I hope, is to make a wide range of useful new features freely accessible to anyone who’s familiar with SystemVerilog. No C or DPI expertise is needed to use it.


Thoughts on Verification: The Verification Mindset (Part 2 of 2)

Monday, October 20th, 2014 by Alex Melikian

In part 2, Verilab consultants Alex Melikian and Jeff Montesano continue their discussion on the topics covered in the “Verification Mind Games” paper Jeff co-authored and published at DVCon 2014. Part 1 can be viewed here.

Alex Melikian: So as your paper explains, and as we’ve determined in this conversation, one part of the verification mindset is to determine ‘what’ has to be verified. However, there’s something else, something that involves taking a step back and asking: “Can a particular scenario happen with the DUT?” In your paper, you give the example of a design that implements the drawing of shapes, and a test scenario where this design is given the task of drawing a circle with a radius of value zero. Describe this particular situation for our readers.

Jeff Montesano: Yeah, this was an interesting case. It was a design that was responsible for drawing circles, and took in an input which was the radius. The designers at the time specified the range of radii that this design needed to handle. When we asked the question, “well, what happens if you input a radius of zero into it?”, the designers came back and said, “It’s invalid. You don’t need to test it. We didn’t design the thing to handle it, and there’s no point in testing it.” And while some verifiers might stop there and say, “Well, if the designer thinks it’s not valid, we shouldn’t test it”, we decided to go ahead with this case anyway.

In fact, we were able to show that in the broader system, there were circles of radius zero being generated all the time, and so the design genuinely had to handle it. When we finally ran a test with zero radius, it resulted in the design hanging. So as it turned out, it was very important that we brought this up, in spite of what the designers had suggested to us.

AM: This looks like a case where the verification engineer has to be somewhat of an independent thinker, and consider all the possibilities that can be applied to the design, as opposed to following only what the design spec spells out as handled.

On the other hand, I can understand the counterargument to this. As always, time is limited, and the verification engineer must judge the verification requirements carefully. For example, they have to ask themselves: “Is it worth it to verify something that is invalid? Are we doing a case of garbage in, garbage out? Or is there a specific requirement stating that a specific condition cannot occur?” This all sounds easy, but it’s really not.

Oftentimes, the verification engineer is the first one to try out all the possibilities or combinations of how a design feature is used, especially when constrained-random stimulus is applied. This can lead to particular cases of stimulus, some of which are outside the intention of how the feature is supposed to be used. Then again, it may lead to a corner case that is genuinely valid but wasn’t accounted for in the design.

So there will be situations where the verification engineer may be in uncharted territory. It can be a fine line between an invalid case and a corner case nobody thought of. However, I think going back to that initial question will really help: “What am I verifying here?” or, in other words, “Am I doing something that is of value to verify the design?”

JM: Totally agree with everything you’ve said there. There is a fine line, and it takes experience to determine what is useful to go after and what is a waste of time. I can recall a client that I worked with in the past who apparently had been burned by some type of internal issue in post-silicon, and so they wanted verification of the parity between interfaces of the different RTL blocks. Now, if an ASIC has issues that cause it to be unable to communicate properly within itself, it would never be able to pass the scan test - it would just be sorted into the garbage bin. And so a verifier should never try to verify parity between sub-blocks, because it’s a waste of time. It turned out in this case we didn’t have a choice. But as expected, we never found a bug there.

AM: Moving on to another topic in your paper, you mention that testing a design in creative ways is a core competency of the verification mind. What are some of the creative ways that you’ve done verification?

JM: I can think of something that came up recently. I was verifying an interrupt handler, and it was a circuit of the ‘clear on write one’ type. I had a test running where I would generate the interrupt, read it, write a one, read it back, make sure it got cleared to zero, and I thought I was done. However, the question arose: “What would happen if you wrote a zero to those bits?” Now, at that point, I could’ve written a brand new test which would generate the interrupt, write a zero, give it a brand new name, do all that stuff.

But I thought of a more creative way to do it. What I was able to do is take the existing test class, make an extension of it, and add a virtual function that was defined to be empty in the base class [the original test] but defined to write zeros to all the interrupt registers in the derived class [the new test]. The original test was then modified to call this virtual function right after having provoked all of the interrupts, and the rest of the base test would proceed exactly as before.

So all the checks from the initial test would fail if any of those writes with zeros had had any effect. And so with very few lines of code, I was able to leverage what I’d already done, and use virtual functions in a creative way.
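
A minimal sketch of the pattern Jeff describes, assuming hypothetical class and task names (the real test was UVM-based and drove an actual DUT; this only shows the virtual hook and the override):

    // Minimal sketch of the virtual-function hook described above.
    // Class and task names are hypothetical; the real test drove an actual DUT.
    class base_irq_test;
      // Existing flow: provoke interrupts, then run the usual write-one-to-clear checks.
      virtual task run();
        provoke_interrupts();
        post_irq_hook();              // deliberately empty in the base test
        check_and_clear_interrupts();
      endtask

      // Hook for derived tests to inject extra stimulus between stimulus and checks.
      virtual task post_irq_hook();
      endtask

      virtual task provoke_interrupts();
        $display("stimulate all interrupt sources (placeholder)");
      endtask

      virtual task check_and_clear_interrupts();
        $display("read, write one to clear, read back, expect zero (placeholder)");
      endtask
    endclass

    // The "new" test overrides only the hook; every existing check is reused.
    class write_zero_irq_test extends base_irq_test;
      virtual task post_irq_hook();
        $display("write zeros to all interrupt status registers (placeholder)");
        // If these writes had any effect, the inherited checks would now fail.
      endtask
    endclass

    module tb_hook_demo;
      initial begin
        write_zero_irq_test t = new();
        t.run();
      end
    endmodule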


AM: So you’re not just creative in what you were testing, but also creative in how to reuse the checks and the tests.


JM: You bet. Sure.


AM: So, following this theme of creativity, another area where I think creativity is important is “debug-ability”. What I mean by “debug-ability” is a measure of how easy it is to debug something using the test bench or VIP you’ve created. Some examples I’ve personally seen are the generation of HTML files to help visualize information or data processed in a simulation, making it much easier for verification engineers or designers to debug something. Developing tools like this becomes very useful in the case of designs that implement serial data protocols.

I think it should be mentioned that making debugging easier is part of our job of verifying. You touch on this in your paper and state that prioritizing ‘debug-ability’ is important, and that sufficient time should be allocated to develop tools like the one I mentioned. What about you? What have you seen?


JM: Again, I totally agree. Designers don’t really need to think about ‘debug-ability’ in their day-to-day work. They have other things to think about, right? They have to think about making their design meet the specification with sufficient performance, and with no bugs in it. Whereas when you’re writing code to do verification, debug-ability is right up there.

So one example of this is if you have bidirectional buses on a VIP. Now, if you were to implement a single-bit bidirectional bus in your VIP, you could create an interface that has a single bit, and you could declare it as an “inout” in SystemVerilog, for example, and you’d be able to implement the protocol correctly. However, at the test bench level, if you had multiple instances of these, or one or more designs under test communicating with your VIP, you would never be able to tell who was driving and who was receiving, because your VIP only has one signal, and you’d probably have to resort to print statements at that point.

A better way to do it is to actually split out the bidirectional bus to a driver and a receiver at the VIP level, and that way, you can have visibility on whether the VIP is driving or receiving. In the paper, I give some code examples of how that can be done.
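
A rough sketch of that split (not the code from the paper; signal names are made up): the VIP’s interface exposes separate drive-value, drive-enable and receive signals, and the tri-state resolution happens in one place at the testbench top, so waveforms show at a glance which side is driving.

    // Rough sketch, not the paper's code; signal names are made up.

    // Hard to debug: one resolved inout net, so waveforms can't show who drives it.
    interface pin_if_inout;
      wire sda;
    endinterface

    // Easier to debug: direction is explicit at the VIP boundary.
    interface pin_if_split;
      logic sda_out;   // value the VIP wants to drive
      logic sda_oe;    // 1 when the VIP is actively driving
      logic sda_in;    // value observed on the bus
    endinterface

    // The tri-state behaviour is recreated once, at the testbench top level.
    module tb_pad_model (pin_if_split vip);
      wire sda_pad;
      assign sda_pad    = vip.sda_oe ? vip.sda_out : 1'bz;
      assign vip.sda_in = sda_pad;   // whatever appears on the pad is visible to the VIP
    endmodule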


AM: That’s a good tip, tri-state buses can be tricky at times. One last topic I’d like to touch upon from your paper is the statement that the verification mindset should focus on coverage, not test cases. Can you elaborate on that?


JM: Sure. So we now have these amazing tools like constrained-random verification. But it’s hard for us sometimes to break away from the tradition of writing a lot of tests. It turns out that the more tests we write, the less we’re making use of the power of these tools. Ideally, you would have one test case that randomizes a configuration object, you’d have a whole lot of self-checking in your environment, and you’d just rerun that same test case as many times as you can. Thousands of times. And through that mechanism, you’d turn up a whole lot of bugs and make full use of the tools.


AM: It sounds like you’re talking about an overuse of directed test cases instead of exploiting the constrained-random nature. I have seen this too; usually at the start of a project there’s a lot of pressure to generate test cases that are simple and straightforward … and I would have to say that it would be normal to expect this, because the initial version of the design only has a limited feature set.

However, it’s really important for verification engineers to keep in mind the ‘law of diminishing returns’, if I can borrow the term from the field of economics. What I mean by this is that there’s a point where using directed test cases becomes decreasingly efficient during the course of the project, because these directed test cases have a decreasing chance of finding bugs. Eventually, the verification team has to let go of the directed approach, make the jump to the constrained-random approach and go on from there. I think we agree that it’s the only way to efficiently find undiscovered bugs.


JM: Yeah. Just to add to the topic of coverage – there’s an important point that is sometimes missed, which is that coverage and checks need to be almost married together. Reason being, you don’t do checks without coverage, and you don’t do coverage without checks. Why? Because if you do coverage without checks you might think you hit something, but you weren’t even checking at the time. It’s a false positive, and is in fact very dangerous. The opposite is if you were to do checks without coverage. In that case, you’re randomizing things and doing a bunch of checks, but you don’t know which cases actually occurred, especially in the type of “one test” approach that I’m describing here.
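
A hedged sketch of that pairing (names invented): the covergroup lives in the checker and is sampled only when a comparison actually runs, so in the one-test, many-seeds approach the coverage report lists exactly the cases that were both exercised and checked.

    // Illustrative sketch: names are invented. Coverage is sampled only where the
    // check happens, so a covered bin always implies the check was performed.
    class pkt_checker;
      bit [7:0] len;
      bit       is_ecc;

      covergroup checked_pkts_cg;
        cp_len : coverpoint len    { bins small = {[1:16]}; bins large = {[17:255]}; }
        cp_ecc : coverpoint is_ecc;
      endgroup

      function new();
        checked_pkts_cg = new();
      endfunction

      // Check and coverage are "married": sample() is reached only via the compare.
      function void check_packet(bit [7:0] exp_len, bit [7:0] act_len, bit ecc);
        len    = act_len;
        is_ecc = ecc;
        if (exp_len !== act_len)
          $error("length mismatch: expected %0d, got %0d", exp_len, act_len);
        checked_pkts_cg.sample();   // only count what was actually checked
      endfunction
    endclass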


AM: Definitely. When constrained-random, coverage-driven verification is done, both the constrained-random and the coverage parts have to be developed together. Coverage is the only indicator you can use to convince someone not familiar with constrained-random that a situation is being exercised, and therefore that there’s no need to write a directed test. It definitely pays dividends to do things the constrained-random way and collect coverage as soon as you can, as opposed to trying to hit each and every situation one at a time.

I think that’s a good note to end on. Jeff, thanks again for joining me on this edition of ‘Thoughts on Verification’.


JM: Thank you, Alex. Take care.

Thoughts on Verification: The Verification Mindset (Part 1 of 2)

Tuesday, October 7th, 2014 by Alex Melikian

In this edition of ‘Thoughts on Verification’, Verilab consultants Alex Melikian and Jeff Montesano explore the ideas behind the “Verification Mind Games” paper Jeff co-authored and published during the 2014 DVCon conference. Jeff Montesano migrated to verification after many years involved in ASIC design. The inspiration behind this paper comes from his exposure to the diverging philosophies and practices involved in ASIC/FPGA verification vis-à-vis design.

In part 1, Jeff and Alex discuss the main topics of the paper as well as Jeff’s experiences of transitioning from design to verification.


Alex Melikian: Hi everyone, welcome to another edition of ‘Thoughts on Verification’. I’m pleased to have my colleague Jeff Montesano here with me to discuss the paper he co-authored and published at this year’s DVCon entitled “Verification Mind Games: How to Think Like a Verifier”. Jeff, thanks for joining me on this edition of “Thoughts on Verification”.


Jeff Montesano: Thank you, Alex. Good to be here.


AM: I’m glad we have the chance to talk about this paper and explore the ideas behind it more deeply. For our readers who are not familiar with it, I highly recommend reading it, as the topic primarily focuses on the mindset of a verification engineer. At first glance, the topic may seem more suited for novices of verification, and to a certain extent you could argue that position. However, I have seen some of the most experienced and skilled verification engineers, including myself, sometimes lose track of the verification mindset and realize our mistakes only later on in a project. I think a good way to start off is to bring up one of the initial statements in your paper, where you mention that when implementing verification tasks, there can be too much focus on ‘how’ something should be verified rather than ‘what’ should be verified. Can you elaborate on that statement?


JM: Sure. I think we’ve come to a point in the verification world where there’s a huge amount of emphasis on tools and methodologies. A good example of that is if you ever look at any job posting, pretty much all that employers are going to ask is “Do you know UVM? Do you know SystemVerilog? Do you know some C? What simulator tools do you know?” If you can check off all those boxes then you can get the interview, and you might even get the job. However, knowing those things is pretty much just the first step. It’s what you do with those tools, and furthermore what you would do in the absence of those tools, that really defines verification excellence, in my opinion.


AM: That’s an interesting comment that will probably raise some eyebrows. What do you mean by that?


JM: What I mean is that there are good practices of verification that transcend the use of any verification tool or language. For example, there’s obviously the basic things like doing good verification planning and knowing how to write a good verification plan. That’s part of the ‘what’ question you mentioned earlier. But there are things that go deeper. You can be confronted with a design that uses a clock-data recovery scheme; I bring up this example in the paper. I’d say a huge percentage of even experienced verification engineers would take a less-than-ideal approach when building a verification component to interact with this design. They would implement a clock-data recovery algorithm in the verification component, whereas you could verify the design to the same extent, using a lot less effort and compute resources, by just instantiating a simple PLL circuit. In other words, we’re avoiding the unnecessary complication of modeling clock-data recovery, while still achieving the goal of verifying that the design is operating within the correct parameters.

This is an example of how it doesn’t matter whether you’re using UVM, or SystemVerilog, or constrained-random. If someone takes the approach of doing something more complex, like a clock-data recovery model instead of a simple PLL, they will miss the boat, because they could have accomplished their task much faster and just as effectively.
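
A very rough illustration of the simpler approach (this is not the paper’s code, and it assumes the testbench knows the nominal bit period): a behavioral clock generator phase-aligned to the incoming data can stand in for a full clock-data recovery model.

    // Rough behavioral stand-in for a recovered clock; not the paper's code.
    // Assumes the nominal bit period is known to the testbench.
    module simple_recovered_clk #(parameter realtime BIT_PERIOD = 10ns)
      (input  logic serial_in,
       output logic recovered_clk);

      initial begin
        recovered_clk = 1'b0;
        @(serial_in);                              // phase-align to the first data transition
        forever #(BIT_PERIOD / 2.0) recovered_clk = ~recovered_clk;
      end
    endmodule

    // The VIP's monitor can then sample serial_in on recovered_clk edges instead of
    // implementing a full clock-data recovery algorithm.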


AM: Ah! I see your point. In this case, the ‘what’ we’re supposed to do as verification engineers is to verify that the data is recoverable within the spec, as opposed to modeling the clock-data recovery scheme itself. So from your experience, what is the error you see most often where someone has lost track of the mindset of a verification engineer, or in other words, is not thinking the way a verification engineer should?


JM: So just to give you and our readers a bit of background, I came from the design world. I was an ASIC designer for a number of years before doing verification. And something I see a lot, and something I’ve even been guilty of, is snooping on signals from within the design under test. Snooping on design signals for reference can definitely speed things up, because your verification environment would instantly be in perfect sync with the design. You can figure out what state it’s in, you can figure out what it’s doing, and it will remain synchronized with it.

However, you run the risk of missing some very important bugs, the most important ones being those associated with a design’s clock. If you’re directly using a design’s internal clock as a reference, you’re completely blind to any clocking issues. Even if the design clocking goes out of spec or has glitches, if the verification component snoops the DUT’s clock signal instead of independently verifying it, it will erroneously follow that clock and sample the data along with it. This is especially true in the case of RTL simulations.


AM: Hence the danger would be that the verification component will fail at its fundamental task and never flag the erroneous clock for running at the wrong rate or having glitches.


JM: That’s right. And so a guideline for this is that if you are forced to use internal DUT signals as a reference, you must apply independent checks on that signal. It cannot be assumed the signal is correct, thus it would be a mistake to believe it can be relied upon.
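
As one small, hedged example of such an independent check (the signal name and period limits are invented): a monitor that measures the reference clock’s period and flags anything outside the specified range, rather than silently following it.

    // Hedged sketch: signal name and period limits are invented for illustration.
    // If a DUT-internal clock must be used as a reference, check it independently.
    module clk_period_check #(parameter realtime T_MIN = 9.9ns,
                              parameter realtime T_MAX = 10.1ns)
      (input logic dut_clk);

      realtime last_edge = 0;

      always @(posedge dut_clk) begin
        realtime period;
        if (last_edge != 0) begin
          period = $realtime - last_edge;
          if (period < T_MIN || period > T_MAX)
            $error("dut_clk period %0t outside spec [%0t : %0t]", period, T_MIN, T_MAX);
        end
        last_edge = $realtime;
      end
    endmodule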


AM: Backing up a little in our conversation, I’m glad you mentioned you came from a design background. I don’t want to take too much credit here, but I think I was the one who convinced you to make the jump over from design into verification.


JM: That’s right. You were.


AM: I think a lot of our readers can relate to this. We’re continuing to see a need for verification engineers, so to fill this demand we’re seeing some ASIC/FPGA engineers who were originally on the design side make the jump into verification. Those who make this jump have to deal with a lot of new things that are much more software- or software-engineering-oriented. They can find themselves in a world that is quite foreign to them in different ways.

For example, when someone is making the switch from design to verification, their day-to-day coding tasks will no longer be dealing with modules or procedural statements, but rather objects and inheritance structures. Furthermore, they may find that things change at the methodology level as well. They may have to get familiarized with things like continuous integration, or even agile project management methodologies. I cannot emphasize enough how this transition from design to verification is not easy and the challenge should not be underestimated.

What are your thoughts in regards to someone making that transition?


JM: Well, one big thing that always comes up is revision control. Let me explain: revision control is something that’s been around for a while, and the tools have gotten better with time. However, there are certain aspects of revision control that are very under-appreciated by a lot of people in the design and verification community. One of these is branching and merging, which, granted, for a time was not easy to use. I can recall pulling my hair out with some tools because the merge wouldn’t work out the way you wanted it to, and so you’d be reluctant to create branches. However, some of the more modern revision control tools we use today, Git for example, make branching and merging operations the most natural thing you can do. This creates so many opportunities for organizing and co-developing your work in a cleaner, more seamless way.

Another thing is because verification languages have become object-oriented as you alluded to earlier, there are some aspects of a verification environment that are going to be very different than what you’d find in a Verilog module or VHDL entity. For example, you have the ability to employ polymorphism by using virtual functions. Now, I didn’t always know what use virtual functions had. I can recall at the start of my verification career, I wasn’t able at all to tell you what a virtual function did, whereas today I consider them an indispensable part of my toolbox.


AM: Well, I’m happy to see you’ve made the transition and are adjusting quite well to the verification world. I can jokingly say you’re a “success story”, but of course this story is not exclusive to you. Quite frankly, anybody who is keen on new challenges and wants to learn new things can replicate your feat. I think we can both agree that taking the time to understand the verification mindset would be a good place to start.

(End of Part 1)

Thoughts on Verification: Verification in the DO-254 Process (Part 2 of 2)

Thursday, September 19th, 2013 by Alex Melikian

In Part 2, Verilab consultants Paul Marriott and Alex Melikian continue their discussion on verification in a DO-254 quality assurance process. In this part of the conversation, they focus on the technologies, tools and methodologies that intersect DO-254 and verification. Part 1 can be viewed here.


Alex Melikian: Let’s move on to talk about some of the tools available in the industry tailored for a DO-254 process. A name that seems to come up regularly is DOORS. What is DOORS?


Paul Marriott: DOORS is a tool, which stands for “Dynamic Object-oriented Requirements System”. It’s a tool which allows you to capture requirements and other documents, which are required for a DO-254 process, and to put in the traceability between the different levels of documentation. So every design requirement has to trace through to a design implementation; it has to trace through into a validation plan; it has to trace through into a test case. You have to describe a procedure, what you’re going to do, and what the acceptance criteria are. This ensures that every requirement in the design is actually implemented and validated.

And DOORS is a tool which allows you to manage these databases and provide the links between the different levels of documentation in a way that makes it easy to see areas where you might have missed something.


AM: It appears DOORS is the stalwart tool for the DO-254 certified process. So how do some of the verification tools out there interface with DOORS?


PM: DOORS will allow you to import and export via spreadsheets, and if you think of the way DOORS works, essentially it’s almost like a spreadsheet, because you have a table of requirements and then a link from each requirement to another document, which may be a validation plan or a test description. And, I mean, DO-254 itself says nothing about the tools that you have to use. You can achieve DO-254 certification with pencil and paper. As long as the process is repeatable and traceable and is described and audited properly, then you can do it. But of course, automation makes it much more efficient and much easier.

DOORS is not really designed to interface directly with verification tools. With modern verification, we’re using assertion-based and coverage-driven verification, and you can actually implement the validation of many of the requirements with your assertions. And so, you want to ensure that a particular assertion is tied back to a particular requirement. Therefore, you may be able to export the set of requirements from DOORS into a spreadsheet and import that into another tool, which then ties into your coverage plan, which is then linked into your simulator’s verification manager.


AM: There appears to be a lot of strong industry support behind it. Some of the major EDA tool vendors have features that interface directly to a DOORS tool, so it’s good to know that you’re not stuck with one specific vendor or tool provider.


PM: Yes, exactly. People think it’s the tools themselves which are certified, but it’s not. It’s the overall process which uses the tools. So DO-254 itself doesn’t mandate any particular tools. Of course, certain tools can make it easier to achieve the goals of DO-254, but it doesn’t mandate them.


AM: Okay. Now, the DO-254 process goes back a long way. Earlier (in Part 1), we mentioned we would talk about some of the modern verification tools and processes and how they relate to this subject. So how do things like coverage-driven testbenches or constrained random generation fit into a DO-254 process?


PM: This is one of the challenges with the DO-254 process. Because everything is requirements-based, the temptation is to write one test per requirement. And so, this leads to a very directed testing methodology, which fits in well with the traceability because you can trace one test back to one requirement. But with a modern verification process, that tends not to be the case. We tend to have fewer test cases and put in functional coverage and assertions to cover a lot of functionality per test run. DO-254 is fine with this, as long as you can trace those assertions and functional coverage points back to actual requirements.

And the traceability tools allow you to ensure that every requirement does in fact trace down to a cover point, a test result or an assertion point.
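
One hedged illustration of what that can look like in the testbench itself (the requirement ID, signals, and timing are invented): assertions and cover properties labelled with the requirement they validate, so their results can be exported and traced back in the requirements database.

    // Hedged sketch: the requirement ID, signals, and timing below are invented.
    // Labelling each assertion and cover point with its requirement keeps the
    // traceability link from simulation results back to the requirements database.
    module req_trace_checks (input logic clk, rst_n, req, gnt);

      // REQ_123: every request shall be granted within 4 clock cycles.
      property p_req_123_grant_latency;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      REQ_123_grant_latency_chk : assert property (p_req_123_grant_latency)
        else $error("REQ_123 violated: request not granted within 4 cycles");

      // Matching functional-coverage point, traceable to the same requirement.
      REQ_123_grant_latency_cov : cover property
        (@(posedge clk) disable iff (!rst_n) req ##[1:4] gnt);
    endmodule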


AM: That’s good to know, especially for those who have been taking more of a directed-test-suite approach to achieve their objectives; they can now be persuaded that modern verification paradigms and tools are still compatible with DO-254. There are other dynamics involving the management of a verification project, like Agile. How does that fit into the DO-254 process?


PM: That’s a good question. Process and project management are not necessarily the same thing. The process describes things that have to be done to get you from design specification to verified implementation. Project management is the steps that you take along the way to ensure that the work gets done. Now, with DO-254 having various stages of involvement in terms of audits, it does impose a somewhat rigid structure on the order that things are done. But there’s nothing to say that along the steps of meeting those milestones that you can’t use an Agile approach.

At the end of the day, with any kind of project management, you still have to cover all of the work that has to be done to reach a certain milestone, whether it’s agile or whether it’s a standard waterfall approach or any other project management approach. And I’ve worked with companies doing DO-254 certified processes that are quite happy using a Kanban style of Agile project management to achieve the goals of the overall process.


AM: I see, thanks for clearing that up for me. So let’s set the record straight for our readers: Agile deals with project management; whereas, DO-254 is a process flow.


PM: Exactly.


AM: So does DO-254 only affect firms involved in aerospace or mission-critical products, or are there firms outside of these industries that can also adopt it?


PM: DO-254, by definition, is purely for certification of complex electronic hardware for civilian aerospace. That’s its definition. That said, it doesn’t mean that the DO-254 style of process can’t be used for other mission-critical products, and there are groups working – particularly in the automotive area – on similar processes to DO-254 to achieve the high process assurance that’s required. If you look at some of the major automotive recalls of the past few years, some of the costs of the recalls have been tens of billions of dollars because there have been bugs found in the software or the hardware, which caused some problems.

I mean, the aim with DO-254 is to ensure that when you’re designing a Level-A criticality system, it will not fail and people don’t get killed. That said, you can still have a 100 percent certified Level-A system which can still fail because somebody didn’t think of something in the requirements. So the critical aspect of any high-reliability, high-assurance process is the people involved ensuring that they write good requirements in the first place. And that’s really the hardest part of any mission-critical process.

And other mission-critical areas can definitely learn from the DO-254 experience. There’s also DO-178B, which is used for the software aspects of civilian aerospace, which also has the same criticality levels as DO-254 because now with systems becoming more complex from a software point of view, you still want to have the same assurance that the software is reliable and safe.


AM: That’s a good point you made about noncritical systems adopting the approach used to build critical system designs. We can all agree that though your car may have an airbag, you’d still like the brake pedal to work 100% of the time. So if a firm is not obligated to adopt a DO-254 certified process for whatever reason, what are some of the things that it can take away from the process just the same?


PM: There are several important aspects which I think can be taken away by everybody. The first is to have a good set of design requirements. If you have a good specification, then you have a much higher likelihood of success at producing a correctly working product which has been essentially verified to be reliable. So the process itself ensures that the requirements are written in a formal way. And the process also describes the traceability between the requirements, the verification plan and the implementation, and also ties in code coverage and other aspects to verify that what was implemented was in fact what was intended.

Code coverage tells you nothing about the quality of the code, but if you don’t have 100 percent code coverage, either you’ve missed verifying some requirement, functionality was implemented without being specified, or the testing wasn’t sufficient to cover all of the implementation. So it’s very important to look at code coverage, even though it tells you nothing about the code quality. And just having the formal audit process, so people sit down and actually review the work that’s been done, is also important.

Because DO-254 is high assurance, it really focuses on peer reviews of the requirements themselves, the verification of the requirements, and the validation, to ensure that there’s no single point of failure in the entire process. This is why, in fact, there are no DO-254 certified tools: you want to make sure that each tool is covered by another process or another tool, so that there is no single point of failure anywhere in the process.


AM: That’s interesting, but I can certainly understand the thinking behind that. On that note, I believe we’ll conclude this edition of “Thoughts on Verification”. I hope our readers not only feel safer when they board an airplane, but have also learned a few things about the DO-254 process. Thanks for your time Paul.


PM: Thank you Alex for some very interesting questions.

Thoughts on Verification: Verification in the DO-254 Process (Part 1 of 2)

Wednesday, September 11th, 2013 by Alex Melikian

After a summer hiatus, our regular “Thoughts on Verification” blog series is back. In this edition, Verilab consultant Paul Marriott is interviewed by his colleague Alex Melikian to discuss DO-254 processes and certification, along with how modern verification practices fit into them. Paul’s industry experience includes verification projects in the aerospace sector, necessitating strict assurance oversight such as DO-254.

In Part 1, Paul describes DO-254, his experience with projects involving it, and what implications it has on the verification process.


Alex Melikian: Hi everyone, welcome to another edition of Thoughts on Verification. I’m pleased to have Paul Marriott with me to talk about the DO-254 process and how it fits in with the verification processes. If you’re not familiar with it, not to worry, I wasn’t very familiar with it either before I started researching for this conversation. Hence, for the sake of our readers, let’s start off by asking you Paul – what is DO-254 in a nutshell?


Paul Marriott: Hi Alex, glad to be here. So: DO-254 in a nutshell is a process assurance flow for civilian aerospace design of complex electronic hardware, and complex electronic hardware basically means ASICs or complex FPGAs. It’s a design assurance process specifically for these aspects of electronic design for civilian aerospace. It is actually document number 254 published by the Radio Technical Commission for Aeronautics - a not-for-profit organization. In the United States, the Federal Aviation Administration requires that a DO-254 process be followed, and in Europe the equivalent standard is EUROCAE ED-80. The FAA publishes many guidelines through their CAST papers.


AM: Okay. So we here at Verilab deal, of course, with verification. Can you tell me how some of the aspects of DO-254 generally affect the ASIC or FPGA verification process?


PM: Yes. The most important thing to note about DO-254 is that there are different levels of certification, depending on the criticality of the components involved. So, for example, a Design Assurance Level-A certified system is a system which, if it fails, will cause death because of complete failure of the aircraft. Level-B is a level whose failure may cause death, but not necessarily; it’s still a critical component. And there’s a level below that called Level-C. So the idea of DO-254 is to have 100 percent confidence at Level-A that the design cannot fail and that every aspect of the design has been verified to 100 percent completeness.

This means that 100 percent code coverage has to be completed – all statements, all branches, all cases, all conditions have to be taken. And every line of design code has to be tied back to a particular requirement in the design specification and so this means that every requirement has to be implemented, and not only that, but every line of design code has to have some function. Therefore, you can’t have lines of design code which don’t have a tieback to a specific requirement.

Of course, the challenge with this is that it’s difficult to write the requirements in the first place to ensure that all of the functionality is actually described. You can write a system with a set of requirements, but you can still miss some critical functionality. Hence one of the key parts of a DO-254 process is verification of the requirements in the first place. Now, DO-254 uses the terms verification and validation rather differently than we do as verification engineers.

So the process that we would normally call verification is, in DO-254 terms, validation: we’re validating that the implementation of the requirements actually meets the specification, and that when we run a test, the performance of the design actually meets the intent of the requirement. Verification, on the other hand, is what you do to the requirements, to verify that they actually capture the intent of the design in the first place. And you have peer reviews to go through the requirements themselves and verify that they are a proper description of what the design is supposed to do.

So it’s a double process. It’s a process to verify that the requirement is correct in the first place, and then to validate that the implementation of those requirements in the design actually meets the specification that’s been written.


AM: Hmmm, interesting. Now, I do have more questions related to functional verification and other aspects of modern verification processes, but I’ll get to that later in the interview. First, let’s cover the work you’ve done with DO-254 compliant verification projects. Describe what it’s like, and how you feel it differs from a verification project that is not DO-254 compliant.


PM: The first difference I noticed was that the specifications were written in a much more rigorous way than the specifications we normally see in the electronic design area. Specifications for a DO-254 process tend to be written in a formal style. So a requirement would be written as “there shall be a certain function” - and the word shall means a requirement that has to be implemented and verified. Whereas, unfortunately, a lot of the time in verification we see specifications which are written in a looser way, and it’s left more to the designer’s interpretation as to what the specification is supposed to do.

In theory, in a DO-254 process, you are supposed to have completed all of the specifications before you even start on the implementation and verification – though in practice this doesn’t necessarily happen. But everything has to be complete to 100 percent, so all the design requirements have to be completed; they have to trace down into implementation. They also have to trace down into a verification and validation plan. And so, every function has to have some way of verifying and validating that it’s correct. Sometimes these are by simulations; sometimes these are by inspection, and sometimes these might be actually by running a test on the target hardware.

One other difference with a DO-254 process is that you have to have independence of verification. And so, you can’t trust, necessarily, the output of one tool to be correct. You have to have a way of cross verifying that that tool is producing the correct result. So if you have a design which is synthesized into gates, you might use an equivalence checker to verify that the output of that tool is still a correct description of the RTL that went in. So you always have this chain of checks to ensure that every step of the chain is covered in more than one way.


AM: Yes, I’m starting to see the patterns and mentality behind DO-254. As verification engineers, we often obsess about coverage and particularly requirement traceability. It’s good to see that for mission critical systems, DO-254 imposes a very rigorous process in these areas.

Another remark about DO-254 that somebody looking at it from the outside would say is that it appears to be heavily oriented around auditing. Would you say this characteristic would slow the verification process down, or would it improve the overall quality through the auditing process?


PM: The auditing process in DO-254 is an audit of the process itself. People often get confused as to what DO-254 means and talk about DO-254 certified tools. In general, it’s not the tools that are certified; it’s the overall process because the idea of the process is to have high assurance that the process itself will achieve a design that’s reliable. And so, it’s not any one point in the chain which is certified; it’s the overall process. The auditing part is to ensure that the plans which are created to describe the process are actually checked to make sure that they comply with the objectives of the DO-254 process itself.

And so, there are all these different stages of involvement called SOIs, meaning “Stages Of Involvement”, which audit the different steps along the process from the creation of the requirements to the final implementation in hardware. There’s a document called the Plan for Hardware Aspects of Certification. This is where you set down the actual process which describes how you plan to verify and certify that the design you’ve created is actually correct. This would be audited at the first SOI to ensure that this document correctly describes a process which will achieve the assurance levels required for the criticality level of the project.

Of course, with any auditing there is an overhead, but on the flip side, it makes people sit down and go through the code that they’ve created and the different aspects of the process to ensure that they’re correct. And oftentimes in design and verification, these kinds of sessions, where people go through and do code reviews and documentation reviews, don’t take place. So you might think you’re saving time by not doing these, but in the end, if you have to do re-spins because you’ve missed some critical aspects, then it actually takes more time.

So even though there’s an overhead in doing this extra process, if you come up with a process which is repeatable and gives you the assurance that you require, you can actually achieve the objectives with a minimal overhead in terms of time. Of course, any process that requires documentation is going to take more time than one that doesn’t.


AM: Always interesting how this proverbial balancing act in verification appears in different conversations: investing in the right amount of documentation and auditing to gain the maximum amount of quality.

(End of Part 1 of 2)
