Archive for the ‘DAC 2007’ Category
Music. Dancing. Free food and drink. Did I mention drink? All in a room full of engineers…? As it turns out, Denali Night Fever provided an excellent opportunity for a veritable who’s who of EDA luminaries, and the rest of us just along for the ride, to relax after a couple of long days at DAC. The event was held at the On Broadway Event Center, just a few blocks away from the convention center.
The party itself was a blast, but the more interesting question is whether the same held true for DAC this year. According to Richard Goering, there were 5,135 registered attendees, 3,796 exhibitor attendees, and 400 “other” attendees, for a total of 9,331 people. These numbers are down significantly from last year’s DAC in San Francisco.
Richard’s take was that there wasn’t much exciting going on at DAC this year, but I would tend to disagree. All four of us from Verilab who attended the conference took in interesting product demos and sessions, and met up with people we otherwise would have had to travel far and wide to see. It also gave us a chance to catch up amongst ourselves, as there were Verilab attendees from both sides of the Atlantic.
Some of the info at the conference could have been gleaned from attendance at DVCon or DATE. The technical sessions at DVCon were consistently the most relevant to my role as a verification consultant, and its smaller size (710 attendees) made it a good “starter conference” to help kick off the season. DATE was good because it gave me the opportunity to catch up with current and former clients and colleagues of mine in Europe, and to get a better understanding of what the design and verification community in Europe is working on.
Is DAC still relevant? For me, the answer is yes. Your mileage may vary. If you’ve never been to any of the major conferences (a situation I found myself in before this year), you’re missing out. My horizons have broadened significantly over the last few months. I’ve got a much better appreciation for the state of the industry, what tools and methodologies are available, and who to call if I need a helping hand than I did back at the beginning of February.
After the keynote on Tuesday I had the opportunity to meet Soha Hassoun, an Associate Professor of Computer Science at Tufts University, while snapping a photo of Steven Levitan (DAC conference chair). Among other things, Soha is involved with a company called Carbon Design Systems. Now, as it turns out I’ve been bombarded with emails from Georgia Marszalek from ValleyPR about Carbon, but for some reason I never fully grasped the value of the company’s product after reading an email description. Based on the additional recommendation from Soha I decided to take a look.
Thursday morning I went by the Carbon booth and spoke with Elizabeth Abraham, VP, Consulting Services and Product Marketing. She gave me an overview of Carbon’s Virtual System Prototype (VSP) software. VSP converts “Verilog, VHDL, and mixed language RTL designs into an ultra-fast, cycle-accurate virtual prototype.” Basically, the RTL is converted into a high-level C software model which, according to Elizabeth, can run 10-100x faster than the original design. The other cool feature of Carbon’s product is the ability to debug hardware and software side by side, as bugs tracked down in the generated C code can be mapped back to the original hardware implementation.
I asked Elizabeth how VSP compares with solutions such as Cadence’s ISX, which can provide coverage metrics and constrained random testing for embedded software. Based on my understanding of the tools, it appears VSP is focused on verifying the full system hardware/software solution, whereas ISX is focused on testing the interface layer between the system software and hardware (i.e. not the entire software solution). The other difference is that VSP should dramatically speed up simulations, whereas ISX would not unless it was paired with a Palladium hardware acceleration box.
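To make the idea concrete, here is a hand-written sketch of what a cycle-accurate software model of a trivial piece of RTL looks like. This is Python rather than generated C, it has nothing to do with Carbon’s actual output or API, and the design is invented; it just illustrates why such models run fast: the design state lives in ordinary variables, and a single function call advances the whole design by one clock edge, with no event scheduler in sight.

```python
# Illustrative sketch only: a cycle-accurate software model of a
# hypothetical 2-bit counter with synchronous enable. Tools like
# Carbon VSP generate something conceptually similar from real
# Verilog/VHDL; none of these names are Carbon's.

class Counter2Bit:
    """Software model of:  always @(posedge clk) if (en) q <= q + 1;"""

    def __init__(self):
        self.q = 0  # the register's current state

    def clock(self, en):
        """Advance the model by one clock cycle (one posedge)."""
        if en:
            self.q = (self.q + 1) & 0x3  # 2-bit wrap-around
        return self.q

# Driving the model is one function call per cycle; no per-signal
# event scheduling is needed, which is where the speed-up comes from.
dut = Counter2Bit()
trace = [dut.clock(en) for en in (1, 1, 0, 1, 1)]
print(trace)  # [1, 2, 2, 3, 0]
```

Because the model is just software, hooking a debugger or an embedded-software stack onto it is straightforward, which is the hardware/software co-debug angle described above.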
I had my first brush with formal methods about 11 years ago when I started my PhD. I was asked to look at the Z language, which would let you write a specification that could be formally proven to be correct. The downside was that, at any given time, there would only be three people on the planet with large enough brains to use it. Part of the complexity of Z was down to the fact that English characters were not allowed (that would have been too easy) - only Greek symbols by the looks of things. I’m not sure how you were meant to type it into a text editor, and I didn’t pursue it far enough to find out.
Every day this week at DAC I’ve been involved in at least one discussion on VMM versus AVM. It’s getting really competitive now. There’s all this talk of standards, Open Source, maturity, and compliance. On top of that, things just don’t stay still long enough to form an opinion that lasts more than five minutes.
Tuesday I had the opportunity to attend the VMM Users’ Group luncheon. The highlight of the luncheon was a panel discussion moderated by Janick Bergeron, Chief Scientist at Synopsys. Before the panel got started, the folks from Synopsys had a few tidbits to share. According to Synopsys, the VMM is the most broadly adopted SystemVerilog library. They were also keen to point out that Synopsys had the highest percentage of reported users on Cooley’s DeepChip verification census.
Most verification engineers burn themselves at some point by disabling a checker and then forgetting about it. There are sensible reasons for doing this; think about it. You find an RTL bug on Friday, but it doesn’t get fixed immediately. You decide to comment out the checker so it doesn’t pollute the weekend’s regression run. The problem is that you come in on Monday morning and start debugging the new errors you have. The commented-out check gets forgotten.
Burned? I positively set myself on fire doing this on my first ever project. I spotted the commented-out check three days before code freeze. And guess what? It was masking a bug. Ouch.
I haven’t made the same mistake again. In fact, I go to excessive lengths to check that my testbench works correctly. I talk about one method in my book, where I create a special aspect that I load at the start of regressions to verify that the testbench works before I run all of the other simulations. Another method I use is known as error injection (or fault injection, bug injection, or mutation), where I’ll deliberately break the RTL and check that my testbench catches it.
The problem with this approach is that it can be manually intensive. Determining the best place to inject a bug, running an entire regression to see if it is caught, and then repeating until you are happy you’ve done enough (and really, how do you know?) is tough.
Not any more, though. I caught the Certess demo yesterday, and they seem to have solved the problem. I only saw a demo, but their solution looks pretty push-button. You load the design into their tool; it runs a regression to profile your tests and work out which faults should be caught by which tests, and then it injects the faults one at a time and runs the appropriate tests. If the tests don’t complain about errors, then you have a problem with your testbench.
As far as I know, this is the first time that we’ve been able to measure the quality of a verification environment. So all you verification engineers, IP providers, and outsourcing companies out there – be afraid. This thing will tell you what functionality your stimulus isn’t activating, what functionality it isn’t propagating, and what bugs you aren’t detecting. Your boss and customers can now find out how good a job you are really doing.
Over 50% of chip designs today have more than 20 clock domains, which puts CDC verification pretty high up on the priority list. At Verilab we have our own CDC workshop, which is split into a design portion (it’s better to get CDC design right in the first place) and a verification portion, which focuses on using SystemVerilog Assertions and dynamic simulation. This gets our clients hitting the ground running with CDC very quickly, using the tools they have at their disposal today. However, we’re always on the lookout for other cool CDC verification techniques. I got an update yesterday on 0-IN’s CDC verification capabilities, and they still look pretty good.
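For a flavour of the kind of hazard CDC design rules exist to prevent, here is a few lines of Python (an illustration of the underlying arithmetic only, not Verilab workshop material): a multi-bit binary counter crossing clock domains can be sampled mid-transition, yielding a value that never existed, whereas a gray-coded counter changes exactly one bit per increment, so any asynchronous sample is either the old value or the new one.

```python
# Why multi-bit CDC signals are commonly gray-coded: between binary
# counter values 3 (011) and 4 (100) all three bits change, so a
# receiving clock domain sampling mid-transition can capture garbage.
# A gray-coded counter changes exactly one bit per increment.

def to_gray(n):
    """Standard binary-to-gray conversion."""
    return n ^ (n >> 1)

def changed_bits(a, b):
    """Number of bit positions that differ between a and b."""
    return bin(a ^ b).count("1")

# Worst-case transition for a 3-bit binary counter: 3 -> 4.
assert changed_bits(3, 4) == 3                     # binary: 3 bits flip
assert changed_bits(to_gray(3), to_gray(4)) == 1   # gray: 1 bit flips

# Every increment of a 3-bit gray counter changes exactly one bit,
# so a sample taken during the transition is always a valid value.
assert all(changed_bits(to_gray(i), to_gray(i + 1)) == 1
           for i in range(7))
print("gray-coded crossing cannot be sampled mid-transition")
```

An assertion checking the single-bit-change property on a crossing signal is exactly the sort of SVA check the dynamic-simulation approach relies on; structural tools like 0-IN’s look for missing synchronizers and coding violations like this one statically.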
I was feeling a bit jet-lagged today at DAC – nothing to do with the wine the evening before – so I decided to visit the Oxygen Party Bar installed at the Mentor Graphics booth. The friendly “bar staff” were only too happy to advise me that a 10-minute shot of the 90 percent concentration of aroma-scented oxygen would give me “immediate relief”.