
VMM Users Group

Tuesday I had the opportunity to attend the VMM Users’ Group luncheon. The highlight of the luncheon was a panel discussion moderated by Janick Bergeron, Chief Scientist at Synopsys. Before the panel got started, the folks from Synopsys had a few tidbits to share. According to Synopsys, the VMM is the most broadly adopted SystemVerilog library. They were also keen to point out that Synopsys had the highest percentage of reported users on Cooley’s DeepChip verification census.

Janick got the panel started by giving a short talk on VMM, followed by 15-minute talks by each of the panelists. Janick mentioned that teams usually go through three steps when adopting a new language and methodology:

  1. Learn new language syntax but continue doing things the way they’ve always been done.
  2. Learn new features of the language to enable enhanced productivity.
  3. Learn a new methodology to take advantage of the new language features.

According to Janick, VMM tries to jumpstart teams ramping up on SystemVerilog by helping them jump to step three, allowing them to bypass the trial and error typical of the first two phases. As someone currently ramping up on VMM, and having experience with other methodologies such as eRM from Cadence, I can say that there are significant benefits from going with a proven methodology when learning a new high level verification language such as SystemVerilog.

The first panelist to speak was Jonathan Lutz from General Dynamics. Jonathan and the next panelist, Samir Patel from Tarari, described their experiences building testbenches using SystemVerilog and the VMM. Both felt there were many benefits to using the VCS flow, including the level of support from Synopsys.

Subsequently, Ambar Sarkar from Paradigm Works discussed the results of an “unscientific poll” in which clients were asked a variety of questions about how they used SystemVerilog. For starters, Ambar wanted to know what type of clients were using VMM. The results?

  • 50% of Paradigm Works clients used VMM or RVM.
  • Some clients were advanced users, who had an easy time migrating to SV and the VMM.
  • Another group of clients had moderate verification background but were new to OOP.
  • The final group was learning from scratch, having previously built testbenches only in Verilog or VHDL. These clients may have also relied on viewing waveforms to ensure tests were passing.

Next, Ambar wanted to know how long it took to build a testbench. Typically, a 1-2 week setup period was required to cover basic testbench functionality. Of course, significant additional time was required to enable more than the most basic functionality. Ambar also asked where most design bugs were found. As would be expected, the last 20% of bugs took 80% of the total effort to find.

Dave Deptula from TI Houston was up next, and shared his experiences building testbenches for two small (5k gate) designs, one larger (50k gate) design, and one reusable bus BFM in SystemVerilog using VMM. Dave’s team consisted primarily of engineers new to SystemVerilog, but who had some experience in another HVL. The goal of the effort was to enable TI to build expertise in SV. VMM was chosen because TI felt it to be the most mature SystemVerilog methodology. In the end, the team felt the ramp-up time was not significant. They felt the methodology was suitable for beginners and experts alike, and that VMM helped standardize the components used by the team so they could be reused elsewhere within the organization.

An engineer from Commex in Israel (whose name I didn’t catch) shared his project experiences using SV and the VMM, and was followed by Kelly Larson from Analog Devices in Austin. Kelly described his team’s experiences converting from a methodology primarily focused on directed tests (his team had over 20,000 of them) to a constrained random flow. Prior to the switch, 80% of the time was spent on directed test development and 20% on random development. However, 55% of bugs were found with random testing, 33% with directed testing, and 10% with other techniques. That made it easy for Kelly to convince the rest of the team that the testing focus should shift to a solution such as SystemVerilog.
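For readers who haven’t seen the constrained random style Kelly’s team moved to, here is a minimal sketch in plain SystemVerilog. The `packet` class, its fields, and the constraint weights are all hypothetical, invented for illustration; in a VMM flow a transaction class like this would typically extend a VMM base class rather than stand alone. The point is simply that a directed test exercises one point in the stimulus space, while each call to `randomize()` draws a fresh legal point, biased toward interesting corners by the constraints.

```systemverilog
module tb;
  // Hypothetical transaction class illustrating constrained-random stimulus.
  class packet;
    rand bit [7:0]  addr;
    rand bit [15:0] len;
    rand bit        is_write;

    // Weight generation toward short packets, a corner that directed
    // tests often under-exercise. Ranges and weights are made up.
    constraint c_len { len dist { [1:64] := 3, [65:1500] := 1 }; }
  endclass

  initial begin
    packet p = new();
    repeat (1000) begin
      if (!p.randomize())
        $fatal(1, "randomization failed");
      // drive p onto the DUT interface here
    end
  end
endmodule
```

A directed test covering the same ground would need one hand-written case per scenario, which is why the bug-per-effort ratio Kelly quoted tilted so heavily toward the random flow.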

Initially, each engineer developed his or her own methodology. That made it difficult to integrate block-level testbenches into a cohesive full-chip environment. According to Kelly, his job “is not to write a testbench, it’s to verify a chip.” Given the choice between trying to agree on a common in-house methodology and adopting a standard one such as the VMM, the team chose to port all existing code to use the VMM base classes.

Kelly’s team is now using VMM for unit-level testbenches and integrating those benches into a system-level environment.

The panel concluded with a question and answer session. Questions such as the following were asked:

(Q) – Should design and verification teams be split?

(A) – Most panelists agreed that yes, the work should be split, but also commented that designers should have access to a few test templates they can easily modify to help out in the testing process, especially towards the end of the project when additional tests may be required to hit difficult-to-reach corner-case bugs. Kelly also mentioned that designers are very good at white-box testing due to their understanding of the internals of the design.

(Q) - Is it difficult to take notes and pay attention to the panel discussion at the same time?

Ok… I made that up. A few more questions were asked and answered, and the luncheon was closed out with drawings for several books and the grand prize of a Sony Playstation 3.
