
Archive for the ‘Interview’ Category

Thoughts on Verification: The Human Side of Best Practices (Part 3 of 3)

Thursday, May 9th, 2013 by Alex Melikian

In Part 3, Alex and Jason explain the team practice of a “Deep Dive” and how valuable it can be in completing a project on time and on budget. They also discuss the value of post-mortem and peri-mortem meetings, and how development teams can fully benefit from them. Parts 1 and 2 can be viewed here and here respectively.


Alex Melikian: The next topic I want to cover is coordination between teams. We mentioned earlier the involvement of software teams and other departments that have a stake in the product and its features. One procedure involving coordination between teams that is brought up in the planning stages is the “Deep Dive”. Can you describe that for those who are not familiar with this approach?


Jason Sprott: Well, there are various techniques you can use to gel teams together and get everyone into the mindset of more joined-up thinking. One good technique is to have the product, marketing, software, and the various design and verification teams work together to think about the end product. A “deep dive” is when you throw these guys together in a room for a while and let them tear through requirements and functionality. You give them some time to mull these points over beforehand and hope they actually do it and don’t turn up to the meeting cold. The idea is to get as much input from the wider team as possible, and to have access to the people who know both the deep implementation and the customer-level requirements.

You start with what someone thinks are the most important features of the design and then work through them, with the major stakeholders for all the different components in the system present.

You have them in the room, so there are even guys in there who could say, “Hey, wait a minute. That feature isn’t important at all. Let’s take a note to speak to the client about that, because the guy we’re going to sell this to might not care about it.” Another thing someone might say is, “Now, that could go into the next chip.” So at one end, there are people in the room who can look at priorities, who can look at features, who can look at the ways things are used. And at the other end of the spectrum, there are guys on the team who can assess the impact of a given feature.

So in this setup, someone might say, “Oh, yeah, we’ve got to have this feature in the design.” At the same time, someone from the verification team can respond with, “Fair enough, that will take you only ten minutes to implement in a piece of RTL, but it’s going to take two weeks to verify. Is that what you want? Is it worth two weeks?” That’s the sort of interaction that takes place, and that you would like to happen.

The advantage of having the stakeholders for the different components of the product in one place is that you can often get to the bottom of decisions made for specific requirements that may require a lot of effort. And of course, these decisions are being made before the project is carried out, so all teams go in with their eyes wide open. It’s also a good opportunity to discuss the interaction and expectations between components. It all sounds so trivial, but many costly project “surprises” happen as a result of not taking these things into account. When these “surprises” happen, the best case is that additional effort has to be spent, and the worst case is that they manifest as an issue in the final product.


AM: It’s strange: I think a lot of our readers, when told it’s good to spend half a day, or even a full day, planning for a chip, would probably find that excessive. However, if we compare that amount of time to the overall time of a project, you realize the time spent on that full day is well worth it. It can really pay off over the course of the project, because not only is there more coordination and understanding across all the stakeholders, but you can also avoid these very costly “surprises” in the future.


JS: That’s right. You don’t want to spend too much time though, because things change. You could approach this meeting by asking the participants to come up with the top five things they care about in this design. That would be a good start, and then if you have time left, you’ll cover more. It must be emphasized to the participants that they should come in prepared, with some of the top things they care about and why they care about them. That will definitely drive some discussion.

Also, the other thing you really need when you’re going into a deep dive is historical data. This can really help you make more sensible and informed decisions. If you don’t have anything, meaning it’s all just made up on the day and relies solely on the knowledge and skill of the people in the room, then decisions can be affected by anything from vague recollections, emotions, heated debate, and bullying, to even hunger, as in “how close is it to lunch?”


AM: That’s funny, but true. I’m glad you mentioned ‘historical data’, as one of the ways that data can be generated or archived is through postmortem meetings. So what are some of the things you think make a postmortem valuable to a team? And what specific knowledge should a team be retaining or archiving when they’re doing a postmortem?


JS: I like postmortems, but what I actually prefer is to do both peri-mortem and postmortem reviews. Some things you can only know at the end of the project, but it’s also possible to hold regular retrospectives along the way. These can be used to capture more detail and, unlike a postmortem, to affect decisions within the scope of the current project.

You can record many things, but it’s a mistake to focus only on the negative. It’s interesting to record how and why things went well, not just what went wrong.

It’s always good to get to the root cause of things. For example, if stuff goes wrong because the code wasn’t in good shape, the root cause might not be the code itself. The root cause might be that we didn’t do very good peer reviews, or we did the peer reviews too late. During in-project retrospectives, it’s sometimes possible to see patterns of things going wrong, also known as anti-patterns, and fix them within the scope of the project.

In terms of the statistics that you want to archive, you should look at things like bug rates, the types of bugs you had, where the bugs were found, how long it took to turn around a bug fix, simulation times, and development times. You also want to record the project conditions that might affect those metrics, such as the amount of change to the design, time pressure, or the skill level of the development team. For example, you might expect very different results from an experienced team than from a team that has never worked together, or is inexperienced. This is important data that can be used to weight the metrics recorded.

Since everyone always wants to know, “How long do you spend planning?” it’s always useful to record that accurately in project records.

Postmortems should also always look at your reuse. What did you actually reuse from a previous project? And by that, I mean what did you use without modifying the code itself? Did you have to hack about with it?

And also, what did you output from a reuse point of view? What did you give back that other projects may be able to use? That’s very valuable information.

What I would say is that, at the end of the day, the one thing you should really care about is having a clear picture of what you keep for the next project and what you dump.


AM: I think you summarized it well. Postmortems, or peri-mortems, are very good ways of applying the old virtue of learning from your mistakes and experiences in our business.

I’d love to hear more, but sadly that’s all the time we have for this conversation. Thanks a lot, Jason, for your time and your input. I hope our readers have gained a better appreciation of the human side of best practices in verification. I look forward to next time.


JS: Okay, thank you, Alex.

Thoughts on Verification: The Human Side of Best Practices (Part 2 of 3)

Thursday, May 2nd, 2013 by Alex Melikian

In Part 2, Alex and Jason cover how new challenges, such as power aware features and analog modeling, can affect verification planning. In addition they discuss the approach of risk assessment and how it fits into the planning process. Part 1 can be viewed here.


Alex Melikian: Let’s move on to other facets coming into play in today’s silicon products. We’re seeing things like power management, mixed-signal, or analog mixed-signal design involved in making a product. How do you see these new areas affecting how a verification plan is put together?


Jason Sprott: Well, I guess now low power is a fairly well understood concept for verification, and we have power aware simulation and the like. However, what people sometimes fail to understand is that low power features in a design are a massive crosscutting concern. They can slice across all aspects of functionality. So instead of having a piece of functionality that’s exercised only in one mode, it could need to be exercised in many modes for different power scenarios. And that can explode into a massive state space. So I think this is another area of ruthless prioritization, where you can’t possibly verify everything in all different modes. Usually, that’s just too onerous.

So you have to look at what really matters for the low power modes you’ll be using. I think what you have to ask yourself is: “Well, in real life, how would the software control the power modes?” You often have to work hand-in-glove with the software department to keep a tight rein on the verification requirements that end up in the product.

And it has to be well understood that if someone changes the software to do something else, it could have an impact by pushing into areas that haven’t been verified. I think this is an area that really, really needs to be tightly tied down.
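
To make the crosscutting nature of low power concrete, the covergroup below is a minimal SystemVerilog sketch, with invented power-state and mode names, that crosses power state with functional mode. The ignore_bins select expression is one way of ruthlessly pruning the cross down to the combinations the software will actually drive.

    // Illustrative only: type, state, and mode names are hypothetical.
    typedef enum {PWR_OFF, PWR_RETENTION, PWR_LOW, PWR_FULL} pwr_state_e;
    typedef enum {MODE_IDLE, MODE_CAPTURE, MODE_PROCESS, MODE_STREAM} op_mode_e;

    class power_func_cov;
      pwr_state_e pwr_state;
      op_mode_e   op_mode;

      covergroup power_cross_cg;
        cp_pwr  : coverpoint pwr_state;
        cp_mode : coverpoint op_mode;
        // 4 x 4 = 16 combinations before pruning
        cp_cross: cross cp_pwr, cp_mode {
          // The software only processes or streams at full power, so don't
          // spend verification effort (or coverage goals) on the rest.
          ignore_bins unreachable =
            binsof(cp_pwr)  intersect {PWR_OFF, PWR_RETENTION, PWR_LOW} &&
            binsof(cp_mode) intersect {MODE_PROCESS, MODE_STREAM};
        }
      endgroup

      function new();
        power_cross_cg = new();
      endfunction

      function void sample_state(pwr_state_e p, op_mode_e m);
        pwr_state = p;
        op_mode   = m;
        power_cross_cg.sample();
      endfunction
    endclass

The pruning is the prioritization decision made explicit: combinations the software can never produce are removed from both the verification effort and the coverage target.
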

A completely different area of verification and much less understood is analog verification. That can be the functional border between the analog and digital domains, or true analog functional verification.

We have to consider what level of accuracy and performance we build the analog models to. This is an area that will have an increasing effect on verification as we go forward. We haven’t really tied down the languages everybody should use for modeling and for doing the verification. And as much as we understand functional coverage in the digital domain, what does it mean for analog verification?

You have to really tie down the requirements of the crossing between the domains. Sometimes analog isn’t that complicated in terms of the number of features compared to digital designs, but you have lots of discrete signals with independent timing. This can add up to a lot of combinations to verify. Not all combinations are valid, or even possible. Understanding what, if anything, can be cut back in this domain is essential to making prioritization decisions.

I think one of the biggest things to come out of analog functional verification is a more considered approach to modeling. The accuracy, performance, and validation of the models against the circuits are going to play a bigger part in verification in general. All of these things are being demanded of the analog verification team now, whereas in the past they weren’t given much consideration. There may not even have been an analog verification team. Chip-level simulations often ended up with models of the wrong accuracy (typically too accurate for the context), or models that were buggy and not validated.

Until recently, chip teams haven’t been asking for proof that analog models have been verified against the circuit, or specifying requirements for the models in terms of performance and accuracy. So yeah, this is a big area and it’s going to have quite an effect, I think, going forward.
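
As an illustration of the accuracy-versus-performance trade-off described above, here is a purely hypothetical real-number model sketch in SystemVerilog for a regulator output; the block name, parameters, and numbers are all invented. The update step and ramp rate are explicit modeling decisions: fine enough to check power-up sequencing at chip level, nowhere near circuit-level accuracy.

    `timescale 1ns/1ps

    // Behavioral real-number model of a regulator output (hypothetical).
    module vreg_rnm #(
      parameter real VOUT_NOM      = 1.2,    // nominal output voltage (V)
      parameter real RAMP_V_PER_NS = 0.0006, // ramp rate when enabled (V/ns)
      parameter real STEP_NS       = 10.0    // update period: accuracy vs. speed
    ) (
      input  logic enable,
      output real  vout
    );
      initial vout = 0.0;

      always begin
        #(STEP_NS); // coarser step = faster simulation, lower waveform fidelity
        if (enable)
          vout = (vout + RAMP_V_PER_NS*STEP_NS > VOUT_NOM) ? VOUT_NOM
                 : vout + RAMP_V_PER_NS*STEP_NS;
        else
          vout = (vout - RAMP_V_PER_NS*STEP_NS < 0.0) ? 0.0
                 : vout - RAMP_V_PER_NS*STEP_NS;
      end
    endmodule

Validating even a simple model like this against the circuit, and agreeing its accuracy requirements up front, is exactly the kind of requirement chip teams have historically not asked for.
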


AM: Interesting to see how ‘prioritization’ is applied in the context of analog modeling, that is to say, how close to the real thing you really want a model to be, keeping in mind the implications for the schedule.

We’ve been talking about the many elements involved in the process of planning throughout this discussion. Let me bring up a question that would cover this theme from a different perspective. Would you agree that one approach that engineers have to take when they’re doing their planning is applying a risk assessment for each feature as opposed to thinking along the lines of step-by-step procedures?


JS: Risk assessment is certainly one aspect of planning. I think what you really have to aim for with risk assessment is better predictability of results. Risk and priority are two separate things. You can have something that’s very high risk but very low priority, and vice versa. So I think risk assessments are part of the planning, but how far you want to go depends on your product requirements. Some designs can tolerate more risk than others. For example, high-reliability designs for automotive or medical applications typically require a much more detailed analysis. This is relevant to planning, as you have to ensure you don’t eliminate necessary work from the schedule. Sometimes this work is only highlighted by risk assessment.

I think you’ve got to decide how much risk assessment you do, and prioritize the risks, as well. But you do need to know about all the features in order to do these things. So you can’t just look at them in isolation and say, “Ah, yeah, we’re going to consider all the risks up front without knowing what the features are” because the risks are related to the features. So you need to do both.

Thoughts on Verification: The Human Side of Best Practices (Part 1 of 3)

Thursday, April 25th, 2013 by Alex Melikian

In this edition of Thoughts on Verification, Verilab consultant Alex Melikian talks with Verilab CTO Jason Sprott about the human aspects related to planning or executing a verification project. Jason was invited to DVCon 2013 to participate in one of the panels with other industry leaders and representatives covering the subject of “Best Practices”.

In Part 1, Alex and Jason discuss the concept of “ruthless prioritization” and the differences in practices between FPGA and ASIC development.

Alex Melikian: Hi, Jason. Thanks for joining me on this conversation. Today, we’re going to be talking about the ‘human’ side of best practices in verification. I emphasize ‘human’ in the title because this conversation will not focus so much on tools, technical issues, or coding details. Rather, we’ll take a closer look at the way we carry out the day-to-day human activities related to verification. I’m talking about things like planning, team coordination, and cooperation.

So Jason, you just got back from DVCon 2013, where you were one of the panelists in the discussion of “Best Practices in Verification Planning”. Let’s get this conversation started with one of the points you greatly emphasized at the panel, which you called “ruthless prioritization”. Talk about this a little bit more. How should managers or engineers execute this approach of “ruthless prioritization”?


Jason Sprott: Well, I’m glad we’re talking about the human factors because I think they play a major part in team productivity. This “ruthless prioritization”, as I call it, is something that’s difficult to automate, maybe even impossible.

There’s a lot of spin around things like prioritization, project planning, etc. Ruthless prioritization is when you make sure that as much work as possible goes towards the features and parts of the design that really matter: the ones that the end users are going to use. These are the things that you’re taping out and that people are going to notice if they’re broken. Whereas what we tend to do in designs, not just verification environments, is design in a lot of things that may never be required.

There are many reasons we focus on unimportant features, but at the end of the day they can be a major distraction and don’t necessarily matter to the end result. And the problem is we’ve got to verify all those things, or at least spend time considering them. So for me, part of the planning process, and the human aspect of it, is to ruthlessly prioritize. Not just once at the beginning, but continually through the project. The aim is to ensure all the things we’re working on go towards the highest-priority product goals at the end of the day: the things that really matter. That’s not an easy task, but it’s worth it.

When thinking about the priorities, you have to really consider them at all stages of the development. It’s not something you just do at the very beginning. You’re continually testing: “Is this something I should be working on now?”, or “does this affect something that will definitely make it to the final product?”

If the answer is “no”, you’ve got to ruthlessly throw it out. Otherwise you may be working on stuff that doesn’t matter and you’re just burning development cycles.


AM: I can relate to that: coming from more of an FPGA background myself, I see some of the parallels in the FPGA development process. Not so much, as you said, in ruthless prioritization, but rather in setting goals on how much verification effort you want to expend on each feature. For FPGAs, the name of the game is ‘time to market’. So you can allow yourself to make mistakes that are caught later on the bench in the lab. You don’t have to do 100 percent coverage and test everything down to a ‘T’. Similarly, you have to set priorities.

Of course, there are some parts that are critical, where testing for 100% coverage would be beneficial. However, there are other parts where you can take the risk and aim for 80% coverage in simulation, as long as you have a good bench available in the lab carrying out the exhaustive testing. It’s counterintuitive for us verification engineers to allow a design to go into the lab with the possibility of bugs. However, by carefully allocating some of the validation effort to the lab, I think more often than not you will achieve complete coverage without running into a bug that requires additional time to debug in the lab. Therefore the overall time would be less than if you spent the effort to verify everything in simulation with 100% coverage. You know what they say: 80% of the effort will be spent chasing the last 20% of coverage. So some time can be saved there when dealing with an FPGA.

This decision making process of how much verification should be done in simulation also involves continual planning. This means that in mid-project it can be decided that testing a certain feature can get pushed to the lab, or conversely, it becomes necessary to simulate it with 100% coverage.

So this management approach has a lot of parallels to the “ruthless prioritization” process. Do you have any thoughts about that?


JS: Really, it depends on how rigorously you want to verify your design and on the risk you want to take. Just because something can be eyeballed on the bench doesn’t make that the right way to verify it. I think it’s very, very important in FPGAs to understand what you’re deferring to bench testing rather than simulation.

The things that you want to push onto the bench are things that can’t be done easily in simulation, or things that make more sense there for performance or practical reasons. On the bench, controlling the design to put it into states that exercise all user scenarios can be very difficult. It can often be difficult to exactly duplicate those conditions when regression testing.

It’s just as important to plan and prioritize which features will be verified in real hardware as it is in simulation. On the bench, it’s just as easy to waste time on features that are not important to the final product. In fact, if you factor in repeatability issues, hardware setup, and phony debugging, you can end up wasting a lot of time on unimportant features.

There are also things we can do in FPGA designs that aren’t typically possible in an ASIC, such as building verification modes that can operate both in simulation and on the bench, e.g. short frames. Such a mode is only ever used for verification, and creates something short enough or concise enough to simulate, but it can also be repeated on the bench. Although typically not a real mode of operation, it exercises areas of functionality very completely that we would otherwise not be able to cover if we deferred the whole thing to the bench.
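
A minimal sketch of what such a verification-only mode might look like, using an invented frame generator (the names and lengths are hypothetical, not from the interview): the same mode bit can be driven from the testbench in simulation or from a tie-off or debug register on the bench, so the identical shortened scenario is repeatable in both places.

    // Hypothetical frame generator with a verification-only short-frame mode.
    module frame_gen #(
      parameter int NORMAL_FRAME_LEN = 65536, // too long to simulate exhaustively
      parameter int SHORT_FRAME_LEN  = 64     // verification-only frame length
    ) (
      input  logic clk,
      input  logic rst_n,
      input  logic short_frame_mode, // register bit or tie-off in real use
      output logic sof,              // start of frame
      output logic eof               // end of frame
    );
      int unsigned count;
      int unsigned frame_len;

      always_comb frame_len = short_frame_mode ? SHORT_FRAME_LEN
                                               : NORMAL_FRAME_LEN;

      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          count <= 0;
          sof   <= 1'b0;
          eof   <= 1'b0;
        end else begin
          sof   <= (count == 0);
          eof   <= (count == frame_len - 1);
          count <= (count == frame_len - 1) ? 0 : count + 1;
        end
      end
    endmodule
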

So, don’t just drop stuff onto the bench assuming that the simulations will be too long. Take a more pragmatic approach and analyze what’s actually required, understand the risks you’re taking and find ways to mitigate them.


AM: Definitely agree. Your points emphasize how a lot of coordination is needed between the verification plan and the laboratory validation plan, meaning the planning of what gets tested in the lab. This is absolutely key if the strategy of splitting the burden between simulation and lab testing is to be successful in saving verification cycles.

Thoughts on Verification: Verification Languages of Today and Tomorrow (Part 3 of 3)

Thursday, February 28th, 2013 by Alex Melikian

In Part 3, Jonathan and Alex discuss some of the alternative verification platforms available outside those offered by the major vendors, and the qualities that make a verification language effective at its purpose. Parts 1 and 2 can be found here and here respectively.

Alex Melikian: Changing gears a bit, and I know I risk dating you here, but you’ve been around for a while and have seen a lot of languages come and go. And of course some of them have stuck around. The verification languages that we mentioned at the start of this conversation were not the only ones that have appeared. There have been some attempts by third-party groups, some of whom have constructed and publicly released their own languages for verification.

For those that didn’t catch on, what do you think are the reasons they failed to capture the interest of the verification community? Or, asking this from another angle, what elements of a verification language are absolutely necessary for it to be considered viable and worthy?

Jonathan Bromley: Well, beware of my personal bias here, obviously, because for one thing I’ve been heavily invested in SystemVerilog standardization for some time now. And for another thing, I’m personally a little conservative in my nature, so I would say that the two things that I would be looking for in any verification tool are completeness and standardization. Completeness is required because I don’t want to have to reinvent wheels for myself. I don’t mind writing code; that’s okay. But I do mind doing stuff that’s going to be superseded by somebody else’s efforts six months down the line. And standardization because I want my skills to be portable and I want my code to be portable as far as possible; I want to be confident that a range of different tool vendors are going to be supporting whatever code it is that I write.

(more…)

Thoughts on Verification: Verification Languages of Today and Tomorrow (Part 2 of 3)

Wednesday, February 13th, 2013 by Alex Melikian

In Part 2, Alex Melikian and Jonathan Bromley discuss the upcoming additions to the SystemVerilog LRM, as well as their approaches to handling new elements or constructs of a language. Part 1 can be viewed here.

Alex Melikian: You’ve been following the developments of SystemVerilog 2012 very closely. Can you tell us about some of the new language additions that we should be looking out for in this upcoming version of SystemVerilog?

Jonathan Bromley: Yes. I’ve been involved in that more than any normal, reasonable person should expect to be. I’ve been serving as a member of the IEEE committee that works on the testbench features of SystemVerilog for the past 7 years.  I think there’s some very exciting stuff coming up in SystemVerilog 2012. It was deliberately set up as a relatively fast track project. Normally, the revision cycle for IEEE standards is five years, but SystemVerilog 2012 comes only two and a half years after the 2009 standard. So it’s really fast tracked. And it was very carefully focused on a small number of new features. So there’s not a huge list of big ticket items. But there are a couple of things in the verification world that I think are really important.

The first one is a big extension to the flexibility of the coverage definition system. You can now define your cover points and your cross cover points in a much more sophisticated, much more algorithmic way than was possible before. There’s a big bunch of stuff that came out there, which looks really exciting. And I get the impression that the vendors are going to rally behind these new items very quickly.
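
One example of that added flexibility in IEEE 1800-2012 is the ability to define bins algorithmically with a ‘with’ clause, instead of enumerating values by hand. The covergroup below is an invented illustration, not taken from the standard or from the interview.

    class pkt_cov;
      bit [7:0] pkt_len;

      covergroup pkt_len_cg;
        coverpoint pkt_len {
          // One bin per legal length that is a multiple of four...
          bins aligned[] = {[8:128]} with (item % 4 == 0);
          // ...and a single bin for everything else in the legal range.
          bins unaligned = {[8:128]} with (item % 4 != 0);
        }
      endgroup

      function new();
        pkt_len_cg = new();
      endfunction
    endclass
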
(more…)

Thoughts on Verification: Verification Languages of Today and Tomorrow (Part 1 of 3)

Tuesday, February 5th, 2013 by Alex Melikian

In this edition, Alex Melikian discusses with Verilab consultant Jonathan Bromley about the various verification languages that exist today, and where they may be headed for tomorrow. Jonathan is a veteran consultant and author of numerous conference papers, including the SNUG Austin 2012 Best Paper “Taming Testbench Timing”. He has closely monitored the development of design and verification languages, and since 2005 has served on the IEEE committee that works on development of testbench features in SystemVerilog.

In Part 1, Alex and Jonathan review the different verification languages available today, their histories and differences.

Alex Melikian: Hello, Jonathan, thanks for joining me on this edition of “Thoughts on Verification”. So the theme of this conversation is going to be about verification languages, the ones that exist today and what they’re going to be like tomorrow. So to get started, for the readers out there who are not too familiar with verification languages, maybe you can run through a few of them and describe what exists and what is available.

Jonathan Bromley: Well, whatever I say, I’m sure it will be incomplete. But if you go back maybe 15 years, people who were doing verification of any digital designs were likely using the same languages that they were using to do the design itself. And I guess there’s a good historical reason for that because those languages typically were actually designed for verification. They were designed to model electronic systems rather than to create them. And it was only at the beginning of the 1990s that logic synthesis became popular as a way of taking a subset of those languages and turning it into physical hardware. So it makes good historical sense that those traditional languages, typically VHDL and Verilog, would have been used for doing verification.

But it wasn’t too long before people began to realize those languages were running out of steam and weren’t flexible enough. They weren’t dynamic enough. They weren’t good enough at coping with the kind of software-like constructs like strings, for example, that you expect to be able to use. So people moved on, and we now see people doing verification with languages that may or may not look quite a lot like those earlier ones.
(more…)

Thoughts on Verification: Agile From a Verification Perspective (Part 3 of 3)

Wednesday, December 12th, 2012 by Alex Melikian

In Part 3, Alex and Bryan discuss some of the growing pains of adopting or solidifying Agile methods in your verification process. Bryan also discusses his website, which brings to light Agile-related issues in SoC development. Parts 1 and 2 can be viewed here and here respectively.

Special Note: If you missed Bryan’s presentation “Yes We Kanban” at the MTV Conference, never fear! Bryan will soon release a whitepaper on Kanban and its merits in the verification world. Look for it to be published soon on the Verilab website.

Alex Melikian: I’m beginning to get the picture that Agile is not a “one size fits all” solution, but a collection of methods that can be cherry-picked to match the culture and working environment of the team, as in the example mentioned previously, where cross auditing is used instead of pair programming, or vice versa.

Sadly, I feel there are companies out there that don’t engage in any of these Agile techniques. There’s always a reason, like lack of time and/or resources. In other cases, however, I believe it’s simply due to omission, and that’s something that has to change. So how can we at Verilab help some of our clients adopt and benefit from Agile techniques?

Bryan Morris: If the client is not aware of Agile, I think we can provide some education on the various frameworks and techniques, and then help them decide which best fits their unique culture. We can also provide some guidance on how to introduce the Agile techniques. Almost nothing will guarantee an unsuccessful Agile adoption faster than trying to swap out what they are doing and introduce a completely new “agile” way of doing things. I truly believe that it’s best to use an incremental approach. Pick and choose a few techniques that best fit your team’s culture and experience, and the closer to what you’re already doing, the better. Make that successful, and then pick the next “low-hanging fruit” to tackle. I think we can help to educate and encourage our clients.
(more…)

Thoughts on Verification: Agile From a Verification Perspective (Part 2 of 3)

Thursday, December 6th, 2012 by Alex Melikian

In Part 2, Alex and Bryan discuss some Agile techniques and tools, and which ones can fit into your verification project management flow. Part 1 can be found here.

Special Note: For those attending the Microprocessor Test and Verification (MTV 2012) conference in Austin, TX, we cordially invite you to attend Bryan’s presentation “Yes we Kanban!” on December 10th. Bryan will present the concepts of Kanban, an Agile methodology, and how it can work for your verification project management needs.

Alex Melikian: I think in our business, people are used to things moving really quickly and having to cope with it. Do you think those involved in project management or even verification are already doing something that is similar to an Agile technique, but they just don’t know it?

Bryan Morris: Yes, definitely. I think people are using a lot of techniques and sub-sets of Agile in isolation, alongside other techniques. I think very few teams are using a pure waterfall scheduling approach. Most teams already break the project down into little chunks (or mini-milestones). Some teams do code reviews that allow you to review work in progress. People do mid-cycle or mid-project reviews to understand where they can improve their process. They say ‘stop’, review what they’ve done, and figure out what they need to do to move forward. So yeah, I agree, I think there are a lot of people who are using these in pieces. I think what the Agile framework allows you to do is pull everything into one package that creates a common understanding of how it’s going to work.

AM: So, kind of a glue that ensures everyone’s work fits together.

BM: Yes, exactly.

AM: We were talking about a ‘customer’ before. Who is the customer in an Agile context? You mentioned the marketing department, but who else can be that customer?

(more…)

Thoughts on Verification: Agile in the Verification World (Part 1 of 3)

Wednesday, November 28th, 2012 by Alex Melikian

In this edition of ‘Thoughts on Verification’, Verilab consultant Alex Melikian discusses Agile techniques and methodologies with Verilab senior consultant Bryan Morris. Before turning towards the verification world, Bryan came from a long history of software engineering and related project management. His experience and pedigree offer in-depth knowledge of how Agile can help development teams improve productivity and responsiveness when facing the increasing demands of a modern day ASIC/FPGA project.

In Part 1, Bryan explains the concepts and origins of Agile, as well as describing examples of how it can be applied in hardware development projects.

Special Note: For those attending the Microprocessor Test and Verification (MTV 2012) conference in Austin, TX, don’t miss Bryan’s presentation “Yes we Kanban!” on December 10th. Bryan will present the concepts of Kanban, an Agile methodology, and how it can work for your verification project management needs.

Alex Melikian: Hi Bryan! Thank you for joining me on this edition of ‘Thoughts on Verification’. Today we’re going to be talking about a topic that is a bit of a mystery for me and, I suspect, a few of our readers: Agile methodologies and techniques. Before we get into it, I would like to ask you to give a little introduction about yourself for our readers.

Bryan Morris: Great! I’m Bryan Morris, a Senior Consultant at Verilab. I’ve been in the industry for about 27 years; the first 15 years were principally in embedded software design, doing software for routers and wireless base-stations. Then I gradually moved up the food chain into a systems analysis role, where I managed a group that did performance analysis of algorithms that were going to be implemented on ASICs. That led me into the ASIC design and verification space. Over the last 12 years I’ve specialized in ASIC/FPGA verification.

AM: How were you introduced to Agile? Or, in a nutshell how would you explain what Agile is?

BM: My introduction to Agile is interesting. Old things become new again, you know: the idea of doing incremental development goes back to when I started in software. There were quite a few “agile” ideas being used then, like ‘evolutionary prototyping’ and ‘incremental development’, that form part of what Agile is today.
(more…)

Thoughts on Verification: A ‘Fresh’ Look at UVM (part 2 of 2)

Tuesday, October 9th, 2012 by Alex Melikian

In Part 2, Verilab consultants Alex Melikian and Vanessa Cooper discuss some of the challenges of learning and adopting the UVM into a new verification environment, or an existing one. They also provide tips and available resources to help one accelerate their ramp-up and adoption process. Part 1 can be viewed here.

Alex Melikian: Let’s talk about the learning curve involved with adopting UVM. These things always imply an initial investment in terms of time. What do you say is the quickest payoff or quickest ROI that someone can gain from using UVM?

Vanessa Cooper: Well, I guess I’ll go back to the reuse issue. You’ve created some code – say, an AHB VIP. Hopefully, the next person who needs an AHB driver doesn’t have to reinvent the wheel because you’ve already created it. If they can pick it up off the shelf and run with it, that’s the quickest payback.

And once you get over the hump of learning the UVM I think productivity increases because everybody’s marching along the same path. You know where the files are. You know what type of files you need to create.

And it’s just a lot easier when someone new comes in, and they know UVM. They don’t have that huge learning curve of “okay now, where would I find my stimulus?” They know exactly where that is. That is another quick payback. But like you said, there is an initial learning curve. I’ll go back to what you said about the registers. I think on my first UVM project the register model was the biggest thorn in my side because it was a tad bit more challenging to learn than just the basic concepts of getting stimulus up and going.

And it took a while of really stepping through the library, understanding what was going on, how the register model could be used as a scoreboard, and how to do checking, before I got it up and working correctly. Now, once that’s done, doing it again is simpler.
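
For readers facing the same learning curve, the sketch below shows one common arrangement for using the UVM register model as a scoreboard: a uvm_reg_predictor updates the model’s mirror from bus traffic observed by the monitor, and mirror() with UVM_CHECK then compares the DUT against the mirrored values. The environment, agent, and adapter class names are hypothetical, not from the interview.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_env extends uvm_env;
      `uvm_component_utils(my_env)

      my_reg_block                      regmodel;    // generated register model
      ahb_agent                         m_ahb_agent; // hypothetical AHB VIP
      reg2ahb_adapter                   m_adapter;   // uvm_reg_adapter subclass
      uvm_reg_predictor #(ahb_seq_item) m_predictor;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        regmodel = my_reg_block::type_id::create("regmodel");
        regmodel.build();
        regmodel.lock_model();
        m_ahb_agent = ahb_agent::type_id::create("m_ahb_agent", this);
        m_adapter   = reg2ahb_adapter::type_id::create("m_adapter");
        m_predictor = uvm_reg_predictor#(ahb_seq_item)::type_id::create("m_predictor", this);
      endfunction

      function void connect_phase(uvm_phase phase);
        super.connect_phase(phase);
        // Register sequences translate to bus items through the adapter
        regmodel.default_map.set_sequencer(m_ahb_agent.sequencer, m_adapter);
        // Every bus item the monitor observes updates the register mirror
        m_predictor.map     = regmodel.default_map;
        m_predictor.adapter = m_adapter;
        m_ahb_agent.monitor.ap.connect(m_predictor.bus_in);
      endfunction
    endclass

    // Later, e.g. at the end of a test, the mirror does the scoreboarding:
    //   uvm_status_e status;
    //   regmodel.status_reg.mirror(status, UVM_CHECK);
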
(more…)
