The Case Against UVM

Author: Irfan Waheed, Founder & CEO, SilverLining EDA

In DV land, UVM represents the status quo.

It has widespread adoption and an extensive user base. Features like reusability, scalability and modularity have helped UVM maintain a stranglehold over the DV universe.

But discontent is simmering just beneath the surface. There are five common grievances against the UVM juggernaut.

Steep learning curve

One of the most insightful articles about Design Verification I have read is titled On the Origin of Bugs, authored by Bryan Dickman.

He writes:

Designers are perfectly capable of performing the verification of their own designs, but it is accepted good practice, where resources are available, that the verification tasks are undertaken by dedicated verification engineers who have a slightly different skillset and with the added benefit of an independent interpretation of the design specification

Enter UVM.

The complexity of UVM has created a wide chasm between the skills of design and verification engineers. This has upended the conventional wisdom espoused by Bryan Dickman and created a cadre of DV engineers who have prioritized UVM skills over architecture and micro-architecture knowledge. As a result, they struggle to keep up with their design peers which increases the risk of bug escapes.

UVM owes its steep learning curve to a plethora of base classes and concepts like sequences, agents, drivers, monitors, and the factory pattern. For fresh engineers, learning UVM can be a jarring experience.
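To make the learning curve concrete, here is a minimal sketch of the boilerplate a newcomer must absorb just to create a component that does nothing useful; the class and message names are illustrative, not from any particular testbench:

```systemverilog
// Even a do-nothing component needs the factory registration macro,
// a boilerplate constructor signature, and knowledge of the phase API.
class my_component extends uvm_component;
  `uvm_component_utils(my_component)  // register with the UVM factory

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task run_phase(uvm_phase phase);
    phase.raise_objection(this);      // keep the simulation alive
    `uvm_info("MYCOMP", "hello from UVM", UVM_LOW)
    phase.drop_objection(this);       // allow the simulation to end
  endtask
endclass
```

None of these lines relate to the design being verified; all of them are prerequisites before a single check can be written.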

Question: What's my favorite UVM fun fact?

Answer: In UVM, there are 12 standard simulation phases.

Let that sink in. That's more than the number of pipeline stages in most CPUs.
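For reference, those twelve are the run-time phases, which sit on top of the nine common phases (build_phase, connect_phase, report_phase, and so on). They are the pre/post variants of reset, configure, main, and shutdown, and any component may hook any of them; a sketch with an illustrative class name:

```systemverilog
// The twelve UVM run-time phases, in execution order, each an
// overridable task in uvm_component:
//   pre_reset_phase,     reset_phase,     post_reset_phase,
//   pre_configure_phase, configure_phase, post_configure_phase,
//   pre_main_phase,      main_phase,      post_main_phase,
//   pre_shutdown_phase,  shutdown_phase,  post_shutdown_phase
class my_comp extends uvm_component;
  `uvm_component_utils(my_comp)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  virtual task main_phase(uvm_phase phase);
    // the "main" portion of the test lives here
  endtask
endclass
```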

The UVM tax

To construct the UVM Tax argument, let's take a look at an article titled How, What, UVM and Why by Neil Johnson. Neil suggests splitting DV engineers into how and what engineers:

A how engineer specializes in building out testbench infrastructure while a what engineer specializes in verifying product features. The what engineer leads because they know what requirements need to be tested; the how engineer is a supporting role serving the needs of those requirements. Importantly, the how engineer “knows” UVM and how everything fits together while the what engineer may not

While this may be a good idea for improving productivity without abandoning the UVM framework, it also quantifies the overhead UVM imposes on the industry. The what engineer is the verification engineer described by Bryan Dickman, while the how engineer is overhead that must be borne because of UVM.

We can call this the UVM tax.


Inconsistent adoption

When UVM's steep learning curve meets real world schedule pressure, the result is UVM spaghetti.

Let's look at a few concrete examples of how and why real world UVM adoption ends up running afoul of some of UVM's salient selling points like reusability, scalability and modularity.

12 simulation phases

Let's start with my favorite UVM feature: the 12 simulation phases.

In theory, each phase in UVM has a specific purpose, ensuring that different parts of a testbench execute in a structured order.

This implies that each line of code in your UVM testbench has a home where it belongs: a native phase.

In practice, engineer A might choose to put code in the check_phase, engineer B picks report_phase, and so on. Your testbench might still appear to work if you place code in a phase where it doesn't belong, but you are on a slippery slope thereafter.
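As a hedged illustration of the intended split (the component name is made up): end-of-test comparisons belong in check_phase, while report_phase should only summarize.

```systemverilog
class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)
  int unsigned mismatches;  // incremented during run_phase checking

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // check_phase: the designated home for end-of-test verification
  virtual function void check_phase(uvm_phase phase);
    if (mismatches > 0)
      `uvm_error("SB", $sformatf("%0d mismatches found", mismatches))
  endfunction

  // report_phase: summarize results only; no checking belongs here
  virtual function void report_phase(uvm_phase phase);
    `uvm_info("SB", "scoreboard finished", UVM_LOW)
  endfunction
endclass
```

Moving the `uvm_error into report_phase would often still "work", which is exactly why the discipline erodes under schedule pressure.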

The buffet of UVM classes

When figuring out how to write a driver for a DUT, you will have to deal with at least the following classes:

uvm_component, uvm_object, uvm_env, uvm_agent, uvm_sequence, uvm_sequence_item, uvm_sequencer, uvm_driver

Ditto from the previous section. Enough said.

Active vs passive behavior

Sharing UVCs between unit and higher level UVM testbenches is a good practice. In theory, it allows lower-level UVCs to be re-used at a higher-level of hierarchy.

Doing it correctly is not straightforward, though. In addition to following UVM recommendations on the topic, it is also necessary to craft unit environments whose scoreboards and checks can be re-used at a higher level.
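The UVM-recommended mechanism for this is the agent's is_active knob rather than conditional compilation: the monitor is always built, while the stimulus path is built only when the agent is active. A minimal sketch, with hypothetical class names:

```systemverilog
class my_agent extends uvm_agent;
  `uvm_component_utils(my_agent)
  my_driver    drv;
  my_sequencer sqr;
  my_monitor   mon;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Monitor (and its checks) exist in both active and passive modes
    mon = my_monitor::type_id::create("mon", this);
    if (is_active == UVM_ACTIVE) begin
      // Unit level: this agent drives stimulus
      drv = my_driver::type_id::create("drv", this);
      sqr = my_sequencer::type_id::create("sqr", this);
    end
    // Top level: set is_active to UVM_PASSIVE and the same UVC
    // observes traffic driven by the real upstream block
  endfunction
endclass
```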

This is one of the cases where ifdefs creep into the testbench. With ifdefs, unit UVCs are in theory being re-used at a higher level, but their presence is merely symbolic.

The ifdefs have outmaneuvered UVM.

config_db abuse

In UVM, analysis ports are intended to facilitate runtime communication of transaction-level data.

On the other hand, uvm_config_db is designed for static configuration data that is typically set up during the build phase of the simulation.

Unfortunately, nothing prevents testbench developers from using config_db as a quick-and-dirty alternative to analysis ports.
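A hedged sketch of the intended division of labor (class and field names are illustrative): static configuration travels through uvm_config_db at build time, while runtime transactions travel through analysis ports.

```systemverilog
// Build time: static configuration belongs in uvm_config_db
class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    uvm_config_db#(int)::set(this, "agent.drv", "num_lanes", 4);
  endfunction
endclass

// Run time: transaction traffic belongs on an analysis port
class my_monitor extends uvm_monitor;
  `uvm_component_utils(my_monitor)
  uvm_analysis_port #(my_txn) ap;  // my_txn is a hypothetical sequence item
  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction
  virtual task run_phase(uvm_phase phase);
    my_txn t;
    forever begin
      // ... sample the interface into t (elided) ...
      ap.write(t);  // broadcast the transaction to all subscribers
    end
  endtask
endclass
```

Pushing each observed transaction into config_db instead compiles and may even pass, but it defeats the subscriber model and makes the data flow invisible.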

Incompatibility with open-source simulators

We have already discussed one type of UVM tax, paid in the form of headcount bloat.

But there's another UVM tax hiding in plain sight: EDA bills. Open-source simulators offer limited or no support for the SystemVerilog class-based features UVM depends on, so UVM testbenches are effectively tied to commercial simulators and their license fees.

Slow simulation speeds

Countless engineering hours are invested to develop UVM environments. The maximum simulation speeds achieved, measured in cycles simulated per second (CPS), are generally in the sub-kHz range.

Given that the actual silicon being verified clocks at a frequency in excess of 3 GHz, UVM is giving us a peak frequency which is slower by a factor of 3 million!

But it gets worse.

As monitors and scoreboards are added to beef up checking, simulation speeds drop further.

DV managers swing into action demanding their troops eke out every Hz of simulation speed. Engineers being paid top dollar to find bugs quickly find themselves in unfamiliar waters.

They might succeed in improving CPS a bit, but the arithmetic is unforgiving: 1 second of 3 GHz silicon is 3 billion cycles, which at 1,000 CPS takes about 3 million seconds, or roughly 35 days. You could take a road trip through all the National Parks in that time.

Missing the forest for the trees?

Penny wise, pound foolish?

UVM Super Tax?

Pick your metaphor.

Closing Argument

Time is money and we are losing big by maintaining the status quo.

We need a new methodology, a new paradigm.

One that increases the RoI for Verification. One that reduces time to market.

More in the next article. Stay tuned.

Alexander Grobman

CPU design verification engineer at Cadence

1 month ago

Another point: before the UVM/Vera era, a single tool could be used to debug issues (VCD waves). Now the main debugging tool is `uvm_info. Design and verification engineers speak different languages even though both write SystemVerilog; each group uses an almost non-overlapping set of SV constructs.

Alexander Grobman

CPU design verification engineer at Cadence

1 month ago

I fully agree. A clock generator that can be written in one line of Verilog needs an agent, driver, sequence, sequence item, monitor, etc. in UVM.

Michael Kavcak

ASIC engineering consultant for architecture, design, and verification.

1 month ago

Specman/e always was, and still is, the superior verification language. SystemVerilog, and subsequently UVM, only won out due to having the word Verilog in the name.

Donald McCarthy

Infrastructure specialist. at Graphcore UK.

1 month ago

Slapping C++-style classes onto Verilog was always going to lead to bloat and computer science over electronics. I'm waiting for someone to give us a hardware-verification-oriented language: one that can be a dynamically interpreted language when debugging with a REPL, and statically compiled and fast when running a regression.
