Can RISC-V Replace UVM and Disrupt Verification?
SilverLining EDA
Accelerating Verification using Cloud FPGAs and Paradigm-Shifting debug tools
Author: Irfan Waheed, Founder & CEO, SilverLining EDA
In the previous article in this series, I presented my argument against UVM. As a follow-on, I will outline an alternative methodology that could transform verification and free the industry from the UVM juggernaut.
Over the last decade, RISC-V has been all the rage in the semiconductor industry. RISC-V has disrupted the industry in two distinct ways.
Firstly, by offering standardized and ratified architecture profiles, RISC-V allows software to rely on the existence of a certain set of ISA features in a particular generation of hardware implementations. This enables the development and evolution of a hardware-software ecosystem which can be considered a "socket compatible" alternative to the x86/ARM ecosystems.
The second disruption, and arguably the more profound one, has been the spawning of a bona fide and vibrant field known as hardware/software co-design. This has been made possible by the customizability of RISC-V and is particularly valuable for emerging fields like AI and cryptography, where specialized hardware offers better performance and efficiency. By enabling hardware/software co-design, RISC-V has given engineers an additional degree of freedom to develop optimal solutions. In engineering, a degree of freedom is worth its weight in gold.
Can we exploit this degree of freedom to re-imagine Verification?
But before trying to answer that question in a meaningful way, let's examine how verification is actually done in UVM-based DV flows.
A Design Under Test (DUT) is implemented in the synthesizable subset of an HDL (typically SystemVerilog).
The DUT is then wrapped by a testbench which consists of UVM VIPs (AXI, APB, CHI, etc.) along with drivers, monitors, and scoreboards which are also UVM compliant. Reference models written in C/C++ are also in the mix to enable co-simulation and checking against a golden model.
The unifying feature of all testbench code is that it is non-synthesizable: code that cannot be directly converted into hardware by synthesis tools.
A Verilog simulator converts the testbench and design code into an executable binary by compiling it into intermediate code and linking it with its simulation libraries. When the simulation is kicked off, the executable is run on a general-purpose CPU. This executable simulates hardware by evaluating signal values and processing events in a time-ordered fashion. The simulation advances the clock and evaluates the state of the design at every 'tick'.
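Conceptually, the heart of such a simulator is a time-ordered event queue. The following deliberately simplified C sketch illustrates the idea; the data structures and printout are invented for illustration and do not reflect any real simulator's internals.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative event record: a signal change scheduled at a timestamp. */
typedef struct event {
    unsigned long time;       /* simulation time of the event    */
    int           signal_id;  /* which signal changes            */
    int           value;      /* its new value                   */
    struct event *next;       /* next event in time order        */
} event_t;

/* Pop events in time order and update state; evaluating the fan-out
 * logic of each changed signal (not shown) would schedule new events. */
static void run(event_t *queue, unsigned long end_time) {
    while (queue && queue->time <= end_time) {
        event_t *e = queue;
        queue = e->next;
        printf("t=%lu: signal %d <= %d\n", e->time, e->signal_id, e->value);
        free(e);
    }
}

int main(void) {
    /* Two hand-scheduled events standing in for a compiled design. */
    event_t *e1 = malloc(sizeof *e1);
    event_t *e0 = malloc(sizeof *e0);
    *e1 = (event_t){ .time = 10, .signal_id = 1, .value = 0, .next = NULL };
    *e0 = (event_t){ .time = 5,  .signal_id = 1, .value = 1, .next = e1  };
    run(e0, 100);
    return 0;
}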
UVM testbench code is also simulated in parallel with the DUT. To avoid race conditions and simulation lockups, it is critical to abide by UVM guidelines and place code in the appropriate simulation phase. As discussed in the previous article, there are 12 simulation phases in UVM.
A Verilog+UVM simulation is fundamentally different from typical software, as it mimics hardware behavior by modeling parallelism (e.g., hardware in two different pipeline stages running concurrently) as well as sequential behavior (e.g., hardware in one pipeline stage whose logic cones are fed by one set of flops).
It also mimics the testbench behavior by simulating time-consuming tasks as well as wait() and randomize() statements. It is pertinent to mention that UVM support is available only in commercial Verilog simulators.
To summarize, although testbench code is different from typical software, it still runs on a general-purpose CPU (likely an x86-based Intel/AMD part) just like all other software.
What if we re-purposed this software so it could run on a RISC-V soft core? What if we changed the paradigm and shoe-horned all testbench code into a RISC-V based framework where the DUT is connected to and driven by a golden RISC-V soft core? In practice, this requires us to write the testbench code in a way that allows it to run on a RISC-V soft core: in other words, in C/C++ or assembly.
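To make this concrete, a test in this paradigm becomes a bare-metal C program. A minimal sketch follows, assuming (purely for illustration) that the DUT's inputs and outputs are exposed to the soft core as memory-mapped registers; the addresses and golden_model() are invented.

#include <stdint.h>

/* Hypothetical memory map: the DUT's input and output registers as
 * seen from the soft core. All addresses are illustrative. */
#define DUT_IN      ((volatile uint32_t *)0x40000000u)
#define DUT_OUT     ((volatile uint32_t *)0x40000004u)
#define HOST_RESULT ((volatile uint32_t *)0x40000008u) /* pass/fail back to host */

/* Trivial illustrative golden model, compiled for the same core. */
static uint32_t golden_model(uint32_t x) { return x + 1u; }

int main(void) {
    uint32_t stimulus = 0x12345678u;
    *DUT_IN = stimulus;                  /* drive the DUT            */
    uint32_t actual   = *DUT_OUT;        /* consume the DUT's result */
    uint32_t expected = golden_model(stimulus);
    *HOST_RESULT = (actual == expected); /* report pass/fail         */
    return 0;
}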
There are three fundamental requirements to consider in a DV framework, regardless of whether your language of choice is SV/UVM or C/C++. We need to determine how to address each of them in this new paradigm.
DUT Interfaces: how to drive them and consume their results?
When we talk about interfaces in chip design, there are two types: standard and custom. PCIe, AXI, APB, CHI, USB, JTAG, and other standardized interfaces fall into the 'standard' bucket. Anything else would be considered custom.
How can a RISC-V soft core support both types?
Owing to its customizability, RISC-V fits the bill perfectly for custom interfaces. Implementing custom instructions to drive custom interfaces seems like a great engineering solution for this problem. We can implement the custom instructions in the soft core and expose them to the DV engineer who uses them in the C/C++/Assembly testbench code.
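For illustration, a hypothetical custom instruction in the RISC-V custom-0 opcode space could be exposed to C through GCC inline assembly (assuming a RISC-V GCC/binutils toolchain). The encoding (opcode 0x0b with funct3/funct7 of zero) and the instruction's semantics are invented for this sketch; the real encodings would be chosen when the soft core is customized.

#include <stdint.h>

/* Hypothetical custom-0 instruction that pushes rs1 onto a custom
 * DUT interface and returns the interface's response in rd. */
static inline uint32_t dut_xact(uint32_t req)
{
    uint32_t resp;
    __asm__ volatile (".insn r 0x0b, 0x0, 0x00, %0, %1, x0"
                      : "=r"(resp) : "r"(req));
    return resp;
}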
What about standard interfaces?
This will require our RISC-V soft core to implement the standard protocol logic itself and expose it to software, for example by mapping an AXI master port into the core's address space so that ordinary loads and stores from C code become AXI transactions on the DUT's interface.
This isn't trivial, but it isn't rocket science either.
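Under the assumption above, driving a standard interface from C then reduces to ordinary volatile loads and stores. A sketch, with an invented base address:

#include <stdint.h>

/* Illustrative: a DUT control/status block reachable through an AXI
 * master port mapped at this base address. */
#define DUT_BASE 0x80000000u

static inline void axi_write32(uint32_t off, uint32_t val) {
    *(volatile uint32_t *)(DUT_BASE + off) = val;  /* becomes an AXI write */
}

static inline uint32_t axi_read32(uint32_t off) {
    return *(volatile uint32_t *)(DUT_BASE + off); /* becomes an AXI read  */
}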
Monitors for debugging: how to implement them?
This is relatively simple. We can implement printf()-based monitors in C that display the values of any DUT state we make addressable to the RISC-V core.
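A minimal sketch, assuming DUT status registers are mapped into the core's address space at an invented base address, and that the environment routes printf() output back to the host console:

#include <stdio.h>
#include <stdint.h>

/* Illustrative monitor: dump DUT status registers that have been made
 * addressable to the RISC-V core (the base address is invented). */
#define DUT_STATUS ((volatile uint32_t *)0x40001000u)

static void monitor_dut(int count) {
    for (int i = 0; i < count; i++)
        printf("dut.status[%d] = 0x%08x\n", i, (unsigned)DUT_STATUS[i]);
}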
Reference models and co-simulation: how do they fit in?
This is also a natural fit in the new paradigm. Reference models are typically written in C/C++ anyway. We just need to compile them for the target RISC-V core.
The reference model also runs on the same core that runs the stimulus and monitoring code. This provides a tightly coupled environment where all testbench code is in one language and, better yet, the Verilog does not need to be rebuilt every time the testbench code changes. We can simply recompile the C/C++ code and rerun the test.
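Putting stimulus, reference model, and checking together yields a scoreboard-style loop. A minimal sketch, assuming a trivial adder DUT and the same invented memory-map style used earlier (a real reference model would of course be far richer):

#include <stdint.h>
#include <stdio.h>

/* Invented memory map for an adder DUT; any of the access mechanisms
 * sketched earlier (custom instructions, CSRs, or memory-mapped
 * registers) would work equally well here. */
#define DUT_A   ((volatile uint32_t *)0x40000000u)
#define DUT_B   ((volatile uint32_t *)0x40000004u)
#define DUT_SUM ((volatile uint32_t *)0x40000008u)

/* Reference model: plain C, compiled for the same RISC-V target. */
static uint32_t ref_add(uint32_t a, uint32_t b) { return a + b; }

static int check_add(uint32_t a, uint32_t b) {
    *DUT_A = a;                      /* drive the DUT     */
    *DUT_B = b;
    uint32_t actual   = *DUT_SUM;    /* sample its result */
    uint32_t expected = ref_add(a, b);
    if (actual != expected) {
        printf("MISMATCH: a=0x%08x b=0x%08x dut=0x%08x ref=0x%08x\n",
               (unsigned)a, (unsigned)b, (unsigned)actual, (unsigned)expected);
        return 1;
    }
    return 0;
}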
VeriFire by SilverLining EDA is an EDA tool which implements the RISC-V based verification methodology described above. It has two parts:
VeriFire Engine
A VeriFire Engine (VE) is a RISC-V based, synthesizable IP that can be customized to interface with various types of designs under test (DUTs). A VE provides a means to stimulate, monitor, check, and debug any DUT to which it is connected. It can be connected to a DUT using standard interfaces, like AXI, as well as custom interfaces. All drivers, monitors, and scoreboards are coded in C or C++ targeted for the VE. Depending on the complexity of the DUT and the number of interfaces it has, the VeriFire environment will consist of one or more VEs.
A VE can be used for both active and passive mode verification. In active mode, it acts as a driver, monitor, and scoreboard for the DUT. In passive mode, it acts only as a monitor and scoreboard.
FireBolt
FireBolt is a set of software utilities that lets the verification engineer monitor the DUT while it is being simulated. FireBolt is also used to compile tests, load them onto the VE, and determine whether the test passed or failed, among other verification infrastructure tasks.
FireBolt and the Verilator simulation (which encapsulates the VE and DUT) are two processes running on the host machine. The figure below shows how FireBolt and the VE work together to create a verification environment for a DUT (shown in red).
The figure below shows a simplified code snippet for how a DUT (a floating-point unit in this case) can be driven, checked, and monitored. Custom CSRs are used to drive the DUT and consume the results. The reference result is computed by a function softfloat_add() (not shown in the snippet).
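Since the figure may not reproduce here, the following is a hedged reconstruction of what such a snippet could look like; the CSR addresses and helper macros are invented for illustration, and softfloat_add() is the reference function named above.

#include <stdint.h>

/* Hypothetical custom CSR addresses in the user-defined CSR space
 * (0x800-0x8FF); the real VE assignments may differ. */
#define CSR_FPU_OPA  0x800
#define CSR_FPU_OPB  0x801
#define CSR_FPU_RES  0x802

#define csr_write(csr, val) \
    __asm__ volatile ("csrw %0, %1" :: "i"(csr), "r"(val))
#define csr_read(csr, out) \
    __asm__ volatile ("csrr %0, %1" : "=r"(out) : "i"(csr))

/* Reference result from the softfloat port; as in the original
 * figure, its body is not shown here. */
extern uint32_t softfloat_add(uint32_t a, uint32_t b);

int test_fpu_add(uint32_t a, uint32_t b) {
    uint32_t actual;
    csr_write(CSR_FPU_OPA, a);     /* drive operand A into the DUT */
    csr_write(CSR_FPU_OPB, b);     /* drive operand B into the DUT */
    csr_read(CSR_FPU_RES, actual); /* consume the DUT's sum        */
    return actual == softfloat_add(a, b); /* check vs golden model */
}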
RISC-V provides us an opportunity to re-imagine Verification. VeriFire exploits the additional degree of freedom offered by RISC-V to deliver a new DV methodology that is not only practical, but also reduces verification and EDA bills by removing the dependence on UVM and commercial EDA tools.
There are several other secondary benefits of this methodology which we will cover in the next article. Stay Tuned!
Comments

FPGA/ASIC designer, open-source enthusiast (1 month ago): Not sure I understand your proposal. You're implementing drivers, sequencers, monitors, and scoreboards in C, and you let them run on a RISC-V CPU connected to the DUT. Everything is simulated in Verilator. Isn't this extremely inefficient? I could see the benefit if you emulated the whole RISC-V plus DUT on an FPGA (and I see you are working on it), but on a PC I would rather use the full power of the host.

Dave Keeshan: I was investigating something similar, to allow block-level designers to use the Arm core in a Zynq, with its AXI bus (with shims to AHB, APB, etc.), to do accelerated testing. It is super fast when it works, but hard to debug if there is an error. Since then the commercial emulators have matured, but they are so expensive. This looks interesting.

Author's reply: Dave Keeshan, we have extended this solution to work on AWS FPGAs and have also addressed the debugging problem you alluded to. We will cover that in the next article. Thanks for your feedback and comment!