MathLib + VUnit
Lars Asplund
Main author and maintainer of VUnit. ASIC/FPGA/DSP Developer consulting for Q-Free ASA
This is the first of a series of posts in which I will review and provide personal insights on the presentations featured in the VUnit track at FPGAworld. In this review, I will cover the second session, which was hosted by Kunal Panchal, a UVM verification lead at Ericsson. The presentation was a collaborative effort with Srinivasan Venkataramanan (AsFigo), Balaji Chirumamilla (Capgemini Engineering), Anirudh Pradyumnan Srinivasan (King’s College London), and Deepa Palaniappan (AsFigo).
Beyond his role at Ericsson, Kunal is actively engaged in several side projects, one of which is the open-source MathLib project. During the presentation, Kunal elaborated on the MathLib project's objectives and on the role that VUnit plays in its verification.
For those of you who have followed my previous writings, it should come as no surprise that I advocate short code and test iterations to deal with the continuous stream of uncertainties/bugs/defects in everyday development. I don't endorse this principle because it's promoted by management gurus and books, but because control theory dictates that this is the only way to successfully deal with uncertainties. To illustrate my point, I often provide a physical example, like the double pendulum below, showing how minute uncertainties can accumulate and cause the system to spin out of control.
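The double pendulum itself takes some effort to simulate, but the underlying point can be sketched with a much simpler chaotic system. The snippet below uses the logistic map as a stand-in for the pendulum (the choice of system and all parameter values are my own illustrative assumptions): two starting points that differ by one part in a billion quickly end up on unrelated trajectories.

```python
# Sensitivity to initial conditions, using the logistic map as a simpler
# stand-in for the double pendulum. r = 4.0 puts the map in its chaotic
# regime; the starting values are arbitrary illustrative choices.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion.
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

# The tiny initial uncertainty is amplified step by step until the two
# trajectories bear no resemblance to each other.
initial_gap = abs(a[0] - b[0])
final_gap = abs(a[-1] - b[-1])
print(f"initial gap: {initial_gap:.1e}, gap after 50 steps: {final_gap:.3f}")
```

No amount of extra precision in the starting value fixes this; it only delays the divergence, which is exactly why compensating along the way matters.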
I always fear that these physical examples will be seen as nice visual metaphors that are not fully relevant to reality. However, this is not the case. It's the same control theory, only a different process/system.
Using examples from our own domain is a bit harder because experiences people have with VUnit or even the fact that they use VUnit is not public information. Having a user conference like this changes that. All of a sudden there is open real-world data to study. In the case of MathLib there is also another dimension to this.
When we have a process or system with uncertainties there are basically two approaches to address the issue:
1. Eliminate the uncertainties
2. Continuously measure deviations and make the necessary adjustments. Timing is of utmost importance here, as we need to keep pace with the emerging uncertainties to be successful. This is the well-established feedback principle from control theory.
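The second approach can be sketched in a few lines. In this toy model (the disturbance model and the gain value are my own assumptions, chosen only for illustration), a deviation accumulates random disturbances every step; with prompt feedback a fraction of the measured deviation is corrected each step, and the deviation stays bounded, while without feedback it drifts without limit.

```python
import random

def simulate(steps=1000, feedback=True, gain=0.5, seed=1):
    """Accumulate random disturbances; optionally correct a fraction of
    the measured deviation every step (the feedback principle)."""
    rng = random.Random(seed)
    deviation = 0.0
    worst = 0.0
    for _ in range(steps):
        deviation += rng.uniform(-1.0, 1.0)  # a new uncertainty each step
        if feedback:
            deviation -= gain * deviation    # measure and adjust promptly
        worst = max(worst, abs(deviation))
    return worst

print("worst deviation without feedback:", simulate(feedback=False))
print("worst deviation with feedback:   ", simulate(feedback=True))
```

With a gain of 0.5 the corrected deviation is provably bounded below 1.0, whereas the uncorrected run is a random walk whose excursions grow with time. Lowering the gain, or correcting only every N-th step, models slow feedback and lets the deviation grow accordingly.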
MathLib is an example of the first. Its core mission revolves around mitigating the uncertainties stemming from the use of mathematical operations across various modeling languages. In a typical scenario, system architects model system behavior in Matlab, while verification, often constrained by limited Matlab licenses, is based on HDL models. Small differences in the mathematical operations provided by the languages accumulate and what's being verified is not what was intended.
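A concrete example of such a difference between languages: Matlab's round() rounds halves away from zero, while Python 3's built-in round() rounds halves to even ("banker's rounding"). The sketch below models Matlab-style rounding in Python to show how the same data can yield different results; matlab_style_round and the sample data are my own illustration, not MathLib code.

```python
import math

def matlab_style_round(x):
    """Round half away from zero, as Matlab's round() does."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

samples = [0.5, 1.5, 2.5, 3.5]

# Python's round-half-to-even: 0 + 2 + 2 + 4 = 8
python_sum = sum(round(x) for x in samples)

# Matlab-style round-half-away-from-zero: 1 + 2 + 3 + 4 = 10
matlab_sum = sum(matlab_style_round(x) for x in samples)

print(python_sum, matlab_sum)  # same data, two different results
```

Per-sample the error is at most one least-significant step, but in an accumulating computation, like a filter or an integrator, such steps add up, and the HDL model ends up verifying something other than what the architect intended.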
The MathLib project addresses this problem by providing HDL implementations of widely used Matlab functions, designed to mirror the original behavior. The initial focus centers on SystemVerilog implementations but VHDL support is also on the project's roadmap.
The feedback approach to dealing with uncertainties resonates well with the other part of Kunal's talk: the FPGA/ASIC development process is itself a process of uncertainties, and we need feedback in the form of tests and reviews to compensate for the flaws and bugs in our design and implementation. Just like the pendulum, the project runs out of control if we cannot keep up with the bugs/uncertainties as they appear. Kunal presented this graph:
The longer a bug goes undetected, the more damage it can do and the more costly it becomes.
Around 2010, when the concepts of VUnit first took shape, the EDA verification industry was predominantly focused on SystemVerilog, the emerging UVM standard, and the role of the verification engineers in mastering these products. The spotlight was on the testing phase where the cost of defect detection is relatively high. Don't get me wrong, we need verification expertise and individuals who concentrate on the broader context. However, it's important to recognize that the majority of bugs are simple and entirely comprehensible to the designers who introduced them. Allowing the bugs to leak into the system and the organization in charge of its development is highly counterproductive.
Conversely, in the software industry, there was a significant emphasis on early-phase verification to avoid the costs associated with late-discovered defects. The focus wasn't solely on early verification; development also followed a highly iterative approach with short code/test iterations, all aimed at swiftly identifying and addressing issues. "Test early and often" became the guiding principle. Kunal illustrated the absence of this principle as a vicious circle of insufficient testing.
Unit testing emerged as the primary tool employed by software developers to adhere to the "test early and often" principle. However, it wasn't limited to just that. Test-Driven Development (TDD), a widely embraced unit testing approach, advocates the creation of tests before writing the code itself as a way to guide the design of the code. This proactive approach moves defect detection to an even earlier stage in the development process.
Some flavors of unit testing have also included semi-formal test naming conventions. Before writing tests, and before writing code, the emphasis is on naming test cases in a way that eliminates ambiguity. These names take on the role of low-level requirements and aim to avoid defects often introduced when transitioning from requirements to design and implementation.
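A minimal sketch of what such naming looks like in practice, written pytest-style in Python. The saturating adder and every test name below are hypothetical examples of mine, not MathLib or VUnit code; the point is that each name states one unambiguous requirement that can be reviewed before any implementation exists.

```python
# Test names written first, acting as low-level requirements for a
# hypothetical 8-bit saturating adder.

MAX_VALUE = 255  # unsigned 8-bit saturation limit

def saturating_add(a, b):
    """Add two non-negative integers, clamping the result at MAX_VALUE."""
    return min(a + b, MAX_VALUE)

# Each name states exactly one requirement; ambiguity in a name can be
# caught in review before a single line of design code is written.
def test_result_equals_plain_sum_when_no_overflow():
    assert saturating_add(100, 27) == 127

def test_result_saturates_at_max_value_on_overflow():
    assert saturating_add(200, 100) == MAX_VALUE

def test_adding_zero_is_the_identity():
    assert saturating_add(42, 0) == 42

# A plain runner; pytest would normally collect these functions by name.
for test in (test_result_equals_plain_sum_when_no_overflow,
             test_result_saturates_at_max_value_on_overflow,
             test_adding_zero_is_the_identity):
    test()
print("all tests passed")
```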
While commitment to a concept like early and frequent testing is crucial, its practical execution is not possible without a test framework purposefully designed to support it. This realization led to the creation of VUnit (and other unit testing frameworks), and many of the motives behind MathLib's adoption of VUnit can be traced back to these fundamental ideas:
The VUnit support for these ideas has largely shaped the user group, but does it really matter? Well, we've seen the control theory argument and the reasons why MathLib started to use VUnit, but there is also another indication, coming from the Wilson Research Group Functional Verification Study, conducted by Siemens every other year. This study offers many interesting data points, one of which was highlighted during Kunal's presentation:
VUnit and other tools not explicitly mentioned in the following survey question were not featured in the findings of this study:
Which Testbench base-class library methodologies does your project currently use? (Please select all that apply.)
1. Accellera Universal Verification Methodology (UVM)
...
11. Other (please specify)
I was curious as to why and reached out to Harry Foster, the chief architect of the survey, to learn more. It turns out that the formulation of this question has remained unaltered over the years to enhance the reliability of trend analysis. However, Siemens has received objections regarding the terminology "Testbench base-class library methodologies" and its applicability to the "other" tools. As a result of this ambiguity, the response frequency in that category was deemed unreliable and the data was not presented.
During our discussions, I was also given additional insights, one of which relates to the bug escape graph. Two tools, VUnit and cocotb, stood out with a significantly higher presence within the group of survey participants who reported no bug escapes. The prevalence of these tools in the bug-free group was 3 to 4 times higher than in the group with bug issues.
As is the case with any survey, the results are subject to interpretation, and the natural question that arises is: why this difference? Is it possible that these tools offer unique capabilities for uncovering bugs? While I believe that VUnit has a very comprehensive and sometimes unique feature set, the effectiveness of these tools ultimately depends on the users' ability to craft good tests. The feature set is hardly the only explanation.
Could it be that these tools are favored for less complex projects, where the risk of bugs is inherently lower? This is an opinion that occasionally surfaces, but there is no factual basis for such a misconception, as clearly demonstrated in this article.
Is it possible that VUnit and cocotb have a higher proportion of experienced users who excel above the average? To assess verification maturity, the Siemens survey used the adoption of constrained random verification as a measure. UVM and VUnit led the pack, with approximately 70% of users embracing this technique. However, cocotb had a considerably lower adoption rate of this methodology, which suggests that this explanation may not fully account for the observed differences. Another observation to highlight here is that, although VUnit has drawn inspiration from software development practices, it remains fully compatible with conventional hardware verification techniques like constrained random.
I suspect the disparity in bug escapes is related to the culture of testing early and frequently. While cocotb may not emphasize this principle as explicitly as VUnit does, it is built entirely on a software language, and its users are inevitably exposed to the world of unit test-driven software development. Cocotb also supports one of the most popular Python unit test frameworks available: pytest.
After all the praise for VUnit, I believe it's time to address some constructive criticism. Kunal approached me before the presentation with a request to include a slide highlighting areas where VUnit could improve. I wholeheartedly agreed because being part of an open community also means being transparent about areas that require enhancement.
Kunal pointed out that VUnit doesn't offer the same level of educational materials and support packages for SystemVerilog as it does for VHDL. I concur with this observation. VUnit has its origins in the VHDL community, and we have only incorporated basic SystemVerilog support because it naturally complemented the project. Since SystemVerilog provides more built-in language features, some of the support packages we've developed for VHDL aren't directly applicable, and there are also other support packages available, such as UVM.
However, these externally provided features don't necessarily align with the VUnit philosophy which means there is room for improvements. Given that the VUnit core team predominantly focuses on VHDL, we welcome more SystemVerilog-driven users to actively engage with us. Kunal is already on the task and hopefully we can showcase advancements in the near future.
These were my personal reflections and it became more of a VUnit philosophy lesson than an actual presentation review. Don't worry though, we've got you covered. You can view the full presentation here.