Evolution of the Test Bench

April 11, 2016, anysilicon

Nothing is permanent except change, and need constantly guides innovation. Taking a holistic view of a theme sheds light on the evolution of the subject. In the pursuit of periodically doubling transistor counts, design representation has shifted from transistors → gates → RTL and now to synthesizable models. As a by-product of this evolution, ensuring functional correctness became an ever-growing, unbounded problem. The test bench is the core of verification and has witnessed radical changes to tackle this issue.




While traces of the very first test bench are hard to find, formal test benches entered the scene with the advent of HDLs. These test benches were directed in nature, developed in Verilog/VHDL, and mostly dumb. Simulations were completely steered by the tests, i.e. the stimuli and checkers mostly resided inside the tests. Soon variants of this scheme appeared wherein the HDLs were coupled with other programming languages like C/C++ or scripting languages like Perl & Tcl. Typical characteristics of that period were long-lived process nodes, multiple re-spins as the norm, elongated product life cycles and relatively less TTM pressure. For most of the 90s these test benches facilitated verification and were developed and maintained by the designers themselves. As design complexity increased, an independent pair of eyes was needed to verify the code. This need launched the verification team into the ASIC group! The directed verification test suite soon became a formal deliverable, with the number of tests defining progress and coverage. After sustaining for almost a decade, this approach struggled to keep pace with the ramifications in the design space. The prime challenges teams started facing include –


– Progress was mostly linear and directly proportional to the complexity, i.e. the number of tests to be developed

– Tests were written strictly with respect to the clock period, and a slight change in the design would lead to a lot of rework

– Maintaining the test suite through changes in the architecture/protocol was manual & quite time-consuming

– Poor portability & reusability across projects or different versions of the same architecture

– High dependency on the engineers developing the tests, in the absence of a standard flow/methodology

– Corner case scenarios limited by the experience and imagination of the test plan developer

– Absence of feedback to confirm that a test written for a specific feature really exercised that feature correctly


Though burdened with this list of issues, remnants of this approach still survive in legacy code.


In the last decade, SoC designs picked up and directed verification again came to the forefront. With processor(s) on chip, integration verification of the system is achieved mainly with directed tests developed in C/C++/ASM targeting the processor. Such an approach is required at SoC level because –


– The scenario should run on the SoC using the processor(s) on chip

– Debugging a directed test case is easier, given the prolonged simulation times at SoC level

– The focus is on integration verification, where the test plan is relatively straightforward

– Constraints for CRV need to be stricter at SoC level than at block level, and fine-tuning them involves many debugging iterations, which is costly at SoC level
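At its core, such a directed test is fixed stimulus plus an explicit check. The sketch below models this in Python purely for illustration; a real SoC test would be C or assembly running on the on-chip processor against real bus addresses, and the register names and offsets here are invented:

```python
# Toy model of a memory-mapped peripheral: a dict stands in for the
# register file that a real test would reach through physical addresses.
class FakePeripheral:
    def __init__(self):
        self.regs = {0x0: 0xDEAD0001,  # read-only ID register (invented)
                     0x4: 0x00000000}  # read/write scratch register (invented)

    def read(self, offset):
        return self.regs[offset]

    def write(self, offset, value):
        if offset != 0x0:              # the ID register ignores writes
            self.regs[offset] = value & 0xFFFFFFFF

def directed_connectivity_test(dev):
    """Fixed stimulus, explicit check: the essence of a directed test."""
    for pattern in (0x00000000, 0xFFFFFFFF, 0xA5A5A5A5, 0x5A5A5A5A):
        dev.write(0x4, pattern)        # drive the stimulus
        got = dev.read(0x4)            # read back through the bus
        if got != pattern:
            return f"FAIL: wrote {pattern:#010x}, read {got:#010x}"
    if dev.read(0x0) != 0xDEAD0001:    # ID check proves the block is wired in
        return "FAIL: bad ID register"
    return "PASS"

result = directed_connectivity_test(FakePeripheral())
print(result)
```

The test either passes or pinpoints the exact write that failed, which is what makes directed tests easy to debug at SoC level.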


While the above points justify the need for a directed approach to verify an SoC, the challenges start unfolding once the team surpasses the basic connectivity and integration tests. Developing scenarios that mimic use-cases, concurrent behaviour, and performance and power monitoring is an intricate task. Answers to this problem are evolving, and once widely accepted they will complement the SoC directed verification approach, further evolving it into a defined methodology.


The directed verification approach isn’t dead! It has lived since the start of verification, and the Keep It Simple Silly principle will continue to drive its existence for years to come. In the next part, we check out the evolution of CRV and its related HVLs and methodologies.


Progress on verification was linearly related to the number of tests developed and passing. There was no concept of functional coverage, and even the usage of code coverage was limited. Apart from HDLs, programming languages like C & C++ continued to support the verification infrastructure. Managing the growing complexity under constant pressure to compress the design schedule demanded an alternate approach to verification. This gave birth to a new breed of languages – HVLs (Hardware Verification Languages).




The first in this category was introduced by Verisity and is popularly known as the ‘e’ language. The language was based on AOP (Aspect Oriented Programming) and required a separate tool (Specman) in addition to the simulator. It spearheaded the entry of HVLs into verification and was followed by ‘Vera’, based on OOP (Object Oriented Programming) and promoted by Synopsys. Alongside these two languages, SystemC tried to penetrate this domain with support from multiple EDA vendors but never gained wide acceptance. The main idea promoted by all these languages was CRV (Constrained Random Verification). The philosophy was to empower the test bench with drivers, monitors, checkers and a library of sequences/scenarios. Test generation was automated, with the state-space exploration guided by constraints and progress measured using functional coverage.
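The CRV idea can be sketched in a few lines of Python (a language-neutral illustration, not e/Vera syntax; the transaction fields, constraints and bins are invented for this example): constraints prune the raw random space, and functional coverage bins measure how much of the interesting space the generated stimuli actually exercised.

```python
import random

# Rejection sampling stands in for a constraint solver: keep drawing
# until the constraints hold. HVLs provide this natively and far more
# efficiently.
def gen_transaction(rng):
    while True:
        addr = rng.randrange(0, 0x1000)
        size = rng.choice([1, 2, 4])       # transfer size in bytes
        burst = rng.randrange(1, 17)       # 1..16 beats
        if addr % size != 0:               # alignment constraint
            continue
        if size < 4 and burst > 8:         # narrow transfers => short bursts
            continue
        return {"addr": addr, "size": size, "burst": burst}

# Functional coverage model: bins over size x burst-length category.
coverage = {(s, b): 0 for s in (1, 2, 4) for b in ("short", "long")}

def sample(txn):
    bucket = "short" if txn["burst"] <= 8 else "long"
    coverage[(txn["size"], bucket)] += 1

rng = random.Random(42)
for _ in range(1000):
    sample(gen_transaction(rng))

hit = sum(1 for count in coverage.values() if count > 0)
print(f"functional coverage: {hit}/{len(coverage)} bins hit")
```

Note that the second constraint makes the narrow-transfer/long-burst bins unreachable, and the coverage report exposes exactly that: coverage is what turns “we ran N random tests” into a quantitative answer to “are we done”.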



As adoption of these languages spread, early adopters started building proprietary methodologies around them. To modularize development, each organization developed its own BCLs (Base Class Libraries). Maintaining local libraries and continuously improving them while ensuring simulator compatibility was not a sustainable solution. The EDA vendors came forward with methodologies for each of these languages to resolve this issue and standardize usage of the language. Verisity led the way with eRM (e Reuse Methodology), followed by RVM (Reference Verification Methodology) from Synopsys. These methodologies helped put together a process for moving from block to chip level, and across projects, in an organized manner, thereby laying the foundation for reuse. Though verification was progressing at a fast pace with these entrants, some inherent issues with these solutions left the industry wanting more. The drawbacks include –


– Requirement for an additional tool license beyond simulator

– Simulator efficiency suffered because control had to be passed back and forth to this additional tool

– These solutions had limited portability across simulators

– As reusability picked up, finding VIPs based on the HVL was difficult

– Hardware accelerators started picking up, and these HVLs couldn’t complement them completely

– Ramp up time for engineers moving across organizations was high


SystemVerilog

To move to the next level of standardization, Accellera decided to improve on Verilog instead of driving e or Vera as the industry standard. This led to the birth of SystemVerilog, which proved to be a game changer in multiple respects. The primary motivation behind SV was to have a common language for design & verification, addressing the issues with the other HVLs. The initial thrust for SystemVerilog came from Synopsys, which declared Vera open source and extended its contribution to the definition of SystemVerilog for verification. Further, Synopsys in association with ARM moved RVM to VMM (Verification Methodology Manual), based on SystemVerilog, providing a framework for early adopters. With IEEE recognizing SV as a standard (1800) in 2005, the acceptance rate increased further. By this time Cadence, after its quest to promote SystemC as a verification language, had acquired Verisity. eRM was transformed into URM (Universal Reuse Methodology), which supported e, SystemC and SystemVerilog. This was followed by Mentor proposing AVM (Advanced Verification Methodology), supporting SystemVerilog & SystemC. Though SystemVerilog settled the dust by claiming the largest footprint across organizations, the availability of multiple methodologies introduced inertia into industry-wide reusability. The major issues faced include –


– Learning a new methodology almost every 18 months

– The methodologies had limited portability across simulators

– A verification environment developed using a VIP from one vendor was not easily portable to another

– Teams were confused about the roadmaps of these methodologies, given varying industry adoption


Road to UVM


To tame this problem, Mentor and Cadence merged their methodologies and came up with OVM (Open Verification Methodology), while Synopsys continued to stick with VMM. Though the problem was reduced, there was still a need for a common methodology, and Accellera took the initiative to develop one. UVM (Universal Verification Methodology), largely based on OVM and deriving features from VMM, was finally introduced. While IEEE recognized ‘e’ as a standard (1647) in 2011, it was already too late. Functional coverage, assertion coverage and code coverage all joined together to provide the quantitative metrics to answer ‘are we done’, giving rise to CDV (Coverage Driven Verification).



This is a guest post by Gaurav Jalan, general chair at DVCON India.