Scan Test Power Normalization Using Hierarchical Test Flow



Low power test efforts in SOC design fall broadly into four categories: fine-tuning ATPG patterns, enhancing collateral to support low power ATPG, low power DFT architecture, and improving power estimation and correlation. The flow discussed here falls under the first category, fine-tuning ATPG patterns.


Currently, due to paranoia around power issues, designs tend to take a “partition” approach, avoiding testing the complete SOC in a single pass. Testing a SOC partition-wise (in multiple passes) increases test time, which in turn inflates test cost.


This power paranoia stems from multiple factors:


  • A huge gap between functional and test power, caused by increased random toggling in the design. The larger the gap, the higher the chances of failures when testing at lower voltages (Vmin). Unduly pessimistic scenarios are quite likely in ATPG patterns, especially when testing at rated frequencies (at-speed).
  • Tester capacity issues in handling the transient power droop when moving from slow “shift” to fast “capture”, causing initial patterns to fail, as seen in devices like Andorra, Rainbow, and Panther.
  • High power consumption during capture, which can even exceed package specifications.
  • The need to regenerate scan patterns if silicon failures due to power issues are seen.


In addition to the partitioned testing approach, multiple design and pattern techniques are also employed:


  • Applying power constraints during pattern generation to limit the maximum switching activity in the design.
  • Generating patterns with low power shift enabled (reduced random fill).
  • VCD-based IR drop estimation.


How Does the Hierarchical Test Flow Work?


The hierarchical test methodology allows the user to perform ATPG at the partition level instead of at the top level of the design, and then map these partition-level patterns to the top level. Users can club multiple partitions to run in parallel, if they wish, without needing to change the patterns. The partition mapping flow enables modular scan pattern generation (a bottom-up approach) in which scan patterns are mapped from the partition level to the top level.


The flow is a two-step process:


  1. Pattern generation at the partition level for all partitions.
  2. Top-level mapping of partitions and pattern merging.


Figure 1: Hierarchical Test Flow


This is essentially a divide-and-conquer approach that reduces run times and compute resource requirements, allows pattern reuse, and eases verification.
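The two-step flow above can be sketched in miniature. The data model below is a simplifying assumption for illustration, not a real ATPG tool API: each partition's patterns are represented only by their capture switching activity, and the combined activity of partitions running in parallel is taken as a gate-count weighted average.

```python
# Hypothetical sketch of the two-step hierarchical flow.
# Names and data shapes are illustrative assumptions, not a real ATPG tool API.

def generate_partition_patterns(activities):
    """Step 1: partition-level ATPG. Each partition's patterns are modeled
    only by their capture switching activity (% of that partition's logic)."""
    return {name: list(acts) for name, acts in activities.items()}

def map_to_top(patterns, sizes):
    """Step 2: top-level mapping. Patterns at the same index across partitions
    run in parallel; combined activity is the gate-count weighted average."""
    total = sum(sizes.values())
    n = min(len(p) for p in patterns.values())
    merged = []
    for i in range(n):
        act = sum(patterns[p][i] * sizes[p] for p in patterns) / total
        merged.append(act)
    return merged

sizes = {"A": 1_000_000, "B": 1_000_000}                 # gates per partition
acts = {"A": [20.0, 35.0, 25.0], "B": [30.0, 22.0, 28.0]}  # % activity per pattern
top = map_to_top(generate_partition_patterns(acts), sizes)
print(top)  # [25.0, 28.5, 26.5]
```

Because the partitions are mapped without changing their patterns, the top level simply pairs pattern *i* of A with pattern *i* of B, which is exactly why high-activity patterns can coincide, as discussed next.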


While creating patterns at the partition level, the tool only has partition-level information. Consequently, all constraints placed on the ATPG tool to keep scan test pattern power in check apply only to that partition, rather than to the entire design as they would in a top-level (non-hierarchical) ATPG flow.


This means that after mapping, patterns with high activity in multiple partitions may get clubbed together, producing a merged pattern with an even higher peak power. The same can happen with low-activity patterns, resulting in a large gap between the maximum and minimum.


For example, below is the peak capture switching activity plot of 1000 scan patterns for two partitions from one of our designs.

Figure 2: Capture Switching Activity Plot

[x-axis: pattern index, y-axis: capture switching activity (percentage design logic)]



When mapping is performed and the patterns of partitions A and B are run in parallel (grey marker), the new peak switching activity is as shown.


Figure 3: Capture Switching Activity Plot

[x-axis: pattern index, y-axis: capture switching activity (percentage design logic)]


Pmax/Pmin = 1.472                                    Pmax/Pavg = 1.149         Pavg/Pmin = 1.281


There is a 47.2% difference between the maximum and minimum capture switching activity.
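The three ratios are straightforward to compute from the per-pattern combined switching activity. The sketch below uses made-up activity numbers for illustration, not the design data behind the plots above.

```python
# Compute Pmax/Pmin, Pmax/Pavg and Pavg/Pmin from a list of per-pattern
# combined capture switching activities (% of design logic).

def power_ratios(activity):
    pmax, pmin = max(activity), min(activity)
    pavg = sum(activity) / len(activity)
    return pmax / pmin, pmax / pavg, pavg / pmin

combined = [28.0, 41.2, 33.5, 30.1, 36.8]  # hypothetical % activity values
r = power_ratios(combined)
print(tuple(round(x, 3) for x in r))  # (1.471, 1.215, 1.211)
```

A Pmax/Pmin well above 1 is the signature of the problem: the tester and power grid must be provisioned for the peak, even though the average pattern draws far less.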


Proposed Flow



Our proposal is to extend the hierarchical flow to address these power concerns. The new flow helps normalize scan pattern power:

  1. Pattern generation at partition level.
  2. Grade patterns based on power parameter.
  3. Pattern re-ordering to achieve normalization.
  4. Top level mapping of partitions and pattern merging.


Figure 4: Proposed Flow

Patterns generated at the partition level are graded based on parameters such as average capture/shift activity and peak capture/shift activity (silicon results can also be used). Once graded, the patterns are reordered to normalize the selected parameter. The last step is top-level mapping of partitions and pattern merging.
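One simple reordering policy, shown below as an assumption for illustration (the actual grading and reordering criteria may differ), is to sort one partition's patterns by activity ascending and the other's descending, so that high-activity patterns pair with low-activity ones when run in parallel and the combined activity flattens out.

```python
# Hypothetical grade-and-reorder sketch for two equal-size partitions.
# Grading metric here: per-pattern capture switching activity.

def normalize_order(acts_a, acts_b):
    """Anti-sort: pair A's lowest-activity patterns with B's highest."""
    return sorted(acts_a), sorted(acts_b, reverse=True)

def combined(a, b):
    """Combined activity when patterns run in parallel (equal-size partitions)."""
    return [(x + y) / 2 for x, y in zip(a, b)]

acts_a = [20.0, 35.0, 25.0, 40.0]
acts_b = [30.0, 22.0, 38.0, 26.0]

before = combined(acts_a, acts_b)            # original pattern pairing
a2, b2 = normalize_order(acts_a, acts_b)
after = combined(a2, b2)                     # pairing after reordering

print(max(before) / min(before))  # original Pmax/Pmin → 1.32
print(max(after) / min(after))    # normalized Pmax/Pmin ≈ 1.127
```

Since fault coverage of a scan pattern does not depend on its position in the pattern set, reordering within a partition is free: no patterns are added, dropped, or regenerated.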


With the proposed flow, if the same partitions (A and B) are mapped and run in parallel, the capture switching activity plot is as shown by the yellow marker in Figure 5.


Figure 5: Capture Switching Activity Plot (proposed v/s original)


Pmax/Pmin = 1.158       Pmax/Pavg = 1.041         Pavg/Pmin = 1.113


With the proposed flow, the gap between the maximum and minimum switching activity across patterns is reduced from 47.2% to 15.8%.




The proposed flow offers several benefits:

  • Reduced maximum peak power during scan testing, without the hit on pattern count that traditional low power techniques incur.
  • A minimized gap between maximum and minimum power during testing, reducing the chances of a power droop issue.
  • More parallelism, thereby saving test cost.
  • Reduced spikes in burn-in mode, by applying the approach to shift power normalization instead of capture power for burn-in patterns.
  • The ability to create worst-case patterns for IR drop analysis by changing step 3 of the proposed flow.
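The last bullet can be sketched by flipping the reordering policy of the earlier anti-sort idea: sorting all partitions' patterns the same way aligns their activity peaks instead of cancelling them (a hypothetical illustration, not the authors' exact method).

```python
# Worst-case variant of step 3: align peaks across partitions for IR drop
# analysis by sorting every partition's patterns descending by activity.

def worst_case_order(acts_a, acts_b):
    return sorted(acts_a, reverse=True), sorted(acts_b, reverse=True)

a, b = worst_case_order([20.0, 35.0, 25.0], [30.0, 22.0, 28.0])
peaks = [(x + y) / 2 for x, y in zip(a, b)]  # equal-size partitions
print(max(peaks))  # aligned peak: (35 + 30) / 2 = 32.5
```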


Additionally, this process can be used even after tape-out: with silicon power results in hand, we can decide which patterns to merge so as to achieve the best test time through increased parallelism, while ensuring no power-related issues impact the results.




This is a guest post by Mayank Parasrampuria
