Trends in DFT and Production Test for IoT Devices

August 09, 2016, anysilicon

Some years ago the term “System On a Chip”, shortened to “SOC”, was coined to describe chips that integrated into themselves the functions of several other chips in an electronic system. The point of this was obviously to reduce cost and form factor in situations where the cost and effort of doing such a thing was worth it. In the early days, the “system” that ended up on the chip would be a collection of digital blocks that would have been designed by different company departments, or different companies. This led to the growth of a market in digital IP: design blocks that could be sold as RTL and synthesised into a single technology.


There were obviously challenges in designing a single object that contained a potentially disparate collection of sub-designs. It would be difficult to apply a single DFT methodology across these blocks, so a typical DFT method was to partition blocks within the chip in a similar way to the boundary scan of separate components at board level. But quite rapidly, the development of DFT/ATPG standards and tools, combined with the growth of IP catalogues (some from the EDA tool vendors), helped the block boundaries dissolve and led to the standardisation of digital DFT techniques that could be applied at the top level of the chip.




Now we are seeing that SOCs are getting more complicated in terms of the number of different “circuit modes” that are present (i.e. RF, analog, power, etc.), a trend that will continue as the IoT market expands and we see the integration of functions related to sensors, actuators, controllers, communication, and so on. I expect we will see a migration from “Big D, Little A” towards “Big D, Even Bigger A”. And through this I think we will see a return of the scenario described above, where “on the chip would be a collection of [analog] blocks that would have been designed by different company departments, or different companies”, bringing with it similar DFT challenges.


There have been, and continue to be, efforts to standardise test access for analog, e.g. IEEE 1149.4, but the intense customisation and performance sensitivity of analog design hampers standardisation: chip designers would rather put effort into customised DFT than compromise their circuit performance. So it’s still quite tricky: standard methods are poorly defined and poorly supported, and standard coverage targets do not exist.


Now I’ll discuss the place where DFT ultimately matters the most: production test. The test program for a large digital chip can be substantially debugged in a few days: scan patterns, being digital, are easy to simulate and will typically be logically correct, with on-tester problems arising from internal or external timing, other electrical conditions, or systematic errors such as running ATPG on the wrong version of the design! A small number of bugs can have huge consequences. But the test of a small but highly multi-function / multi-mode chip can take months to debug: there are more sensitivities to slight errors that may not be obvious; mixed-mode simulations typically take more shortcuts than digital simulations; and there are more programmable parameters at the chip level.


So in what sort of directions is production test heading? Here are a few of them:


  • Massive self-test: let the designers go crazy with DFT objects and the control of them; put the test-debug problem in the hands of the chip designers and look for a single overall pass/fail bit.
  • Push tester complexity into the loadboard: design the subset of instrument functions that you need onto the loadboard; set up a concurrent test capability (one site, multiple clock domains).
  • Run system-like tests: have the chip’s controller core run application-like code from a flash memory on the loadboard (this also pushes the development towards system-level engineers, which may be good or bad). But be careful of boot times…
  • Challenge the need for test and tester complexity: push back on the test methods being identified by chip designers: “why do we need microvolt accuracy?”, “why does that carrier wave need to be modulated?”, “why test it functionally, when that’s not done much for digital circuits?”. For this discussion to work, you will need to understand the physical failure modes that are likely in your chip’s manufacturing process and map them to electrical behaviour. Not easy, but it will pay off.
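The “massive self-test” idea above can be sketched in a few lines. This is a hypothetical illustration, not any real chip’s firmware: the block names, the dictionary structure, and the `run_bist` stand-in are all invented, but it shows the essential reduction of many per-block self-test results into the single overall pass/fail bit that the tester observes.

```python
# Hypothetical sketch of "massive self-test": each DFT block reports
# its own pass/fail, and firmware folds everything into one bit.
# Block names and data structures are invented for illustration.

def run_bist(block):
    """Stand-in for launching a block's built-in self-test.
    On real silicon this would write control registers and poll a
    status flag; here we just return the block's canned result."""
    return block["self_test_passes"]

def massive_self_test(blocks):
    """Run every block's self-test and reduce to one pass/fail bit."""
    overall_pass = True
    for block in blocks:
        overall_pass = overall_pass and run_bist(block)
    return 1 if overall_pass else 0  # the single bit the tester reads

# Example: one failing analog block drags the whole chip to FAIL.
blocks = [
    {"name": "cpu_mbist",    "self_test_passes": True},
    {"name": "pll_lock",     "self_test_passes": True},
    {"name": "adc_loopback", "self_test_passes": False},
]
print(massive_self_test(blocks))  # -> 0 (fail)
```

The attraction, as the bullet suggests, is that the tester-side program becomes trivial; the cost is that diagnosing *which* block failed (and why) now depends entirely on the on-chip DFT and the chip designers who built it.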


But some of these strategies require caution: you may now have created a piece of industrial equipment that needs tester-like operation and maintenance capabilities: calibration, diagnostics, spares-and-repairs, and so on.


Why does all this matter? In some applications, but not all, unit cost will determine success or failure; but until you’re in production this only matters in theory, so be careful not to over-focus on cost during chip debug and bring-up, or through NPI (New Product Introduction) activities. Be ready to turn that cost focus on when you’ve delivered samples of good enough quality and you’ve got a reason to do it (i.e. success). A product that has a theoretically low cost is of no use if it’s not available when it’s needed!


This brings into focus the key concern in planning what to do and when: Return On Investment. Get a realistic, agreed volume forecast and plan to it. Every activity costs money and effort, and should bring a clear benefit that is actually required. If you live or die on the difference between the manufacturing cost and the sale price of the chip, then you should take full control of manufacturing test (this doesn’t mean execution – that part is optional). Why? Well, who cares the most about it? You do. Be in control, be aware, and find out the things you need to know.


In conclusion, I see digital DFT as well understood by those involved in chip design, and well addressed by EDA tools – I would go so far as to say that it is “easy” (not low in the effort required to do it, but low in the effort required to figure out what to do and how to do it). Analog DFT is being done OK but needs improving: “what to do” is not generally agreed, and there are traps in terms of wasted effort and oversights. And in production test, complexity is moving away from the tester and towards the chip. Test engineering is becoming more difficult and interesting again!





This is a guest post by Paul Freeman, who is CEO of PF Consulting.