Monthly Archives: December 2014


If Your Chip Is Not an SoC, It Soon Will Be

Last week’s post was addressed primarily to those of you who are already designing SoCs. We made the point that more and more SoCs have multiple processors, either homogeneous or heterogeneous, and that most or all of those processors have or soon will have caches. This led to the main conclusions of the post: that multi-processor cache coherency is necessary for most SoCs, and therefore that coherency is now a problem extending beyond CPU developers to many chip-level verification teams.


But what if you don’t have embedded processors in your design? There’s a clear sense emerging in the industry that more and more types of chips are becoming multi-processor SoCs, and most of these will require cache coherency for the CPU clusters and beyond. In this post we’ll describe the trends we see, based in part on what we learned at the recent Linley Processor Conference in Santa Clara. The world as we know it is changing rapidly, offering more challenges for verification teams but more opportunities for us to help.


Some types of chips have had embedded processors for some time. SoCs are the norm in many types of consumer devices: smart phones, tablets, cameras, printers, and more. Many people believe that the Internet of Things (IoT), sometimes known as The Internet of Everything (IoE), will be the next big driver for the SoC market. IoT chips are likely to look like smaller versions of consumer SoCs, with one or more processors, wireless connectivity, and sensors to gather real-world input.


As we mentioned last week, many SoCs have evolved to add multiple processors and many of these processors have caches, driving the need for cache coherency. We noted that many smart phones and other consumer devices already contain multiple embedded processors, and it’s just a matter of time before they contain multi-level, coherent caches. That’s why we made the claim that “if your SoC is not cache coherent, it soon will be.”


As suggested by the title of this week’s  post, we believe that other types of chips are following suit. Networking chips (routers, switches, bridges, modems, etc.) have traditionally not relied on CPUs for their main functionality.  They might have contained a control processor to configure the chip and handle exceptions, but generally this processor was not involved in the main flow of data through the chip. The chip could be verified by a traditional UVM testbench without writing test cases to run on the embedded processor.


Many networking chips are becoming true SoCs, with multiple processors, and soon many of them will have coherent cache structures as well. The Linley conference featured more than two dozen interesting talks on new and upcoming chips for a wide variety of applications. What was surprising to the Breker attendees was that many of the block diagrams looked rather similar, with multiple processors and multi-level caches. Their need for cache coherency and processor-based performance metrics is looking much like that of other SoCs.



Embedded processor IP vendors are delivering self-contained CPU clusters, as shown on the left, and SoC teams are incorporating them into chips as shown on the right. Traditional SoCs such as those for cell phones and tablets are clearly moving in this direction. However, we’re also seeing that the networking “sea of processors” or “array of processors” in a mesh-connected structure with no shared memory is giving way to fully cache-coherent multi-core designs that look much like these two diagrams.


So, we end up at the same place we ended up last week. Chips are moving to SoCs, and SoCs are demanding multi-processor cache coherency. TrekSoC and our Coherency TrekApp are out-of-the-box solutions for verifying pre-silicon multi-processor cache coherency and measuring performance under realistic system stress. TrekSoC-Si extends these benefits to hardware platforms (emulators, FPGA prototypes, and silicon in the lab) all the way to post-silicon validation. As always, please contact us to learn more.


This is a guest post by Tom Anderson, vice president of Marketing for Breker Verification Systems. This post originally appeared on The Breker Trekker blog at EDACafe.


Moore’s Law Will Not Come To An End Anytime Soon

Gordon Moore said, on the 40th anniversary of his law, that “Moore’s law is really about economics.” What did he really mean by that? When Gordon Moore put forth his law in 1965, based on his observation, those were golden years of free-market capitalism in America. The entire period of the 1950s and ’60s was a golden era of free-market capitalism. While taxes were high on the richest Americans, taxes on middle-class Americans were very low. Because middle-class Americans paid low taxes, they had very high purchasing power and hence generated strong domestic consumer demand. The US economy was not something Gordon Moore had to worry about when he made his famous observation.


What has happened since the 1970s is that, in order to sustain the relentless progress of Moore’s law, semiconductor manufacturing moved to Japan for lower manufacturing costs. When the Plaza Accord was signed in 1985, because of huge problems with US balance-of-payments deficits from free trade with Japan, the Japanese economy crashed. Hence, in order to sustain Moore’s law, the US semiconductor industry had to find alternative low-cost manufacturing locations to keep manufacturing costs down. Low labor costs in Asia acted as an incentive to move semiconductor manufacturing to China and Southeast Asia.


This also transformed the integrated device manufacturer (IDM) business model of the US semiconductor industry into a fabless business model. While Moore’s law kept progressing on the physical side by shrinking transistor dimensions, macroeconomics was completely ignored by American businesses. Trade deficits and budget deficits both started to soar. The fabless business model was indeed a win-win for both pure-play foundries and fabless semiconductor businesses. However, since the fabless model contributed to these twin deficits, sustaining the US economy, and hence Moore’s law, became difficult.




What is to be done to sustain Moore’s law? Moore’s law has both physical and economic limits. Based on Moore’s observation, the shrinking dimensions of transistors will one day meet the physical limits of scaling; it will not be possible to shrink transistors any further. Does this mean that the semiconductor industry will stop progressing? Absolutely not! The human mind has time and again overcome the technological challenges of shrinking transistor dimensions. Even if solid-state physics must give way to quantum mechanics, that would not mean the industry stops progressing. Progress in science and technology has continued for the last 50 years, and it should continue for the next 50 years and beyond so that we keep benefiting from shrinking transistor dimensions.


As pointed out above, over the last 50 years the progress of Moore’s law has meant scaling at all costs, ignoring the macro-economy in the process. No progress is sustainable without sustainable macroeconomic progress. Hence, for Moore’s law to continue for the next 50 years and beyond, it will need not just progress on the physical side but equally good progress on the economic side. Only when both physics and economics succeed will Moore’s law succeed and the semiconductor industry continue to progress.


It took me almost a full year to work on my upcoming book, “The Macroeconomics of US Microelectronics Industry.” I am confident that my proposed ideas would lead to sustainable progress on both the physical and economic fronts. The proposed solutions would transform crony monopoly capitalism into free-market mass capitalism. They would not just benefit semiconductor industry professionals but would also greatly benefit business leaders. The result would be a small but efficient government, low unemployment, a stable economy, lower taxes, steady growth in corporate profits, and an end to the speculation that results in bubble economies.


If semiconductor industry professionals are looking for such solutions for this great industry of ours as we continue to find ways to sustain the progress of Moore’s law, I certainly recommend reading my upcoming book. You can also visit my blog


to keep yourself updated on options for pre-ordering a copy of my book. I shall make sure that my book is available not just in a standard hardback edition but also in ebook formats such as an Amazon Kindle Edition, an Apple iPad/iPhone Edition, and a Barnes and Noble Nook Edition.


Be positive. The future is bright for our industry, and with my proposed solutions Moore’s law will not come to an end anytime soon. These solutions are also applicable to other industries, and hence the book would be a good resource for transforming the entire US economy into a free-market economic system.


This is a guest post by Apek Mulay, CEO of Mulay’s Consultancy Services. He is an analyst, blogger, entrepreneur, and macro-economist in the U.S. semiconductor industry.




Setup/hold interdependence in the pulsed latch (Spinner cell)

This is a guest post by Dolphin Integration, which provides IP cores, EDA tools, and ASIC/SoC design services.


The operating frequency of very large Systems-on-Chip has increased continuously over the years; frequencies of up to 1 GHz are common in modern deep sub-micrometer application-specific integrated circuits. The verification of timing in VLSI circuits is achieved by means of static timing analysis (STA) tools, which rely on data described in the cell libraries to analyze the circuit. The characterization of the individual cells in these libraries is therefore critical to the accuracy of the STA results. Inaccurate characterization of constraint timings causes the STA results to be either overly optimistic or overly pessimistic. Both cases should be avoided: the optimistic case can cause a fabricated circuit to fail, whereas the pessimistic case unnecessarily degrades circuit performance.
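
For context, here is the textbook formulation of the register-to-register checks that STA performs with these characterized values (a generic reminder, not specific to the Dolphin Integration flow), writing T for the clock period and δ for the capture-minus-launch clock skew:

```latex
% Generic register-to-register timing checks (textbook formulation,
% not taken from the article). T = clock period, \delta = clock skew.
\begin{align*}
  t_{\mathrm{clk}\to Q,\max} + t_{\mathrm{comb},\max} + t_{\mathrm{setup}} &\le T + \delta
    && \text{(setup, max-delay check)}\\
  t_{\mathrm{clk}\to Q,\min} + t_{\mathrm{comb},\min} &\ge t_{\mathrm{hold}} + \delta
    && \text{(hold, min-delay check)}
\end{align*}
```

An optimistically small characterized setup or hold value lets these checks pass on paths that can fail in silicon, while pessimistically large values waste timing margin.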

This article describes the setup/hold pairing for standard cells and proposes a new method to characterize it accurately. The first part discusses the setup/hold constraints and the existing solutions for modeling the setup/hold interdependence. The second part presents the Dolphin Integration solution for determining the setup/hold interdependence, and the last part applies the new solution to the pulsed latch (spinner system).

Setup/hold constraint definitions

Setup time

The setup time for a sequential cell is the minimum length of time during which the data-input signal must remain stable before the active edge of the clock (or other triggering signal) to ensure correct functioning of the cell.

Hold time

The hold time for a sequential cell is the minimum length of time during which the data-input signal must remain stable after the active edge of the clock (or other triggering signal) to ensure correct functioning of the cell.
Figure 1 illustrates the setup and hold times for a positive-edge-triggered sequential cell.



Measurement methodology: Setup constraint values are measured as the delay between the time when the data signal reaches 50% of Vdd and the time when the clock signal reaches 50% of Vdd.
Bisection: The bisection method is an algorithm used to search for the solution of a function. The method consists of repeatedly dividing the range of the input signal into two parts and then selecting the sub-range in which the solution of the output signal is found. The algorithm stops when the tolerance criterion for the output signal is reached.
Pass-fail: The Pass-Fail method is a particular case of the bisection method where the result for the output signal must PASS for one limit value of the range of the input signal and must FAIL for the other limit. Figure 2 presents the Pass-Fail method.


Push-out: The cell is considered functional as long as the output reaches its expected value and the delay of the output does not exceed the reference delay by more than X% (push-out methodology). The reference delay is the one measured with a very large setup time (ideally infinite). X is hereafter called the “percentage of degradation” (see Figure 3).

Minimum hold time and minimum setup time: The minimum setup time is measured when the hold time is considered infinite. In the same way, the minimum hold time is measured when the setup time is considered infinite.
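
To make these definitions concrete, here is a minimal sketch of a bisection search for the minimum setup time under a push-out criterion. This is our own illustration, not Dolphin Integration’s tool; `measure_clk_to_q(setup, hold)` is a hypothetical wrapper around a transistor-level simulation that returns the clock-to-output delay, or None if the cell fails to capture the data.

```python
def find_min_setup(measure_clk_to_q, degradation_pct,
                   lo=-1e-9, hi=5e-9, tol=1e-12):
    """Bisection search for the minimum setup time of a sequential cell.

    The hold time is kept effectively infinite (a very large value) so
    that only the setup constraint is exercised, as described above.
    """
    big_hold = 1e-6  # "infinite" hold time for this measurement

    # Reference delay: measured with a very large (ideally infinite) setup time.
    ref_delay = measure_clk_to_q(setup=1e-6, hold=big_hold)
    limit = ref_delay * (1.0 + degradation_pct / 100.0)  # push-out limit

    def passes(setup):
        delay = measure_clk_to_q(setup=setup, hold=big_hold)
        # PASS: the output reaches its value and is pushed out by less than X%.
        return delay is not None and delay <= limit

    # Pass-fail precondition: one end of the range passes, the other fails.
    assert passes(hi) and not passes(lo)

    while hi - lo > tol:              # stop on the tolerance criterion
        mid = 0.5 * (lo + hi)
        if passes(mid):
            hi = mid                  # keep the sub-range containing the frontier
        else:
            lo = mid
    return hi                         # smallest setup time that still passes
```

The minimum hold time is obtained in the same way, with the roles of setup and hold swapped.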



Setup/hold interdependence

The setup time depends on the hold time and vice versa, which means that an interdependence exists between setup and hold.


Figure 4 presents the setup/hold interdependence curve. For any point selected on the curve, the output signal delay is degraded by X% (push-out method). The hatched zone represents the region where the cell does not meet the bisection or functional criteria. Point C corresponds to the minimum setup and minimum hold times, which is what most standard cell library providers supply. This point, which lies in the hatched zone, is very optimistic, as it is far from the safety barrier represented by the curve. In some cases this characterization can affect the functionality of the circuit for paths with relatively small setup and hold slacks. In fact, the distance between the characterized point and the curve represents the positive slack actually required for setup and hold at STA level. This constraint is hidden from the SoC designer, since library providers do not supply those margins and do not elaborate on the impact of their characterization choices. Point A lies in the safe zone but is very pessimistic with respect to circuit performance, leading to a considerable reduction in circuit speed. Point B is the optimum point: it lies right at the frontier of the safe zone, which means there are no hidden constraints, without causing any significant reduction in circuit speed. Dolphin Integration, as a library provider, has therefore set up a characterization methodology that takes the interdependence into account while characterizing both the setup time and the hold time optimally, on the frontier of the functional region (point B in Figure 4).


Methodology for determining the setup/hold curve

To determine a setup/hold pair on the curve, we use the bisection algorithm combined with the push-out criterion. The X% degradation of the push-out is shared between the setup time (XS%) and the hold time (XH%). The setup/hold pair can be determined in two distinct ways: by injecting the measured setup time into the hold time determination, or conversely by injecting the measured hold time into the setup time determination, as detailed below.

To inject the setup time into the hold time determination, we first measure the setup time with an infinite hold time using the XS% push-out criterion. The hold time is then characterized with the XH% push-out criterion while re-injecting the measured setup time. The result is shown as the yellow curve in Figure 5.

To inject the hold time into the setup time determination, we first measure the hold time with an infinite setup time using the XH% push-out criterion, and then characterize the setup time with the XS% push-out criterion while re-injecting the measured hold time. The result is shown as the blue curve in Figure 5. With this method and appropriate degradation parameters, we characterize point B of Figure 4, whereas adding a fixed margin to the independently determined setup and hold times yields the pessimistic point A of Figure 4.
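
As a sketch of this re-injection flow (again our own illustration, with hypothetical helper functions rather than Dolphin Integration’s actual tooling), assume `bisect_setup(hold, pct)` and `bisect_hold(setup, pct)` each wrap a bisection loop like the one shown earlier:

```python
def characterize_pair(bisect_setup, bisect_hold, xs_pct, xh_pct):
    """Determine a setup/hold pair on the interdependence curve.

    bisect_setup(hold, pct) -> minimum setup time for a fixed hold time,
                               using a pct% push-out criterion
    bisect_hold(setup, pct) -> minimum hold time for a fixed setup time,
                               using a pct% push-out criterion
    Both are assumed to wrap a bisection loop around transistor-level
    simulations, as sketched earlier. xs_pct + xh_pct = X, the total
    percentage of degradation.
    """
    BIG = 1e-6  # "infinite" constraint value for the first measurement

    # Direction 1: inject the measured setup time into the hold determination
    # (yellow curve of Figure 5).
    setup_1 = bisect_setup(hold=BIG, pct=xs_pct)
    hold_1 = bisect_hold(setup=setup_1, pct=xh_pct)

    # Direction 2: inject the measured hold time into the setup determination
    # (blue curve of Figure 5).
    hold_2 = bisect_hold(setup=BIG, pct=xh_pct)
    setup_2 = bisect_setup(hold=hold_2, pct=xs_pct)

    return (setup_1, hold_1), (setup_2, hold_2)
```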


The setup/hold interdependence in the pulsed latch (spinner system)

The spinner system is a particular design of the pulsed latch developed by Dolphin Integration [1]. The pulsed latch represents an alternative to the conventional flip-flop for ultra high-density logic design.

The pulsed latch is denser than the flip-flop: replacing conventional flip-flops with pulsed latches yields a 10% area saving after P&R for both mature and advanced technology nodes. To benefit from this advantage of the pulsed latch (spinner system), mastery of the constraint-timing characterization is required, as illustrated with the P&R results on the Motu-Uta benchmark [2].
The characterization methodology with re-injection is illustrated with the Dolphin Integration standard cell stem SESAME-uHD-BTF in a 55 nm process (Figure 5). The choice of point B (see Figure 5) is a compromise for the setup/hold pair that provides good results in both timing and area on the Motu-Uta benchmark. The timing degradation between points C and B reduces the circuit frequency by only 1%. As a result, the circuit will be more reliable.



This article shows the importance of the choice of the characterization methodology for the Setup-Hold pair. The solutions proposed by Dolphin Integration provide the best compromise between circuit speed and reliability.


This paper has showcased a study of the setup/hold interdependence. It examined different existing characterization methods and presented a new method to determine the setup/hold pairing for standard cells. This new method, developed by Dolphin Integration, is applied in particular to the pulsed latch (spinner system) in order to obtain the best compromise between circuit speed and reliability.


[1] ChipEstimate – 2013-02-26 – Spinner System: optimized design and integration methodology based on pulsed latch for drastic area reduction in logic designs
[2] ChipEstimate – 2010-02-23 – Choosing the best Standard Cell Library without falling into traps of traditional benchmarking methods





Click here to learn more about Dolphin Integration products and services.


The IP Blame Game

This is a guest post by Methodics, which delivers state-of-the-art semiconductor data management (DM) for analog, digital, and SoC design teams.

The topic of IP quality in the SoC era is difficult to define, and solutions to problems relating to IP quality, verification, and use are hard to find. Debates rage between IP users, suppliers, and EDA vendors about where the responsibility lies for making quality IP available for use and re-use in an efficient, predictable, and scalable manner.


The use of IP—whether internally developed or sourced from a third party—is inherently complex. IC and SoC projects require a large volume of IP blocks, and the difficulty of managing this volume is compounded by factors such as geographically diverse design teams, a lack of standards for IP use and quality, and shifting design parameters. Today, many design organizations struggle to keep project data organized properly and to communicate changes effectively. Finally, exacerbating the situation, companies suffer from poor or no permission management strategy, bad performance, inconsistent data management systems, and spiraling disk/network resource requirements.



While there are many tools available to help verify, debug, assemble, and otherwise manipulate IP, there’s a distinct absence of solid design data management systems that address the specific needs of IC and SoC designers. As a result, IP use often suffers from a bad rap, at least when quality is at issue. Users blame providers, and tool vendors and CAD managers are often caught in the middle, trying to put together solutions that track changes, understand and monitor IP use and quality with models, and offer some degree of version control. Complicating matters is the fact that the term “IP quality” means different things to different people. Is IP quality 1) the functional correctness of the IP, i.e., does it work the way it is supposed to (is it bug free)? Or 2) defined by the IP’s ability to do what is expected with respect to design parameters such as power, timing, area, etc.?


Developing and integrating quality IP by either or both of those definitions requires a system that can effectively track changes and input across the entire design team at the desktop level, provide real-time access to a wide range of metadata and quality information on IP, and keep project managers and other senior management informed on how the use of IP is impacting schedules, budgets, and design resources.


Historically, there has been no single way to control, measure, or manage the use of IP in IC and SoC projects. In the past, designers used relatively simplistic RCS/CVS file versioning to manage changes in designs. Over the years, next-generation DM (data management) tools emerged that improved performance and reliability. These tools added a layer of abstraction over the file versioning problem. As designs became more complex and design teams diversified, it became common to have multiple DM repositories, and even multiple DM tools, used on a single project. Companies have also addressed IP management problems through proprietary solutions, or have tried to integrate enterprise PLM (product lifecycle management) systems.


These approaches have not solved the problems efficiently and effectively; rather, they are time-consuming and distracting for an organization that needs to be focused on IC and SoC design rather than on project management systems. Further, none provides a complete way to address the specific needs of a complex SoC project.


An SoC-oriented design data management system as shown on the right can dramatically improve IP quality. Improvements in the way designers can access IP information ‘on-the-fly’ and use it to ensure they are utilizing functionally-correct and design-appropriate IP will pay huge dividends. Of course, such a system must be easily integrated within the existing design flow and be non-disruptive to designers. If implemented correctly, a robust DM solution not only ensures higher levels of IP quality, but will result in significant improvements in designer productivity, development costs, and time to market. And, maybe even end the finger pointing!


Mixed Signal Design & Verification Methodology for Complex SoCs

This is a guest post by S3 Group, which provides design, verification, and implementation of the most complex IC solutions.

This paper describes the design and verification methodology used on a recent large mixed-signal System-on-Chip (SoC), which contained radio frequency (RF), analog, mixed-signal, and digital blocks on one chip.

We combine a top-down functional approach, based on early system-level modelling, with a bottom-up performance approach based on transistor-level simulations, in an agile development methodology. We look at how real-valued modelling, using the Verilog-AMS real-valued wire (wreal) data type, achieves shorter simulation times in large SoCs with high-frequency RF sections, low-bandwidth analogue base-band sections, and appreciable digital functionality including filtering and calibration blocks. We obtain further system-level verification and confirmation of block design through periodic S-parameter analysis, which allows simulation of certain performance parameters (e.g., noise figure and gain) for a full analogue chain. We discuss the importance of sub-block analogue co-simulation, along with the importance of correlation between behavioural models and transistor-level schematics to ensure representative behaviour of the blocks. We use a recent complex SoC design as a test case to provide a practical illustration of the problems that were encountered and the solutions employed to overcome them.
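
To illustrate the idea behind real-valued modelling (this is a generic Python sketch of the concept, not Verilog-AMS code and not S3 Group’s model), an analogue block is reduced to a simple arithmetic update on real-valued samples, which is why such models simulate orders of magnitude faster than transistor-level netlists:

```python
import math

def rc_lowpass(samples, fc_hz, fs_hz):
    """Real-valued behavioural model of a first-order RC low-pass filter.

    Instead of solving the transistor-level circuit equations, the block is
    reduced to one arithmetic update per sample -- the essence of
    wreal-style real-valued modelling.
    """
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)   # discrete-time approximation of the RC pole
        out.append(y)
    return out

# Example: a 1 MHz tone sampled at 100 MS/s through a 500 kHz low-pass model.
fs = 100e6
tone = [math.sin(2.0 * math.pi * 1e6 * n / fs) for n in range(1000)]
filtered = rc_lowpass(tone, fc_hz=500e3, fs_hz=fs)
```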

Click on the link to read the article:


White Paper: Mixed Signal Design & Verification Methodology




Medical ASIC Design for IoT: Meeting FDA Guidelines

This is a guest post from Neil Miller, Engineering Manager at Nuvation Engineering, a U.S. and Canadian-based provider of complex electronic product development and design services.


In the past few years, the market for IoT devices has exploded, opening up a whole new world of possibilities for telehealth and medical applications. Advancements in sensor design, battery life, and wireless networking technologies have allowed everything from insulin pumps to pacemakers to be connected to the internet. To successfully launch a new medical product in the IoT market, manufacturers need to understand the regulations that ensure medical device designs are reliable, safe, and secure.

The FDA recently released guidelines pertaining to wireless medical device design, development, testing and use. We’ll address a few of their recommendations here.


Selection and performance of wireless technology

Prior to selecting a wireless technology, medical device manufacturers should consider whether their intended product and use case are suited to a wireless environment. Assuming that they are, the FDA recommends:

  • Choosing an appropriate wireless technology (e.g., WMTS, IEEE 802.11) and RF frequency of operation for the intended application.
  • Considering risks such as data corruption or loss and interference from simultaneous transmitters in a given location, which can increase latency and transmitted signal error rates
  • Considering backups as a mitigation in the event that the RF wireless link is lost or corrupted

The choice of wireless technology will depend on a number of factors, including range, battery life, data sampling rate, coexistence, and operating spectrum, just to name a few. Of course, the actual application of the medical device is also important. Some frequently monitored human biological signals and their associated sample rates are listed in the table below.[1]


[Table: frequently monitored biological signals and their associated sample rates]


The optimal wireless technology can be determined through careful analysis of the product requirements and environmental limitations. Personal health devices usually operate in the 2.4 GHz industrial, scientific, and medical (ISM) radio band, which is used by Bluetooth, Wi-Fi, and ZigBee. Refer to the table below for some frequently used wireless standards for medical devices.

Depending on how much data is being transferred, which medical device is talking, and where the data is being transferred to, a mesh network of medical devices or a gateway device may be used.

Pre-certified wireless SoC modules are quite ubiquitous, enabling medical device OEMs to achieve a quick time to market. Nuvation design partner Texas Instruments offers various wireless SoC modules that have been pre-certified for medical applications, using standards such as Wi-Fi, Bluetooth, Sub-GHz, and ZigBee.




Wireless SoC OEMs provide documentation in support of the regulatory status of their SoC modules (e.g., FCC/Industry Canada for North America and CE for Europe). A statement is usually found on the datasheet or product specification. When using pre-certified wireless modules, designers need to pay close attention to the constraints under which the module passed regulatory approval. For example, a specific type of antenna may be required, there may be recommended PCB placement or other layout guidelines, or required values for external components. Any deviation can affect whether a pre-certification credit can be claimed, which may result in expensive regulatory testing and delays in product launch.


Wireless coexistence

The FDA recommends taking into account other wireless technologies and users that might be expected to be in the vicinity of the wireless medical device. Coexistence depends on frequency, space, and time. The likelihood of coexistence of medical devices is increased if the separation of channels between wireless networks is increased (frequency), the signal-to-interference ratio (SIR) of the intended received signal is increased (space), and the overall occupancy of the wireless channel is decreased (time). The designer should consider situations where multiple devices will be in close proximity, such as in a hospital.
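
For reference, the signal-to-interference ratio mentioned above is simply the ratio of the received power of the intended signal to the aggregate power of the interfering transmitters, usually expressed in decibels (a standard definition, not taken from the FDA guidance):

```latex
% Standard definition of the signal-to-interference ratio, in dB:
\mathrm{SIR_{dB}} = 10 \log_{10}\!\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{interference}}}\right)
```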

Designers should also select a wireless technology that has coexistence features built in. Bluetooth, for example, uses Adaptive Frequency Hopping (AFH) to facilitate coexistence with Wi-Fi devices. Texas Instruments has a proprietary wireless audio technology that uses built-in adaptive frequency-hopping algorithms to minimize interference.


EMC of the wireless technology

The FDA recommends:

  • EMC should be an integral part of the development, design, testing, and performance
  • Conformance to the IEC 60601-1-2 standard or other appropriate standards




Best practices for EMC compliance include things like good enclosure design, thorough signal integrity analysis, and the use of shielded connectors. At Nuvation we conduct Design for EMC Compliance reviews as part of the design risk mitigation.

There are many regulatory organizations and associated standards for medical devices, depending on the technology and application. Some of the common ones for IoT devices are listed in the table on the left.


Information for proper set-up and operation

The FDA recommends providing users with the specific RF wireless technology type (e.g., IEEE 802.11b), the characteristics of the modulation, the effective radiated RF power, and a warning label. While this seems straightforward, products can fail certification because manufacturers fail to adhere to the documentation requirements outlined in the medical standards.

Of course, there is much more involved with meeting the FDA requirements and successful medical product development. Nuvation has delivered many electronic product designs for various medical and IoT applications, with clients including Abbott Laboratories, Boston Scientific, PediaVision, and Numera. Contact Nuvation to learn how we can accelerate the time to market for your products.


[1] Source:
