Monthly Archives: April 2015


Are Power Planes Necessary for High Speed Signaling?

The performance of a system depends heavily on the communication speed between integrated circuits, which is constrained by the power delivery networks (PDNs). The disruption between the power and ground planes based on the low target impedance concept induces return path discontinuities during data transitions, which create displacement current sources between the power and ground planes. These sources induce excessive power supply noise, which can only be reduced by increasing the capacitance requirements through new technologies such as thin dielectrics, embedded capacitance, high frequency decoupling capacitors and other methods. The new PDN design proposed here using power transmission lines (PTLs) enables both power and signal transmission lines to be referenced to the same ground plane so that a continuous current path can be formed. Extensive simulations and measurements using the PTL approach are shown to demonstrate the enhanced signal integrity as compared to the currently practiced approaches.

 

1. Introduction

A power delivery network (PDN) is the network that connects the power supply to the power/ground terminals of the ICs. In conventional design of PDNs, the PDN impedance is required to be less than the target impedance over the frequency range of interest to minimize the IR drop and to suppress the inductive noise during data transitions. As a result, most PDNs in high-speed systems consist of power and ground planes to provide a low-impedance path between the voltage regulator module (VRM) and the integrated circuit (IC) on the printed circuit board (PCB), as shown in Figure 1.

Figure 1: Power distribution network using power and ground planes.
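For context, the target impedance referred to above is commonly estimated with a rule of thumb that this excerpt does not spell out: the allowed supply ripple divided by the expected transient current,

$$ Z_{target} = \frac{V_{DD} \times \text{ripple}}{\Delta I_{transient}} $$

so that, for example, a 1 V supply with 5% allowed ripple and a 5 A transient demand calls for a PDN impedance of roughly 10 mΩ over the frequency range of interest (these figures are illustrative and are not taken from the paper).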
Recently, on-board chip-to-chip communication is being pushed from several Gbps towards tens of Gbps due to the demand for higher data rates [1]. Single-ended signaling is widely used for memory interfaces, but it suffers from simultaneous switching noise (SSN), crosstalk, and reference voltage noise [2]. Although differential signaling is free from common-mode noise, it doubles the on-chip interconnect count, off-chip printed circuit board (PCB) trace count, and I/O pin count [3], which results in higher cost. To achieve better signal integrity at lower expense, several researchers have studied pseudo-differential signaling schemes [3]-[8]. The original pseudo-differential signaling adds a reference line after a group of data lines, usually limited to four, which results in N+1 physical lines routed in parallel to communicate N signals. Further improved versions of pseudo-differential signaling schemes have since been proposed, including the bus inversion scheme [4], the incremental signaling scheme [3][5], and balanced coding schemes [6]-[8].
These signaling schemes still have a limitation in terms of noise reduction due to the PDN. For off-chip signaling, charging and discharging signal transmission lines induce return currents on the power and ground planes [9], as shown in Figure 2. The return current always follows the path of least impedance on the reference plane closest to the signal transmission line. The return current path plays a critical role in maintaining the signal integrity of the bits propagating on the signal transmission lines. The problem is that the disruption between the power and ground planes induces return path discontinuities (RPDs), which create displacement current sources between the power and ground planes. The current sources excite the plane cavity and cause voltage fluctuations. These fluctuations are proportional to the plane impedance since the current is drawn through the PDN by the driver. Therefore, low PDN impedance is required for power supply noise reduction, which can only be achieved by increasing the capacitance requirements through new technologies such as thin dielectrics, embedded capacitance, high frequency decoupling capacitors, and others [10]-[14]. In addition, use of power and ground planes as part of the target impedance concept is making the packages and boards more complex, leading to the need for sophisticated design tools and methodologies. So, the question to be asked is: Are power (voltage) planes necessary for high speed signaling? Instead, can an alternate method be developed based on a high target impedance concept that provides a more stable signaling environment with less complexity in terms of the design tools and methodologies required? This is accomplished in this paper by using power transmission lines.

 


 

 

Read the rest of the article here.

 

_____________________________________________________

This is a guest post by Suzanne L. Huh and Madhavan Swaminathan, the Founder of E-System Design.


Beyond RTL part 2: Domain-Specific Languages

This is the second part of my “Beyond RTL” series, where I examine alternatives to Register Transfer Level (RTL). The first part talked mostly about High-Level Synthesis, its genesis, and the state of the art of free and commercial tools that transform C/C++/SystemC to RTL, and highlighted the fundamental limitations these tools have when it comes to transforming sequential software to hardware. In this post I present the other type of alternative to RTL: Domain-Specific Languages (DSLs). Domain-Specific Languages have the advantage of being better suited to a particular task than general-purpose languages. We’ll look first at internal DSLs (DSLs embedded in a host general-purpose language), then at standalone DSLs (languages in their own right that can be used for hardware design, though not exclusively).

 

In the case of hardware design, internal DSLs are not so much an alternative to RTL as an alternative to the languages used to write RTL hardware, namely Verilog and VHDL. Examples include MyHDL (based on Python, see also Migen) and Chisel (based on Scala). These DSLs add hardware-specific syntax and semantics, and restrict the host language constructs that can be used for synthesis to a well-behaved subset, while allowing the rest in testbenches (the hardware-design term for tests). Although SystemC is generally presented as a C++ framework for system modeling, it can also be considered a DSL given the numerous changes/restrictions in syntax and semantics compared to vanilla C++. Hardware DSLs have the advantage of offering modern alternatives to the ageing Verilog and VHDL languages, sometimes with significant improvements (the best example in my opinion is how MyHDL models “current/future value” compared to VHDL/Verilog signals), while benefiting from a full-fledged language and its development tools.
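To make the “current/future value” point concrete, here is a minimal MyHDL-style counter (my own sketch, assuming a recent MyHDL release, not an example from the post): reading a signal yields its current value, while assigning to sig.next schedules the value the signal will take at the next clock edge, much like a nonblocking assignment in Verilog.

```python
# Minimal MyHDL sketch (assumed recent MyHDL API) of an 8-bit counter.
from myhdl import block, always_seq, Signal, ResetSignal, intbv

@block
def counter(clk, rst, count):
    @always_seq(clk.posedge, reset=rst)
    def logic():
        # 'count' is read as its current value; 'count.next' holds the value
        # the signal will take after the clock edge (wrapping at 8 bits).
        count.next = (count + 1) % 256
    return logic

# Hypothetical instantiation:
clk = Signal(bool(0))
rst = ResetSignal(0, active=1, isasync=False)
count = Signal(intbv(0)[8:])
inst = counter(clk, rst, count)
```

The same Python code base can drive both simulation and, for the synthesizable subset, conversion to Verilog/VHDL, which is exactly the appeal, and the pitfall, discussed next.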

 

The main downside is that we have once again the same dilemma as in Verilog and VHDL: what constructs of the language can I use in simulation, and conversely, how do I write code that is suitable for synthesis? In the examples I mention (and probably in other similar embedded DSLs), the host language is often large or complex. If you think for a moment about the extent of what you can write in C++ (sub-classes with virtual destructors and friends, anyone? ^^), how do you limit yourself to the comparatively tiny subset that SystemC forms? An IDE (Integrated Development Environment) helps a lot when writing code (when designing hardware too, see the excellent post Philippe Faes of Sigasi wrote about this), but to be helpful in this case, the IDE must be aware of the DSL’s syntax and semantics, which is generally not the case.

 

The second family of DSLs that can be used for hardware design is standalone DSLs. As with internal DSLs, there are many standalone DSLs; I’ll give three examples related to hardware design, but feel free to suggest additional languages if you’d like to see them included in this post!

 

  1. Let’s start with what is probably the most well-known DSL in the Electronic System Level market, Bluespec SystemVerilog (BSV). BSV is a powerful language that is architecturally transparent, which means that it is the role of the designer to express the architecture. BSV combines the most powerful aspects of Haskell (type inference, parameterization, polymorphism) with a SystemVerilog-like syntax (I highly recommend reading the first chapter of BSV by Example to learn about the key ideas behind BSV).
  2. Another interesting language is a DSL for dataflow programming, which has been standardized by MPEG as RVC-CAL (based on CAL). This language, which originated in academia, allows the description of architecture-agnostic dataflow programs composed of actors that send each other tokens via FIFOs. It is possible to generate efficient hardware implementations from RVC-CAL programs, depending on how the program is written.
  3. Finally, we have created our own standalone DSL for hardware design here at Synflow, which we have named Cx. Cx is a simple C-like language dedicated to hardware design. We created Cx based on our experience with compiler development and hardware design, and because we were not satisfied with existing languages: we wanted a language that could be mapped simply to efficient, clean hardware, and that looked familiar, so it would be easy to learn and to use.

_________________________________________________

This is a guest post by Synflow, an innovative EDA company based in Europe.


Low-Power Design Strategies for Connected Devices

This is a guest post from Craig MacKenzie, Staff Design Engineer at Nuvation Engineering, provider of complex electronic product development and design services.

_____________________________________________

With the rapid growth of mobile technology and battery-powered devices, power consumption has become an increasingly important metric for electronic products. Designing low-power systems and devices is particularly difficult due to the complex and numerous hardware and software interactions that need to be considered. When designing a system that requires years of battery life, a good power management plan is necessary. Here are some strategic tips that Nuvation engineers have implemented to successfully design many low-power devices.

 

Power States

The first step in any low power design is to establish the various power states in which the system will operate. In the case of a single MCU design, these could simply map to the power modes listed in the device datasheet. In a system with multiple processors or non-trivial external hardware blocks (radio sub-systems, analog interfaces, etc.) the power states could be a more complicated arrangement of power vs. performance trade-offs at various clock speeds and supply voltages.
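To give a feel for what such a set of states might look like, here is a hypothetical sketch for a battery-powered sensor node, together with a back-of-the-envelope battery-life estimate (the state names, current draws, duty cycles and battery capacity below are invented for illustration and are not Nuvation figures):

```python
# Hypothetical power states for a battery-powered sensor node, plus a
# back-of-the-envelope battery-life estimate. All numbers are illustrative.

POWER_STATES_MA = {        # average current draw per state, in milliamps
    "ACTIVE":     12.0,    # CPU at full clock, radio on
    "IDLE":        1.5,    # CPU clock reduced, radio off
    "DEEP_SLEEP":  0.005,  # RTC and RAM retention only
}

DUTY_CYCLE = {             # fraction of time spent in each state
    "ACTIVE":     0.01,
    "IDLE":       0.04,
    "DEEP_SLEEP": 0.95,
}

BATTERY_MAH = 2400         # assumed battery capacity (e.g. two AA cells)

avg_current_ma = sum(POWER_STATES_MA[s] * DUTY_CYCLE[s] for s in POWER_STATES_MA)
print(f"Average current: {avg_current_ma:.3f} mA")
print(f"Estimated battery life: {BATTERY_MAH / avg_current_ma / 24:.0f} days")
```

A table like this makes it obvious which state dominates the average current, and therefore where optimization effort will pay off.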


 

Once the system’s power states have been defined it is highly recommended that they be implemented on either prototype hardware or a suitably equipped/modified evaluation platform to verify that the required power consumption is achievable. Depending on system complexity, this could involve anything from updating a few register fields to creating representative CPU processing loads.

 

Power Transitions

The next step is to define the events that cause transitions between power levels on the device. Some typical examples would be an RTC alarm, activity on a communications or user interface, or a measurement threshold being exceeded on a sensor interface.

Once again, it’s a good idea to test these events in isolation to ensure that they function as expected. For example: verify that a system’s RTC alarm correctly wakes the CPU when its clock is gated off, or ensure that any transition latency requirements can be achieved (this is particularly relevant to communications and user interfaces). Issues found at this stage, such as timing and stability problems, are likely to propagate back up to the system design level.

 

Power Transition Management

Next, think about how transitions between power states will be managed. In a single-threaded application this could consist of a set of stand-alone function calls made in appropriate places. In a multi-threaded environment with many independent tasks, some mechanism for arbitrating power requirements between tasks needs to be implemented. Often it is easiest to implement transition management as a logic-only process at an “everything on” power state before introducing hardware state transitions, since normal debug infrastructure may be unavailable due to gated clocks and disabled hardware.
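One possible arbitration mechanism, sketched below as a plain-Python model rather than real firmware (the state names and the API are my own assumptions), lets each task register the deepest power state it can tolerate; the manager then enters the shallowest of those requests:

```python
# Sketch of a power-state arbiter: each task registers the deepest state it
# can tolerate, and the system enters the shallowest (most awake) of them.
# Modeled in plain Python for illustration; real firmware would hook the
# chosen state to register writes, clock gating, etc. on the target MCU.

from enum import IntEnum

class PowerState(IntEnum):   # lower value = more awake
    ACTIVE = 0
    IDLE = 1
    DEEP_SLEEP = 2

class PowerManager:
    def __init__(self):
        self._requests = {}  # task name -> deepest state that task allows

    def request(self, task, deepest_allowed):
        self._requests[task] = deepest_allowed

    def release(self, task):
        self._requests.pop(task, None)

    def allowed_state(self):
        # Any task that needs to stay awake wins; with no requests, sleep deeply.
        if not self._requests:
            return PowerState.DEEP_SLEEP
        return min(self._requests.values())

pm = PowerManager()
pm.request("uart_rx", PowerState.IDLE)        # UART needs its clock running
pm.request("logger",  PowerState.DEEP_SLEEP)  # logger can sleep between samples
assert pm.allowed_state() == PowerState.IDLE
```

Exercising this logic with the hardware transitions stubbed out, as suggested above, makes it much easier to debug the arbitration before clocks start getting gated for real.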

 

System Power Verification

Once all the building blocks are in place, it’s necessary to verify that the power consumption meets the overall requirements of the design. Making use of automated unit tests (a subject for another article) will make life significantly easier, as they permit isolating problem event sequences and transitions. Use of a measurement instrument with a remote interface is also recommended, as it can be integrated with the automated tests to identify power issues as they happen over inconvenient time scales (hours, days, etc.).
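As one example of how a remote-controllable instrument might be tied into such tests, the sketch below polls a bench DMM over PyVISA during a test step; the VISA resource string, the SCPI command and the current budget are placeholders that would change with the actual instrument and the product’s requirements:

```python
# Sketch: sampling a bench DMM over VISA while an automated test step runs.
# Resource string, SCPI command and pass/fail threshold are assumptions.

import time
import pyvisa

LIMIT_MA = 0.25            # assumed sleep-current budget for this test step

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("USB0::0x2A8D::0x1301::MY00000000::INSTR")  # placeholder address

def sample_sleep_current(duration_s=10.0, period_s=0.5):
    """Poll the DMM while the device under test is expected to be asleep."""
    readings_ma = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        amps = float(dmm.query("MEAS:CURR:DC?"))  # SCPI query; instrument-specific
        readings_ma.append(amps * 1000.0)
        time.sleep(period_s)
    return readings_ma

readings = sample_sleep_current()
assert max(readings) < LIMIT_MA, f"Sleep current exceeded budget: {max(readings):.3f} mA"
```

Logging these samples alongside the test’s event timeline is what makes it possible to catch the intermittent wake-ups that only show up over hours or days.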

Following these guidelines will help to maximize battery life for low-power devices. For prototypes where a fast time to market is critical, the verification steps along the way reduce the risk of time-consuming and costly board re-spins.

Are there any strategies you use to improve battery life? Leave us a comment!
_______________________________________________________________________
This is a guest post from Craig MacKenzie, Staff Design Engineer at Nuvation Engineering, a U.S. and Canadian-based provider of complex electronic product development and design services.


Beyond RTL part 1: High-Level Synthesis

Let’s say for a minute that you believe that it is finally time to drop RTL (maybe it was my previous post that convinced you). What can I say? I’m glad! You now have to pick among several competing technologies, each with its pros and cons, each of course claiming to be the best, and not one compatible with the others.

 

So let’s begin with what is probably the most well-known alternative to RTL: High-Level Synthesis (HLS). As you can see, Wikipedia lists many tools that implement one form of HLS or another. So what is the promise of HLS? To take a software description (code) and turn it into optimized hardware. This paper discusses the history of HLS, tracing its origins back to the 1960s; commercial products were already available in the ’90s (the oldest that I know of is NEC’s CyberWorkbench, which was already around in 1988). Did HLS succeed? That is, if you want to create an H.264 decoding chip, can you take a piece of software (for instance x264, an optimized software implementation of H.264) and turn it into a chip? Well, no. Come on, did you really expect another answer? :-)

 

Think about it: a digital circuit is composed of a huge number of components, many of which are active simultaneously, whereas software is (still) mostly a huge sequence of instructions issued sequentially, which keep reading/writing memory (be it with pointers or references) to do their computation (remember the von Neumann architecture, I talked about it before). Keep in mind that automatic parallelization won’t give any satisfactory results in the presence of pointers, because Precise Flow-Insensitive May-Alias Analysis is NP-Hard. This means it is very difficult to know in advance whether two pointers may point to the same memory address (known as aliasing), even if they are in the same procedure. Oh, and if you have dynamic memory allocation (which is present in virtually every piece of software), then you are confronted with The Undecidability of Aliasing… The conclusion is that it is not possible to extract parallelism from arbitrary sequential software. Not now, not ever.

 

You may wonder, then: what can HLS do? Well, the thing is that it is possible to extract some parallelism from a limited amount of sequential software written in a certain way (i.e. with some restrictions, and sometimes with help from the user who wrote it). With that in mind, let’s examine what some of the tools available out there can do (I detail other alternatives to RTL in a future post):

 

  • some free HLS tools: C to Verilog, GAUT, LegUp, and Shang. GAUT and C to Verilog only seem to work at the function level. LegUp can translate a fair amount of C, as long as it respects the CHStone coding style, which includes inter-procedural calls and pointers (although I suspect that if you want efficient hardware, pointers must alias to a well-known memory location). However, the objective of LegUp is to make FPGAs easier to use by software engineers, and to execute a particular C program faster in hardware than in software (which they do very well), rather than to create really efficient hardware (for instance, this paper shows that their implementation of AES requires 14,000 clock cycles to encode+decode a block, whereas an efficient pipelined hardware implementation can encode/decode one entire block at each clock cycle). Shang claims to outperform LegUp by “30%” (in frequency? clock cycles? throughput?).
  • most commercial HLS tools (Catapult, C-to-Silicon, CyberWorkBench, Cynthesizer, ImpulseC, Synphony, Vivado), on the other hand, focus on creating efficient hardware (their objective is to generate hardware whose performance rivals that of hand-written designs), generally from a subset of C/C++ with (proprietary) extensions for hardware and for task-level parallelism, or from a SystemC design. Almost all of the tools support, and some favor, SystemC, as it is a standard, although it is not really software anymore (even if SystemC is a C++ framework, low-level SystemC code looks more like Verilog or VHDL than C++ in my opinion). As for the subset of C/C++ that these tools accept, it varies, although it generally forbids arbitrary pointers, recursive functions, etc. What about the performance? Generally, expect great performance for filters (their favorite example is still the FIR filter) and similar regular, easy-to-pipeline algorithms. For whole designs, you will have to try these tools yourself :-) If you do, I would be glad to know the results you obtain – provided the tool vendor allows you to disclose them (and often they do not).

This is a guest post by Synflow, an innovative EDA company based in Europe.


2014 Top MEMS Players Ranking: Rise of the first MEMS titan

With an impressive 20% growth in MEMS revenue compared to 2013, and sales revenues of more than $1.2B, Robert Bosch GmbH is the clear #1. From its yearly analysis of the “TOP 100 MEMS Players”, Yole Développement has released the “2014 TOP 20 MEMS Players Ranking”. This ranking shows the clear emergence of what could be a future “MEMS titan”: Robert Bosch (Bosch). Driven by sales of MEMS for smartphones – including pressure sensors – Bosch’s MEMS revenue increased by 20% in 2014, totaling $1.2B. The gap between Bosch and STMicroelectronics now stands at more than $400M.

 

“The top five remains unchanged from 2013, but Bosch now accounts for one-third of the $3.8B MEMS revenue shared by the top five MEMS companies. Together, these five companies account for around one-third of the total MEMS business”, details Jean-Christophe Eloy, President & CEO, Yole Développement (Yole). “It’s also interesting to see that among the top thirty players, almost every one increased its revenue in 2014”, he adds.

In other noteworthy news, Texas Instruments’ sales saw a slight increase thanks to its DLP projection business. RF companies also enjoyed impressive growth, with a 23% increase for Avago Technologies (close to $400M) and a 141% increase for Qorvo (formerly TriQuint), to $350M.

 

Meanwhile, the inertial market keeps growing. This growth is beneficial to InvenSense, which continues its rise with a 32% increase in 2014, up to $329M revenue. Accelerometers, gyroscopes and magnetometers are not the only devices contributing to MEMS companies’ growth. Pressure sensors also made a nice contribution, especially in automotive and consumer sectors. Specifically, Freescale Semiconductor saw a 33% increase in pressure revenue, driven by the Tire Pressure Monitoring Systems (TPMS) business for automotive. On the down side, ink jet head companies still face hard times, with Hewlett-Packard (HP) and Canon both seeing revenues decrease. However, new markets are being targeted. Though thus far limited to consumer printers, MEMS technology is set to expand into the office and industrial markets as a substitute for laser printing technology (office) and inkjet piezo machining technology (for industrial & graphics).

 

“What we see is an industry that will generally evolve in four stages over the next 25 years. This is true for both CMOS Image Sensors and MEMS”, explains Dr Eric Mounier, Senior Technology & Market Analyst, MEMS Devices & Technologies at Yole. He continues: “The ‘opening stage’ generally begins when the top three companies hold no more than 10 – 30% market share. Later on, the industry enters the ‘scale stage’ through consolidation, when the top three increase their collective market share to 45%.” According to Yole, the “More than Moore” market research and strategy consulting company, the MEMS industry has now entered the “Expansion Stage”. “Key players are expanding, and we’re starting to see some companies surpassing others (i.e. Bosch’s rise to the top). If we follow this model, the next step will be the ‘Balance & Alliance’ stage, characterized by the top three holding up to 90% of market share”, comments Dr Mounier.

 

 

Yole’s analysts have separated the 10 or so MEMS titans that currently share most of the MEMS market into two categories: “Titans with Momentum” and “Struggling Titans”.

(1) In the “Titans with Momentum” category, Yole includes Bosch, InvenSense, Avago Technologies and Qorvo. Bosch’s case is particularly noteworthy, since it’s currently the only MEMS company with dual markets (automotive and consumer) and the right R&D/production infrastructure.

(2) On the “Struggling Titans” side, Yole identifies STMicroelectronics, HP, Texas Instruments, Canon, Knowles, Denso and Panasonic. These companies are currently struggling to find an efficient growth engine.

 

The figure below, entitled “2009-2014 Historical MEMS Sales for Major MEMS Companies”, summarizes the 2009 – 2014 historical sales for six major MEMS companies. Without question, both Bosch and InvenSense are growing, while others like STMicroelectronics and Knowles are experiencing a slow-down or a decline in MEMS sales. Another interesting fact about Yole’s 2014 TOP MEMS Ranking is that there are no new entrants (and thus no exits).

 

More market figures and analysis on MEMS, the Internet of Things (IoT) and wearables can be found in Yole’s 2014 IoT report (Technologies & Sensors for Internet of Things: Business & Market Trends, June 2014), and the upcoming “Sensors for Wearables and Mobile” report (Detailed description available soon on IMicronews.com, reports section). Also, Yole is currently preparing the 2015 release of its “MEMS Industry Status” (12th edition). This will be issued in April and will delve deeper into MEMS markets, strategies and players analyses.

 

_________________________________________________________________________

This is a guest post by Yole Développement, a company that provides marketing, technology and strategy consulting.

 
