Monthly Archives: March 2018

Managing IC Qualification – A Quick Guide

Many IC designers pay little attention to IC qualification and consequently suffer high costs and delays before the chip reaches high volume. Experienced IC designers keep IC quality (and production test) in mind through all phases of the design process. Today, more than ever, a re-tapeout is costly and can be very painful for fabless startups.




The required IC quality level is determined by the target market: consumer, space, automotive, etc. Each market has its own quality requirements, so it's advisable to establish the required quality level by conducting customer interviews. Some customers will have stringent quality requirements, in which case you will have to run longer and more comprehensive qualification tests.


We recommend performing a mini-qualification with your MPW prototypes (i.e. before the full-maskset tapeout) to ensure the major qualification tests will be successful. You can read about it here: How to bulletproof your ASIC design?


Some qualification tests verify the ASIC's functionality. The most common tests are:



Other qualification tests address quality and reliability. Some of the common stress tests are:



JEDEC publishes documents describing the different tests. Below is our brief description of each test.




ESD Test

The ESD test exposes the IC's I/O pins to transient high voltages to verify the ESD protection. Typical qualification levels are:

  • HBM (Human Body Model): 2 kV
  • CDM (Charged Device Model): 0.5 kV
  • MM (Machine Model): 0.2 kV
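As a rough illustration only (not part of any JEDEC procedure), the target levels above can be recorded and checked programmatically against measured withstand voltages; the measured numbers below are invented for the example.

```python
# Illustrative sketch: ESD qualification targets (volts) from the list above,
# checked against hypothetical measured withstand voltages.
targets_v = {"HBM": 2000, "CDM": 500, "MM": 200}
measured_v = {"HBM": 2400, "CDM": 550, "MM": 150}  # made-up results

for model, target in targets_v.items():
    status = "PASS" if measured_v[model] >= target else "FAIL"
    print(f"{model}: target {target} V, measured {measured_v[model]} V -> {status}")
```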


Latch-up Test

The test is a series of attempts to trigger the parasitic SCR structure within the CMOS IC while the relevant pins are monitored for overcurrent behavior.



HTOL Test

High temperature operating life (HTOL) testing stresses ICs at high temperature while they are biased, typically for 1000 hours. Spec: JESD22-A108.
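The Arrhenius model is commonly used to translate HTOL stress hours into equivalent field hours. The activation energy and temperatures below are illustrative assumptions, not values from any specific qualification plan.

```python
import math

# Arrhenius acceleration factor relating HTOL stress hours to field hours.
# Ea and the two junction temperatures are illustrative assumptions.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K
EA = 0.7                   # assumed activation energy, eV
T_USE = 55 + 273.15        # assumed field junction temperature, K
T_STRESS = 125 + 273.15    # HTOL stress temperature, K

af = math.exp((EA / K_BOLTZMANN_EV) * (1 / T_USE - 1 / T_STRESS))
print(f"Acceleration factor: {af:.1f}")
print(f"1000 stress hours ~ {1000 * af / 8760:.1f} field years")
```

With these assumed values the factor works out to roughly 78, i.e. 1000 stress hours stand in for several field years; real qualification plans pick Ea per failure mechanism.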


HTS Test

High temperature storage (HTS) testing simulates storage conditions. The test is done at 125°C to 150°C, without any bias. Spec: JESD22-A103.



THB/HAST Test

Highly accelerated temperature and humidity stress testing; the test is done at 85% relative humidity and 85°C. Specs: JESD22-A110 (biased), JESD22-A118 (unbiased).


Preconditioning Test

The preconditioning test exposes ICs to thermal conditions that simulate what the IC will experience during the PCB soldering process.


Temperature Cycle Test

Temperature cycling accelerates the IC failure rate to expose wire-bond, bumping and die-crack issues. Spec: JESD22-A104.
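Temperature-cycle stress is commonly related to field cycling with a Coffin-Manson acceleration model; the exponent and temperature swings below are illustrative assumptions only.

```python
# Coffin-Manson model relating temperature-cycle stress to field cycling.
# Exponent and temperature swings are illustrative assumptions only.
M_EXP = 2.5        # assumed fatigue exponent for solder joints
DT_STRESS = 165.0  # stress swing in C, e.g. -40 to +125 cycling
DT_FIELD = 60.0    # assumed field temperature swing in C

af = (DT_STRESS / DT_FIELD) ** M_EXP
print(f"One stress cycle ~ {af:.1f} field cycles")
```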


Next Steps

If you cannot conduct the qualification tests internally, it's better to let a single vendor manage the entire qualification flow for you. Remember that the ICs must be tested before and after the qualification stress tests; therefore, the IC qualification vendor should work closely with your test partner. The AnySilicon platform lists top IC qualification vendors; please click here to choose a vendor.




Mindtree announces BQB qualification of its Bluetooth Mesh v1.0 Software Stack and EtherMind Bluetooth v5.0 Software Stack & Profiles

Mindtree Ltd. has announced the BQB (Bluetooth Qualification Body) qualification of its Bluetooth Mesh v1.0 Software Stack along with its Bluetooth v5.0 Software Stack & Profiles (Declaration ID #D038060). These IP cores have already been licensed to major semiconductor companies and will be showcased at the upcoming Bluetooth Asia event (May 30-31) in Shenzhen.

Bluetooth Mesh v1.0 Software

Bluetooth Mesh is set to revolutionize the connected home, industrial and lighting markets. Mindtree's Bluetooth Mesh Software stack is a complete implementation combining the core mesh profiles and mesh models with all the mandatory and optional features. Mindtree's Bluetooth Mesh Software has already been licensed to major semiconductor companies.

The Bluetooth Mesh software stack offers a simple set of interfaces that helps developers upgrade their Bluetooth low energy-enabled embedded device into a mesh-ready node. Mindtree’s Mesh Software runs on any Bluetooth low energy stack, interfacing into the standard GAP and GATT interfaces.

EtherMind Bluetooth v5.0 Single Mode Stack & Profiles Software

The Bluetooth low energy v5.0 specification focuses on increased data throughput and better power efficiency. Mindtree's BQB-certified Single Mode Stack & Profiles supports all mandatory and optional single-mode profiles and roles. The software is production-proven and has been licensed to tier-1 semiconductor manufacturers and OEMs globally.

Mindtree's EtherMind Bluetooth v5.0 Software Stack & Profiles is optimized to run equally efficiently on embedded platforms and host CPUs. It requires significantly less (2x-10x) memory than competitive offerings. The Software Stack & Profiles are designed for portability across OSs and MCU architectures and are interoperable with any BQB-compliant Bluetooth low energy chipset.


“Our recent BQB certification of the Bluetooth Mesh v1.0 and the v5.0 Protocol Stack & Profiles maintains Mindtree’s continued leadership in the Bluetooth IP market and reaffirms our commitment to reduce our customers’ time-to-market with a clear roadmap for adoption of Bluetooth IP,” said Rajanikant Mohan, Senior Director, Short Range Wireless Business, Mindtree Ltd.

Nigel Dixon, CEO of T2M, remarked “We are very proud to have been the global business development partner of Mindtree for over 5 years. We are delighted to strengthen our Bluetooth IP portfolio with Mindtree’s BQB qualified MESH and v5.0 Protocol Stack Software and Profiles. This, in combination with our clients’ leadership RF IPs, offers our customers a complete certified system solution giving licensees a significant competitive edge in a very bloody market. Mindtree’s extensive commitment to the development of leadership Bluetooth technology assures our customers of superior technology implementation along with a solid roadmap support for the future evolution of this exciting standard”.

About Mindtree
Mindtree is a leading IP and Technology Services provider with 43 offices across 17 countries. It is one of the fastest growing technology firms globally with more than 200 clients.
Mindtree has been a leading technology developer and IP licensor in Bluetooth for more than 15 years. Mindtree has consistently certified generations of its silicon and software IP, from v1.1 to v5.0, with the Bluetooth SIG. Mindtree’s Bluetooth technology is the most widely used and real-world-validated third-party Bluetooth technology, with 4 out of the top 5 microcontroller companies, module makers, OEMs and ODMs having licensed Mindtree’s Bluetooth IP. Mindtree is acknowledged by the Bluetooth SIG as one of the co-creators of the Bluetooth 5.0, 4.2, 4.1 and 4.0 specifications.

About T2M
T2M is the world’s largest independent global semiconductor technology provider, supplying complex IP, software, KGD and disruptive technologies enabling accelerated production of IoT, wireless, consumer and automotive electronics devices. Located in all key tech clusters around the world, our senior management team provides local access to leadership companies and technology. For more information, please visit

The Real Benefit of using an ASIC in Your Next Project

An Application-Specific Integrated Circuit (ASIC) is an integrated circuit (IC) that is designed and customized for a particular use rather than for general-purpose applications. An example of an ASIC is a chip designed to run a high-efficiency Bitcoin miner, or a chip designed specifically for BMW's brake unit.


An ASIC is a tailor-made chip developed specifically for you, based on your specifications. Just imagine that you go to a car manufacturer and ask for a custom-designed car. Naturally, the manufacturer will ask for your dream car's specifications. Likewise, in the ASIC world, the functionality, size, power, cost and environmental conditions are all put together into an ASIC specification document.



An ASIC is typically not publicly available, i.e. other companies cannot purchase your ASIC, because it is made for you and you own the rights to it.





An ASIC is a technology, and like other technologies, with proper planning it can give your company a competitive advantage, an edge. We believe the main reason for using an ASIC is to gain a competitive advantage. Of course, it requires an upfront investment, but it can be a very rewarding one.



If you are unsure whether your company can benefit from using an ASIC, perhaps you should consider commissioning an ASIC feasibility study. In the study, an ASIC vendor evaluates the project and provides cost, schedule and risk assessments. With an ASIC feasibility study, you will have enough information to evaluate the pros and cons and to justify the ASIC project.



Competitive Advantage is the Main Benefit


If your company is looking for a significant competitive advantage, an ASIC would be the right path forward. An ASIC can make your product smaller and cheaper, lower its power consumption and improve its performance.


In some cases, there is no way to avoid designing an ASIC. If your product is a hand-held, small, battery-operated device, then chances are you will need to design your own ASIC.



But in other cases, the decision whether or not to make an ASIC is not so clear-cut.


An ASIC can integrate several standard chips into a single chip; this obviously results in a smaller PCB and reduced power consumption. An ASIC consumes less power than a collection of standard IC components due to its small physical size.


In addition to its size and power advantages, an ASIC provides IP protection: since the ASIC is designed specifically for you, it is hard for others to see what is inside the chip. Having fewer parts in your product also translates into higher reliability and simpler purchasing and operations, which means less production planning than for products with many parts.


Overall, an ASIC will help you differentiate yourself from the competition and will help you to create a higher entry barrier.



What’s Next?


The AnySilicon platform lists many ASIC design companies with domain expertise. Contact them via this page and ask for a price quote or a feasibility study.


Do you find it difficult to choose a vendor? Then — drop us an email and we will take it from there.

Addressing SRAM IP Verification Challenges

SureCore Limited is an SRAM IP company based in Sheffield, UK, developing low power memories for current and next generation silicon process technologies. Its award-winning, world-leading, low power SRAM designs are process independent and variability tolerant, making them suitable for a wide range of technology nodes. Two major product families have been announced: PowerMiserTM and EverOnTM. PowerMiserTM is a general purpose SRAM capable of delivering in excess of 50% dynamic and 20% static power savings compared to industry standard offerings. EverOnTM is a memory developed specifically for the IoT and wearable markets. It delivers near-threshold operating voltages facilitating extremely low power operation. Both product families are based on standard foundry bit cells and no process modifications are needed to deliver these capabilities. Key to achieving market leading low power performance is a comprehensive verification strategy. In this paper, co-written with our partners at Solido Design Automation, the key elements of this strategy are explored.


Verification is an integral part of any integrated circuit development process. The verification process must establish that the design meets its specified yield and performance criteria over the full range of operating conditions before tape-out sign-off. The process generally involves taking abstractions of the design in appropriate forms, for example post-layout extracted netlists, and running simulations to validate the design performance. The verification process must address many different aspects of yield and performance, so several different types of design abstraction and simulation tooling may be required to complete the process. This is particularly true in the case of SRAM.


Verification of a complete compiler instance space presents several unique challenges. These include, but are not restricted to: (1) the need to maximise the coverage over the entire instance space of the compiler range, and (2) the ability to validate design performance and parametric yield sufficiently over the PVT range. It is essential therefore that SRAM verification is based on a variation-aware strategy.


These challenges also have to be addressed within a viable design timescale. To meet this goal, the overall verification task is split into several unique sub-tasks. These include:


  • Behavioural model validation


  • Full operating mode functional verification


  • Top level variation aware parametric functionality


  • Cell level parametric yield validation to 6σ


Each of these tasks involves different levels of design abstraction and employs different simulation strategies and toolsets.


These challenges are made particularly onerous when verifying near-threshold SRAM solutions, such as the sureCore EverOnTM family. In order to realise significant power savings at the system level, this SRAM family operates across a very wide operating voltage range, from nominal supply voltage down to near-threshold operation. For example, in a commercially available 40ULP process node, the EverOnTM SRAM supports supply voltages from 1.21V down to 0.6V across process corners and temperature (from -40°C to +125°C). The memory is built around the foundry’s high-density low-leakage bitcell. Simulations have demonstrated that a combination of assist features achieves better than 6σ parametric cell yield in the worst PVT corner. Near-threshold designs that provide such operating ranges demand an extensive approach to verification that relies on a range of validation strategies. These include focussed parametric tests run with Monte Carlo (MC) analysis across the PVT range and high sigma analysis, using Solido Variation Designer.


Within the context of SRAM development, the verification process is complemented by the characterisation process [1], which extracts data for a particular memory in order to facilitate SoC integration flows.


Variation- and verification-aware design


SureCore develops memory compilers that push the boundaries of low power performance. Obtaining high yield is essential, and achieving this while pushing such boundaries is only possible if variation considerations are the first step in the design, not an afterthought. Figure 1 shows a simplified depiction of sureCore's design and verification approach.


One of the first steps in the design process is the high-sigma analysis of the cell operation and of the critical bit slice. For this, Solido's High-Sigma Monte Carlo (HSMC) or Hierarchical Monte Carlo (HMC) tool is used. This involves dedicated test benches to test cell read stability, writeability and read correctness (including cell, bit line and sense amplifier), as well as the offset of the sense amplifiers separately. In designs with hierarchical bit lines, additional test benches are required that include the global sense amplifiers and local write amplifiers. For cell-level analysis, HSMC is the right tool, while HMC allows statistical correctness when considering slices where some instances occur more often than others, such as cells and sense amplifiers. In this first phase, ideal excitations are used for the control signals. Later in the design process, these simulations are repeated with the control signals as generated by the actual timing circuit. Although the tools provide a classifier approach that allows the use of non-smooth metrics such as binary outcomes, it is preferable to use well-established metrics such as dynamic SNMread and WTP at this stage for the additional insights they provide. As these metrics are smooth and well understood, extrapolation of the distributions from a normal MC run to the tails might seem attractive; this, however, does not give sufficiently accurate estimates of the actual tail probabilities. When other metrics are used, such as the read current or vddmin, extrapolation can lead to drastically inaccurate results. As such, an HSMC approach is mandatory.


Memory compiler verification is a considerable undertaking, so the memory design should aim at making verification as easy as is feasible given the other constraints – verification-aware design. This includes avoiding breakpoints in the instance space and limiting access pattern dependencies. Another crucial aspect is the development of effective slicing and reduction options which provide crucial simulation speed-up. Together, they bring the simulation time for a large memory instance down from 2 hours to 2.5 minutes and drastically reduce server memory load. These algorithms are implemented in the back-end compiler [2]. By co-developing the memory design and the compiler, these simulation runtime improvements are available not only for verification and characterisation tasks, but also to the design team.





Figure 1. SureCore’s design and verification approach





Behavioural Validation


The sureCore memory compiler produces several views for system-level validation and integration. Amongst these is a behavioural, back-annotatable Verilog model for RTL and gate level simulation. It is imperative that this model accurately reflects the behaviour of the physical design. The sureCore memory compiler comprises two parts: (1) the Front-End Compiler (FEC), which creates views for the 'front-end' of the design cycle (such as the behavioural Verilog model), and (2) the Back-End Compiler (BEC), which creates all of the physical design views for final integration (GDSII/CDL). Functional accuracy of the Verilog model is validated by using the FEC to generate the Verilog model and the BEC to generate an equivalent Spice netlist. Both views are tested against a set of common tests and expected responses derived from the test stimuli, as shown in Figure 2. The suite of test sequences is designed to cover the full range of operating configurations.





Figure 2: Verilog vs Spice Verification Flow


Variation-aware Full-Memory Parametric Verification


The sureCore verification flow includes a range of targeted parametric tests. In addition to validating basic functional write-read operations, these tests also validate parametric performance over the full range of specified PVT corner points. In the case of the sureCore EverOnTM family implemented on a 40ULP process node, these corners cover an operating voltage range from 0.6V to 1.21V and a temperature range from -40˚C to 125˚C, in addition to all the process corners.


The tests are run using Monte Carlo simulations that are executed at the top level, full memory instance view using sliced and reduced netlists. The netlist slicing and reduction algorithms are separately validated for accuracy. The verification checks are structured to maximise test coverage across the compiler instance space.


Figure 3 shows a simplified depiction of the scripted parametric verification flow. It works by analysing the saved waveform databases containing all signals at the full memory level from every Monte Carlo run performed on every selected PVT and instance space corner. Analysing such complete waveform databases and comparing behaviour across the different MC and corner runs allows a wealth of information to be extracted regarding the parametric health of the design, leading to the establishment of confidence in projected performance and yield capabilities.





Figure 3. Parametric verification and yield analysis.

The parametric checks analysed on every internal node include measurements of signal transition times, pulse widths, signal levels and signal behaviour consistency. The capabilities of the automatic checks are further enhanced by sureCore's in-house Reconvergent Path Analysis tool, described in the next section. This tool determines all gates in the design with multiple active inputs, and checks the relative order of these events. In addition to these generic checks, a set of targeted, product-family-specific parametric tests are included in the standard flow. These product specific tests will differ between the sureCore EverOnTM and PowerMiserTM families, for example, and will include checks on identified critical parametrics such as the measured internal bit line voltages at the associated sampling trigger point (Figure 4).


Information about each measured parametric test from every Monte Carlo batch run on each PVT/instance corner is collated into a summary report for ease of interpretation. The summary collates the maximum and minimum bounds observed on each parameter, measured against a specified test limit. The summary log is supported by the generation of a complete results database that allows examination of distributions and statistical analysis to be carried out where further investigation may be required.




Figure 4: Example parametric distribution, one of many captured by the automatic parametric health checks. Even in the worst PVT corner (0.6V, SSG, -40˚C), sufficient global bit line signal is available.




Reconvergent Path Analysis


To further strengthen the verification effort, sureCore developed a Reconvergent Path Analysis tool. This tool extracts all gates and their connectivity from the DSPF netlist. Only a very limited amount of configuration has to be provided to properly deal with virtual supplies, pass gate logic and special constructs such as the local bit lines. This information is then combined with the simulation waveform database. One way to use this is to visualise the activity in the memory, as shown in Figure 5. The triangles indicate rising and falling edges at the output of a gate; the lines between triangles indicate that one output signal is the input of another gate. Some special events are also highlighted. This interactive graph provides a wealth of information to the memory designer.


The same gate information can be used to extend the automatic verification flow. For gates that have multiple active inputs, the input events should always arrive in the same order for all MC runs. For example, for a WL driver, the output of the predecoders should be ready before the timed activation signal arrives, otherwise timing control is lost. When this happens in the tail of the distribution, decoder delay variation causes word line pulse shrinking, which creates an unexpected heavy tail towards short word line pulse widths. If no explicit check on the order of these signals is in place, it would be very easy to overlook this problem when running only a few thousand MC simulations, since the issue doesn't immediately manifest in the WL pulse width, let alone in the behaviour at the IO ports. Reconvergent Path Analysis finds all gates with multiple active inputs and checks the distribution of the relative timing between the signals. If input order violations are likely to happen within the yield target, Reconvergent Path Analysis can flag them, even if the actual situation did not occur in the performed MC simulations.
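The order check described above can be sketched numerically: from MC samples of two input arrival times at a gate, fit the timing margin and express the distance to an order violation in sigma. The data and the Gaussian margin assumption here are purely illustrative (as noted elsewhere in this paper, near-threshold delay distributions are strongly non-Gaussian, so a real flow would use high-sigma analysis rather than this fit).

```python
from statistics import NormalDist, mean, stdev

# Synthetic MC samples (ns) of two input arrival times at one gate: the
# predecoder output must settle before the timed activation signal fires.
predecode_ns = [1.00, 1.05, 0.98, 1.02, 1.07, 0.99, 1.03, 1.01]
trigger_ns = [1.40, 1.38, 1.45, 1.41, 1.39, 1.44, 1.42, 1.43]

margin = [t - p for p, t in zip(predecode_ns, trigger_ns)]
mu, sd = mean(margin), stdev(margin)
dist_sigma = mu / sd  # sigma distance to an order violation (Gaussian fit)
p_violation = 1 - NormalDist().cdf(dist_sigma)
print(f"margin {mu:.3f} ns, violation at {dist_sigma:.1f} sigma "
      f"(p ~ {p_violation:.1e})")
```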







Figure 5: Visualisation of the activity inside the memory (read at reduced voltage with several assists enabled). The same infor-mation is used in Reconvergent Path Analysis.


Variation-aware Yield Analysis Validation


As chips become more complex, the chance of failure also increases, creating difficulty in measuring the effects of variation on designs quickly and accurately. Often, extra margin is added to compensate for this uncertainty, sacrificing power, performance, and area. Two available tools for SRAM yield analysis verification and validation are Solido Design Automation's High-Sigma Monte Carlo (HSMC) and Hierarchical Monte Carlo (HMC). Both of these variation-aware techniques meet the requirements for fast, accurate, scalable, and verifiable techniques for reducing margins in near-threshold designs. These tools are an intrinsic part of the sureCore verification process.


Solido's HSMC approach [3] produces an accurate high-sigma (greater than 3σ) analysis of a distribution by optimizing the statistical sampling, reducing the number of SPICE simulations required to accurately realize a particular yield assessment. The HSMC approach prioritizes the cases that are most likely to fail, focusing on the worst-case scenarios, thereby streamlining the number of SPICE simulations required. This technique targets analysis on the extreme tail of a distribution, providing a lean process that uses fewer resources and simulations to analyse the cases where verifiable analysis is most needed. Instead of running all simulations, HSMC provides accurate information in orders of magnitude fewer simulations, reducing over- or under-design in near-threshold situations.


HSMC can provide accurate information about the behaviour of a design at the extreme tail of a distribution, making it an ideal tool for fast and accurate high-sigma Monte Carlo analysis. In bitcell analysis, for example, HSMC is typically able to find the first 100 failures within the first 5000 simulated samples. In traditional Monte Carlo analysis, finding the same number of failures would typically require up to 1.5 million samples, often without finding a single failure in the first 5000 samples [4]. HSMC thus accelerates the design loop by reducing potential design iterations and the need for over-margining in worst-case situations, which is of crucial importance for near-threshold designs. Similar behaviour is observed in sense amp power consumption, but there all 100 failures can typically be found within the first 1000 Monte Carlo samples.
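The sample counts quoted above follow directly from the tail probability: plain Monte Carlo needs, on average, (failures wanted) / (failure probability) samples. A quick sketch for a Gaussian metric at various one-sided sigma levels:

```python
from statistics import NormalDist

nd = NormalDist()
failures_wanted = 100

# Expected plain-MC sample count to observe `failures_wanted` failures when
# the fail region is the one-sided tail beyond each sigma level.
for sigma in (3, 4, 5, 6):
    p_fail = 1 - nd.cdf(sigma)
    print(f"{sigma} sigma: ~{failures_wanted / p_fail:,.0f} samples")
```

At 6σ this exceeds 10^11 samples, which is why sampling-focused approaches such as HSMC are needed for high-sigma yield targets.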


As an extension of HSMC, Solido Hierarchical Monte Carlo (HMC) provides variation-aware statistical verification on critical paths, providing a lean process for fast, scalable, verifiable, and accurate full memory Monte Carlo analysis. This is especially important when determining yield for the entire chip, including control logic, sense amps, and bit cells, where a simulation for a single instance can be time- and resource-intensive.


For example, to achieve a desired overall yield of 3σ on a typical memory chip, the required yields at the control logic, sense amp, and bitcell level are 4.25σ, 5.1σ, and 5.95σ respectively, resulting in up to billions of Monte Carlo simulations to achieve full coverage (Table 1). Current techniques to ensure full design coverage include running all components to 6σ, running local variation at FF and SS corners, and combining the required yield from each sub-block assuming that all worst cases occur simultaneously. Each of these strategies results in over-design, and is very complex to implement.



Table 1. To achieve a desired yield of 99.865% (3σ, or 1 failure per 741) on a typical memory chip, the required yield of each individual element ranges from 4.25σ to 5.95σ, requiring billions of Monte Carlo simulations for full chip coverage.


Component       # of Replications             # of Monte Carlo Simulations   Required Yield
Control logic   128 (per chip)                1.81 million                   4.25σ
Sense amp       64 × 128 ≈ 8,000              80.7 million                   5.1σ
Bitcell         128 × 64 × 128 ≈ 1 million    7.56 billion                   5.95σ
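The sigma targets in Table 1 can be reproduced under one simple reading: each component class must individually meet the full one-sided 3σ chip failure budget spread across its replication count. A sketch using the standard normal quantile (this allocation is an assumption made for the illustration, not a statement of Solido's exact method):

```python
from statistics import NormalDist

nd = NormalDist()
p_chip = 1 - nd.cdf(3.0)  # one-sided 3-sigma failure budget, ~1.35e-3

components = [("Control logic", 128),
              ("Sense amp", 64 * 128),
              ("Bitcell", 128 * 64 * 128)]

for name, reps in components:
    p_inst = p_chip / reps          # failure budget per replicated instance
    sigma = nd.inv_cdf(1 - p_inst)  # required per-instance yield, in sigma
    print(f"{name}: {reps} replications -> {sigma:.2f} sigma")
```

This reproduces the 4.25σ / 5.1σ / 5.95σ column to within rounding.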


Solido HMC provides accurate statistical reconstruction of the entire on-chip memory structure by building a statistical hierarchical reconstruction. It applies a similar sampling approach to HSMC, but carries out multiple parallel high-sigma analyses across each memory component (control logic, sense amp, bitcell) to meet the desired chip yield. This fast, verifiable technique optimizes chip yield and reduces over-design while still maintaining full variation coverage with Monte Carlo accuracy.



Near-threshold SRAM Verification


Near-threshold designs such as the sureCore EverOnTM family demand an especially rigorous approach to verification. When operating at 0.6V in a 40ULP node (near-threshold), the delay of a regular logic gate can increase by more than a factor of 10 due to mismatch. As the delay dependency on ∆VT becomes exponential when weak devices enter sub-threshold, the distribution of delay is strongly non-Gaussian, so extrapolations should be treated with extreme caution. Even when considering two paths consisting of larger devices, or of a long chain of gates, the delay difference between the paths can vary dramatically between e.g. SFG and FSG corners if the paths are not identically exposed to NMOS and PMOS transistors. Incorrect internal timing sequences can be catastrophic, especially under low voltage operation. Because of this increased sensitivity to variation, care must be taken to cover an extended PVT corner range during verification, and internal glitch conditions must be adequately examined.
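The factor-of-10 delay claim can be sanity-checked with a first-order sub-threshold model, in which drive current (and hence delay) scales exponentially with the VT shift. The slope factor and thermal voltage below are generic textbook values, not 40ULP data:

```python
import math

V_T = 0.0259  # thermal voltage at ~300 K, volts
N = 1.5       # assumed sub-threshold slope factor

# First-order model: sub-threshold current drops as exp(-dVT/(N*V_T)),
# so gate delay grows roughly as exp(+dVT/(N*V_T)).
for dvt_mv in (0, 30, 60, 90):
    ratio = math.exp(dvt_mv / 1000 / (N * V_T))
    print(f"dVT = {dvt_mv:2d} mV -> delay x{ratio:.1f}")
```

Under these assumptions a VT shift of roughly 90 mV, well within reach of mismatch in small devices, is already enough for a 10x delay increase.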


Near-threshold operation poses a challenge for bitcell operation, as standard foundry bit cells will not operate at lower supply voltages. Alternative cells are much larger and have higher leakage, and are hence not an attractive option. This necessitates the use of assist circuitry to deliver bit cell functionality and performance at low voltage. This increases the number of critical events that need to be monitored and validated during the verification process, along with a need to verify the acceptability of the assist levels across the process and temperature corners, and across the instance space. To ensure high yield, sureCore performs HSMC simulations on the cell and bitslice in the worst-case PVT corners, using excitations corresponding to the worst instance size. Even in these worst conditions and process corner, sureCore's EverOnTM memories achieve an HSMC cell failure rate below 1e-9 (6σ).





Verification is the most critically important part of SRAM compiler development. Delivering low power SRAM solutions further exacerbates the challenges, as near-threshold operation compounds multiple issues and increases the effects of process variation. This paper has highlighted some of the methodologies and tools sureCore uses in order to meet these challenges in a robust, practical, and timely manner. Of course, this must be complemented by a similarly extensive silicon evaluation programme, including cross-PVT testing as well as HTOL validation to demonstrate long term reliability. By combining these two elements sureCore has demonstrated robust, world-beating low power memory for power critical applications.








Steve Williams, Principal Engineer, sureCore Limited.

Stefan Cosemans, Principal Design Engineer, sureCore Limited.

Dena Burnett, Technical Marketing Lead, Solido Design Automation.

Amit Gupta, CEO, Solido Design Automation.


sureCore: When Power is Paramount




U.S. Companies Maintain Largest Share of Fabless Company Semiconductor Sales

Research included in the March Update to the 2018 edition of IC Insights' McClean Report shows that fabless semiconductor suppliers accounted for 27% of the world's semiconductor sales in 2017, an increase from 18% ten years earlier in 2007. As the name implies, fabless semiconductor companies do not operate a semiconductor fabrication facility of their own.


Figure 1 shows the 2017 fabless company share of IC sales by company headquarters location. At 53%, U.S. companies accounted for the greatest share of fabless IC sales last year, although this share was down from 69% in 2010 (due in part to the acquisition of U.S.-based Broadcom by Singapore-based Avago). Broadcom Limited currently describes itself as a "co-headquartered" company with headquarters in San Jose, California and Singapore, but it is in the process of establishing its headquarters entirely in the U.S. Once this takes place, the U.S. share of fabless company IC sales will again be about 69%.

Figure 1

Taiwan captured a 16% share of total fabless company IC sales in 2017, about the same percentage that it held in 2010. MediaTek, Novatek, and Realtek each had more than $1.0 billion in IC sales last year, and each was ranked among the top-20 largest fabless IC companies.


China is playing a bigger role in the fabless IC market. Since 2010, the largest fabless IC market share increase has come from Chinese suppliers, which captured a 5% share in 2010 but represented 11% of total fabless IC sales in 2017. Figure 2 shows that 10 Chinese fabless companies were included in the top-50 fabless IC supplier list in 2017, compared to only one company in 2009. Unigroup was the largest Chinese fabless IC supplier (and ninth-largest global fabless supplier) in 2017 with sales of $2.1 billion. It is worth noting that when excluding the internal transfers of HiSilicon (over 90% of its sales go to its parent company Huawei), ZTE, and Datang, the Chinese share of the fabless market drops to about 6%.


Figure 2

European companies held only 2% of the fabless IC company market share in 2017, as compared to 4% in 2010. The loss of share was due to the acquisition of U.K.-based CSR, the second-largest European fabless IC supplier, by U.S.-based Qualcomm in 1Q15, and the purchase of Germany-based Lantiq, the third-largest European fabless IC supplier, by Intel in 2Q15. These acquisitions left U.K.-based Dialog ($1.4 billion in sales in 2017) and Norway-based Nordic ($236 million in sales in 2017) as the only two European-based fabless IC suppliers to make the list of top-50 fabless IC suppliers last year.


The fabless IC business model is not so prominent in Japan or in South Korea.  Megachips, which saw its 2017 sales jump by 40% to $640 million, was the largest Japan-based fabless IC supplier.  The lone South Korean company among the top-50 largest fabless suppliers was Silicon Works, which had a 15% increase in sales last year to $605 million.

Report Details: The 2018 McClean Report
Additional details on fabless IC company sales and other trends within the IC industry are provided in the March Update to The McClean Report—A Complete Analysis and Forecast of the Integrated Circuit Industry (released in January 2018). A subscription to The McClean Report includes free monthly updates from March through November (including a 250+ page Mid-Year Update) and free access to subscriber-only webinars throughout the year. An individual-user license to the 2018 edition of The McClean Report is priced at $4,290 and includes an Internet access password. A multi-user worldwide corporate license is available for $7,290.

To review additional information about IC Insights’ new and existing market research reports and services please visit our website:



More Information Contact

For more information regarding this Research Bulletin, please contact Bill McClean, President at IC Insights. Phone: +1-480-348-1133, email:

PDF Version of This Bulletin

A PDF version of this Research Bulletin can be downloaded from our website at

To Understand Analog ASICs, First Weed Out the Pretenders

A recent online blog post by John Dunn (analog guru and prolific blogger) titled “The Weed-eater Circuit” got me thinking. Basically, John shared a simple two-transistor schematic (Figure 1, shown below) that he has used as a test when he needed a way to see just how competent someone was at analog circuit analysis, somebody with whom he would soon be working. That makes sense to me. I had to pass a competency test to earn the privilege of driving a car. I had to pass several competency tests to get an engineering degree, and the same for my ham radio license. You might even have a test for new engineering hires in your company.

analog asic1

John does this because he understands exactly how complex and difficult analog design is; some say it’s far more so than digital design. John knows the pitfalls and wants to avoid them at all costs, including being very picky about who he works with.


I wish OEMs that engage ASIC companies in Analog or Mixed Signal developments would make note of this. Whether the requirement is a massive analog undertaking or only a small portion of the total chip, inexperience…okay, let’s be blunt… incompetence… in the ways of analog design is the single largest reason for failure in these designs.


In my 45-year analog career I must have heard over one hundred horror stories about failed attempts to develop analog ICs. Let’s be brutally honest; it happens. Even big analog IC companies developing standard products sometimes have problems, but they can afford to absorb the occasional failed attempt. They can either delay introduction until the chip works properly (as is often the case) or kill the part and move on to something else, and no one is the wiser. To the outside world they have a perfect track record.


However, when an Analog ASIC chip bombs, those choices are not options. There’s a customer waiting, expecting that part to be ready for production. His product is nearly ready to go to market. He has a window of opportunity. The revenue he will derive from it depends on the ASIC being available on time and functioning to the agreed specification. There’s no room for error.


Unfortunately, the semiconductor industry has its share of analog pretenders; companies who claim false skills and experience in analog design; companies who will say just about anything to gain your confidence and win your order. The really sad part is that often they don’t realize that their claims are false because they “don’t know what they don’t know”. Caveat Emptor.


Analog behavior is described by sets of mathematical equations; digital is described by Boolean relationships. There is a distinct difference in the knowledge and skills required to fully understand and be competent in analog design. Analog ASIC designers need in-depth knowledge of semiconductor manufacturing technology, semiconductor material chemistry and physics, semiconductor device physics, electrical circuit theory, control and feedback, thermodynamics, and much more, while digital circuit designers need to know about Boolean algebra, linear algebra, digital signal processing, synchronous and asynchronous systems, timing delays, etc. I’m not trying to imply digital design is easy… it’s not… but it is different… very, very different.


Stories I’ve heard recently about Analog ASIC development failures are beyond horror… they’re devastating. Customers call and beg for help because the company they chose to develop their ASIC is having a problem. Initial samples are being tested and there are problems with the analog portion. Nobody knows what’s wrong. Worse yet, it seems no one knows how to fix them, because the design team is often schooled in digital and has relied upon an analog cell library designed by a third party, so they cannot possibly understand the intricacies of its behavior in relation to the chemistry and physics of the silicon process.


To compound the problem, there are often three entities involved in creating your analog ASIC, all of whom deny culpability: the company that designed the cell library of basic standard analog functions, the IC design company that used the analog cells in the chip design, and the wafer fab that produced the silicon. Each one claims its part of the deal works fine. The problem must be one of the other guys.


Sadly, it’s an all-too-common situation. In today’s fast-paced world, there’s no time for finger pointing. Customers seeking to develop an Analog ASIC need to understand that, compared to digital circuits, designing Analog ASIC semiconductors requires more computational involvement and far greater knowledge of the actual semiconductor fabrication process to ensure that every element in the design is rock solid.


It’s more than simply cobbling together some transistors to make an amplifier or A/D converter, or picking certain functional blocks from a cell library and dropping them into a design. It requires deep, intimate knowledge of the semiconductor process upon which the chip will be produced, and an understanding of the possible interactions between that process and the circuit. These are skills that are not necessarily required of digital designers. A note on a website claiming analog or mixed-signal design skills is no guarantee that a complete skill set to get it right is actually behind it.


Hence, John’s blog should serve as a wake-up call to those seeking to have an Analog ASIC developed. Before you retain the services of an Analog ASIC semiconductor company, do your homework. Get to know the manager or team leader who will be assigned to your project. Don’t be shy. Verify his or her analog capabilities, as well as the skills of the rest of the team.


I read a website comment recently; the source will remain nameless to avoid embarrassment. “Our vision is to put analog knowledge into the hands of a larger population of engineers with an electronics background, thereby making that pool of resources more substantial and impactful. Second, by putting a ton of analog knowledge into a software-defined platform, it frees up those critical Analog Engineers for high-value activities such as the very complex integration work that is often required.”


The implications of this bizarre statement are mind-shattering. You cannot just put analog knowledge into the hands of anyone; that would be the Holy Grail of semiconductor design. Unlike in the 1967 Jefferson Airplane classic “White Rabbit,” sung by Grace Slick, there are no magic pills. Analog expertise comes from decades of doing it. Anyone who thinks otherwise is foolish or naive… or both.


Evidence abounds. Just look at the history of the world’s early analog leaders and some of the geniuses behind them. In the career chart (Table 1 below), you’ll recognize the company names and probably most if not all the names of the engineers who drove their analog success. Much has been written by and about them.


analog asic2


Now, let’s piece together their interwoven connections and how they made the analog semiconductor business what it is today. George A. Philbrick Research is credited with the commercialization of tube-based operational amplifiers for analog computers, so it isn’t surprising that some of the industry’s top analog IC designers came from this institution. The ’60s represented a transition of analog design from vacuum tubes to silicon. Fairchild was an early magnet, recruiting Bob Widlar from one of its customers to integrate these tube designs into silicon. Widlar’s μA702 went into production in October 1964. The device set the direction for the industry for decades, at a whopping price of $300. In 1965, his μA709, which followed the μA702, became another technical and commercial success, but by then Widlar had moved on to National Semiconductor along with several former Fairchild engineers.


Detailed bios for each of these folks are available online and worth reading. The interesting thread here is the correlation of their employment history with the changes in analog IC leadership. Fairchild almost had a chance, but its inexperienced management lost critical mass early as resources were deployed to such things as a poorly defined F8 microprocessor, video games, and digital watches. In the late ’70s National’s dominance emerged and remained well into the ’80s. When its core analog team left in the early ’80s to form Linear Technology, the handwriting was on the wall, and National tried unsuccessfully to diversify into markets like memory and processors. Linear Technology built momentum through the ’80s and ruled the ’90s, ’00s and ’10s (and still does so today, under the guise of Analog Devices) with unparalleled innovation coupled with superior management. (See Table 2 below.)


There is an inherent delay between a company’s buildup of an analog team and its emergence as a dominant contender. Standard product analog chips that must work perfectly in hundreds or thousands of different applications take time, upwards of 3-5 years to design and even more time to get designed into systems and ramped to volume production. It takes fortitude and staying power.


analog asic3


Throughout their careers, these Gurus mentored hundreds of engineers who themselves have mentored hundreds more. They are what made these Silicon Valley analog companies so great. It wasn’t UC Berkeley or Stanford University. It wasn’t putting a ton of analog knowledge into a software-defined platform. It was side by side collaboration in an employment environment that was highly mobile.  Knowledge spread rapidly. This is where analog is learned and perfected. The disciples of these heroes continue to drive analog IC innovation in Silicon Valley today.


The scope of analog is immense, covering everything that comes into contact with the physical world as either an input or an output and much of what goes on in between. Your application may encompass any combination of complex analog signal chain elements, each of which requires very specific skills and expertise to properly execute into an ASIC chip. When evaluating the analog skills needed for your project, be sure your intended supplier has the matching skilled resources. The more you understand about Analog ASICs the better prepared you will be in selecting the best supplier for your needs.


Occasionally, Analog ASIC requirements evolve from existing products using off-the-shelf analog ICs. The customer is driven to an ASIC implementation by a need to lower costs, improve reliability, and shrink the size of his product, and sometimes to hide his design and thus protect his intellectual property. Since the existing solution uses products with public datasheets, preparing a proposal for their integration is somewhat straightforward. Even the pretenders can do that much. Performance parameters for the off-the-shelf components are published and understood. But if the customer desires some performance improvement, where even minimal analog design effort is needed to define and quote the new part, the pretenders often fall flat on their faces, if not during the quote process then most certainly during execution.


More often, the Analog ASIC requirement is a completely new circuit, perhaps initially envisioned with some off-the-shelf components, but incorporating considerable invention on the part of the customer and also on the part of the ASIC supplier. Sometimes a spec or partial spec for the system is available as a starting point, and maybe a combined schematic/block diagram. Just bidding on such a requirement may require several days or weeks of engineering evaluation, high-level design and some simulation to determine whether what the customer is requesting is even feasible, as was the case with a recent project from a sensor manufacturer. The MEMS pressure sensor was extremely well designed and defined, but the calibration and signal-conditioning implementation was a blank slate… almost. The customer’s requirement at first seemed unobtainable; in fact, they told us that three of the seven companies they approached had said exactly that. Critical to the success of the design were the following:


  1. ADC: 22 bits
  2. Sensor accuracy: ±40 Pa over the range 0 Pa to +200 kPa
  3. Equivalent noise contribution from the ASIC: <10 LSB
  4. Analog and digital compensation for gain and offset
  5. Digital compensation for linearity with a 3rd-order equation
  6. Digital compensation for temperature dependency of gain and offset with a 3rd-order equation
  7. Internal temperature sensor
  8. FIFO for both pressure and temperature data
  9. Variable output data rate
  10. I2C and SPI support
  11. 64-byte OTPROM
  12. Standby current: 0.015 µA
  13. Die size under 1.0 mm²
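To put a few of these numbers in perspective, here is a quick back-of-the-envelope calculation (our own arithmetic, not part of the customer's spec, and assuming a unipolar conversion range spanning the full 0 to +200 kPa):

```python
# Relate the 22-bit ADC, the <10 LSB noise target, and the +/-40 Pa
# accuracy budget from the spec list above. The unipolar full-scale
# assumption is ours, purely for illustration.

FULL_SCALE_PA = 200_000.0   # assumed span: 0 Pa .. +200 kPa
ADC_BITS = 22               # spec item 1
NOISE_LSB = 10              # spec item 3: ASIC noise < 10 LSB
ACCURACY_PA = 40.0          # spec item 2: +/- 40 Pa

lsb_pa = FULL_SCALE_PA / (2 ** ADC_BITS)   # pressure resolved per LSB
noise_pa = NOISE_LSB * lsb_pa              # worst-case ASIC noise in Pa

print(f"1 LSB      = {lsb_pa * 1000:.1f} mPa")   # ~47.7 mPa
print(f"ASIC noise = {noise_pa:.2f} Pa")         # ~0.48 Pa
print(f"Budget     = +/-{ACCURACY_PA} Pa")

# Under these assumptions the permitted ASIC noise is only about 1%
# of the end-to-end accuracy budget - an aggressive analog target.
```

In other words, if the full scale really maps onto the full converter range, the ASIC is being asked to keep its noise contribution two orders of magnitude below the system accuracy spec.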


Initially we had to determine whether the customer’s request violated any laws of physics, i.e., whether it was even possible to build such a chip on a single piece of silicon. The customer was skeptical, since three companies had already told them it was not. The challenge was to identify a wafer fab process capable of meeting the design criteria (not all of which are shown above). There wasn’t one. However, thanks to the breadth of knowledge of the ASIC design team, they were able to define modifications to an existing process that would eliminate the barriers. Meetings were held with the fab’s process engineers, who then went back and compiled characterization data to compare with the requests. In one case, noise, the characterization data showed that by using certain new architectures suggested by the design team and confirmed by the fab’s engineers, the tight noise figure could be met without process changes.


The takeaway here is that the ASIC design team knew enough about the fab processes to explore and identify not just the sources of noise, but architectural techniques to minimize its effect on the chip’s performance and thus meet the customer’s expectations.


Once this hurdle was cleared, the team broke the requirements down into manageable pieces. A high-level block diagram was created (see Figure 2 below), and then each critical section was simulated to verify feasibility.


analog asic4


Not only did each block have to be conceptualized at a high level, but an estimate of its area also had to be calculated to determine whether the cost budget would be met. Overhead for interconnect area was considered. Architectures were rethought to squeeze out the last 0.001 mm² of die size. And much of this was done before any contractual development commitment was received from the customer. This is the hidden effort behind a budgetary proposal. During the ensuing weeks, over a dozen conference calls and more than 150 emails were exchanged directly between the design teams (customer, foundry, and ASIC supplier) to clarify, critique, and confirm all aspects of the requirement and its Analog ASIC embodiment, resulting in a thorough and detailed development and production proposal to the customer.
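The kind of area-and-cost budgeting described above can be sketched as follows. Every block name, area figure, overhead factor and wafer price here is invented purely for illustration; the real numbers from this project are not public:

```python
import math

# Hypothetical die-area budget of the style described in the text.
# All figures below are made up for illustration only.
block_areas_mm2 = {
    "adc_22bit": 0.30,   # data converter
    "afe_pga":   0.15,   # analog front end / gain stage
    "dsp_comp":  0.20,   # digital compensation logic
    "otp_io":    0.12,   # OTPROM, interfaces, pads support
}
interconnect_overhead = 0.15   # assumed +15% for routing and spacing

die_area = sum(block_areas_mm2.values()) * (1 + interconnect_overhead)
print(f"Estimated die area: {die_area:.3f} mm^2 (target: <1.0 mm^2)")

# Very rough cost sanity check: gross dies per 200 mm wafer, ignoring
# edge loss, scribe lanes and yield, at an assumed wafer price.
wafer_area_mm2 = math.pi * (100.0 ** 2)      # 200 mm diameter wafer
gross_dies = wafer_area_mm2 // die_area
wafer_cost_usd = 1500.0                      # assumed wafer price
print(f"~{gross_dies:.0f} gross dies -> ~${wafer_cost_usd / gross_dies:.3f}/die")
```

Even this toy version shows why the team fought over the last 0.001 mm²: die area sets the gross die count, which divides directly into the wafer cost.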


Admittedly, this example was more complex than most, but it offers insight into the work that needs to be done when asked to quote an Analog ASIC.


Some ASIC companies will offer shortcuts, suggesting the use of acquired IP and/or cell libraries to save time. Had IP from generic cell libraries been used in this design, the chip would have failed, not meeting many of the performance criteria, especially noise. (An excellent discussion of noise by Intersil can be found here: ) Signal noise can interfere with both analog and digital signals. However, the amount of noise necessary to corrupt a digital signal is much higher, because digital signals use discrete electrical pulses to convey “ones and zeros,” and those pulses require a great deal of noise before they can be confused with one another. Digital designers can’t ignore noise, but in the realm of a mixed-signal or predominantly Analog ASIC, their expertise in sourcing and reducing it is often insufficient. Thoroughly understanding noise, its sources, and the proper means of reducing it comes from decades of experience, not textbooks.
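The gulf between digital noise margins and analog noise floors can be illustrated with the textbook Johnson-Nyquist thermal-noise relation, v_n = sqrt(4kTRB). The component values and the digital noise-margin figure below are our own illustrative picks, not from the article:

```python
import math

# RMS thermal noise of a resistor: v_n = sqrt(4 * k * T * R * B).
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature, K
R = 10e3           # assumed 10 kOhm source resistance
B = 1e6            # assumed 1 MHz bandwidth

v_noise = math.sqrt(4 * k * T * R * B)   # ~12.9 uV rms
print(f"Thermal noise floor: {v_noise * 1e6:.2f} uV rms")

# A CMOS digital input typically tolerates a noise margin on the order
# of hundreds of millivolts before a '1' reads as a '0'; a precision
# analog front end may have to resolve signals close to the floor above.
digital_noise_margin = 0.4   # assumed ~400 mV, order of magnitude only
print(f"Digital margin / analog floor: {digital_noise_margin / v_noise:.0f}x")
```

A single resistor's thermal noise sits tens of thousands of times below a typical digital switching threshold, which is exactly why "the pulses require a lot of noise to be confused" for digital, while analog designers must account for every microvolt.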


It is not possible for every Analog ASIC company to have the depth of technical resources to cover 100% of all possible requirements. What is important is that the one you select has as much as possible and can easily contract for the rest. Thanks to the seven decades of analog history discussed above, Silicon Valley has the greatest density of contract and semi-retired analog experts in the world, readily available to supplement an ASIC team.


Two final cautionary thoughts about selecting your Analog ASIC supplier:


  1. If language is a barrier, walk away. If you can’t carry on a conversation directly with the people who will be designing your “critical path” chip, move on.
  2. If the company you are considering does not have a physical address on their website that you can visit, walk away.


It’s not difficult to weed out the analog imposters once you know what to look for.






This is a guest post by Bob Frostholm, VP of Marketing & Sales at JVD Analog ASIC Semiconductors (San Jose, CA).


analog asic5.png

Bob has held Sales, Marketing and CEO roles at established and startup analog semiconductor companies for more than 45 years. Bob was one of the original marketers behind the ubiquitous 555 timer chip. After 12 years with Signetics-Philips, Fairchild and National Semiconductor, he co-founded his first startup in 1984, the Scottish-based Integrated Power, which was sold to Seagate in 1987. He subsequently joined Sprague’s semiconductor operations in Massachusetts and helped orchestrate its sale to Japan-based Sanken Electric, creating what is now known as Allegro MicroSystems. In 1999, as VP of Sales and Marketing, he rejuvenated sales revenues and facilitated the sale of SEEQ Technology to LSI Logic. Bob is the author of several technical articles and white papers and, in his spare time, an occasional screenplay. Other interests include home remodeling, amateur radio, Porsche club activities and grandkids. Email: