
The Future Of Embedded Monitoring Part 2

This is a guest post by Stephen Crosher, CEO of Moortec

 

The rate of product development faces very real challenges as the pace of silicon technology evolution begins to slow. Today, we are squeezing the most out of transistor physics that is essentially derived from 60-year-old CMOS technology. To maintain the pace of Moore’s Law, it is predicted that by 2030 we will need transistors to be a sixth of their current size. Reducing transistor size increases density, which presents its own problem: as Dennard scaling breaks down, the power consumed in a given area of silicon increases. Combined with the limits of parallelism in multi-core architectures, our ability to develop increasingly energy-efficient silicon is simply going the wrong way!
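To make that arithmetic concrete, here is a minimal sketch of the scaling trade-off; the scale factor and behaviour are illustrative assumptions, not foundry data:

```python
# A minimal sketch of the scaling arithmetic described above.
# All figures are illustrative assumptions, not foundry data.

def power_density_ratio(k, voltage_scales=True):
    """Relative power density after shrinking linear dimensions by 1/k.

    Classic Dennard scaling: C ~ 1/k, V ~ 1/k, f ~ k, area ~ 1/k^2,
    so power density (C * V^2 * f / area) stays constant.
    Post-Dennard: supply voltage barely scales, so density grows.
    """
    c = 1 / k                             # capacitance per device shrinks
    v = 1 / k if voltage_scales else 1.0  # voltage scaling stalls post-Dennard
    f = k                                 # frequency rises with shorter channels
    area = 1 / k**2                       # device area shrinks quadratically
    return (c * v**2 * f) / area

k = 6  # transistors a sixth of their current size, per the 2030 prediction
print(power_density_ratio(k, voltage_scales=True))   # ~1.0: the old free lunch
print(power_density_ratio(k, voltage_scales=False))  # ~36x: why chips run hot
```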

 

As we descend through the silicon geometries, the variability of the manufacturing process at the advanced nodes widens. Our grip on thermal conditions loosens, which means we can no longer assume a power-reduction dividend simply by moving to the next node. Dynamic fluctuations in supply voltage across the chip threaten to starve the very digital logic that underpins the chip’s functionality. These factors, combined with the increasing urgency to cut the power consumption of super-scale data systems and to reduce global carbon emissions in both the manufacture and the use of electronics, mean that we must think smart and seek new approaches. We need to innovate.
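As a hedged illustration of what in-chip supply monitoring enables, here is a toy guard-band check; the nominal voltage, threshold and trace below are invented for the example:

```python
# A sketch of the kind of check an on-chip supply monitor enables:
# flag samples where VDD dips below a guard-band threshold.
# The readings and thresholds here are invented for illustration.

VDD_NOMINAL = 0.75       # volts, assumed nominal supply for an advanced node
DROOP_THRESHOLD = 0.95   # flag anything below 95% of nominal

def find_droops(samples, nominal=VDD_NOMINAL, threshold=DROOP_THRESHOLD):
    """Return (index, value) pairs where the supply dips below the guard band."""
    limit = nominal * threshold
    return [(i, v) for i, v in enumerate(samples) if v < limit]

# Example trace: a transient droop at sample 3 as logic activity spikes.
trace = [0.751, 0.748, 0.730, 0.702, 0.718, 0.746, 0.750]
print(find_droops(trace))   # [(3, 0.702)]
```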

 

C’mon, we’ve heard all this before!

 

I’m not the first to report our impending technological gloom and won’t be the last. ‘Gloom mongering’ over the silicon industry has happened since, well, the beginning of the silicon industry!

 

As a species we can be smart. We know that if we can see and understand something, we have a better chance of controlling it. The more data we have, the more efficiencies we can gain.

 

The adoption of monitoring systems has two phases, reflecting our inherent curiosity as humans. First comes ‘realisation’: the discovery that the ability to view inside an entity otherwise considered a black box brings enlightenment and presents an opportunity. Second comes ‘evolution’: once data is being gathered from a system that until this point had not been visible, we seek to improve its quality, accuracy and granularity, increasing the ‘data intelligence’ of the information we gather, contextualising the dynamic circuit conditions, and aiming to identify trends and pull signatures or patterns out of a sea of data. See the previous blog, ‘Talking Sense with Moortec – The Future of Embedded Monitoring Part 1’.
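As a rough sketch of that ‘evolution’ phase, consider pulling a trend out of a noisy on-die temperature trace; the readings, window and slope threshold below are illustrative assumptions:

```python
# A minimal sketch of moving from raw readings to contextualised data:
# smooth a noisy sensor trace and flag a sustained upward drift.
# The readings, window and slope threshold are illustrative assumptions.

def rolling_mean(samples, window=4):
    """Smooth a sensor trace so the underlying trend shows through."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def rising_trend(samples, window=4, slope=0.5):
    """Flag a sustained rise: the smoothed signal climbing more than
    `slope` per sample over the last window - a crude drift signature."""
    smooth = rolling_mean(samples, window)
    return smooth[-1] - smooth[-window] > slope * (window - 1)

# Noisy on-die temperature trace (deg C) with a genuine upward drift.
temps = [62.1, 61.8, 62.5, 63.9, 64.2, 65.8, 66.9, 68.3]
print(rising_trend(temps))   # True: the trend survives the noise
```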

 

What’s next?

 

Information needs to be good to be of any value. I have had many conversations outlining that the perfect embedded monitoring system would be infinitely accurate, infinitely small, with zero latency and zero power! As a provider of embedded monitoring subsystems for the advanced nodes, we’re not there yet, but we are trying! Until we reach that ideal, SoC developers need to be aware of the area overhead of sensor systems. Although sensors are relatively small, at their core they are often analog by design, which does not necessarily scale with shrinking geometries, unlike the neighbouring logic circuits.
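A back-of-the-envelope sketch makes the overhead point concrete; the die size, per-sensor area and counts are invented for illustration only:

```python
# A back-of-the-envelope sketch of the sensor area overhead mentioned
# above. All figures are invented assumptions for illustration only.

CHIP_AREA_MM2 = 100.0     # assumed die size
SENSOR_AREA_MM2 = 0.01    # assumed area per analog sensor instance

def sensor_overhead(num_sensors, chip_area=CHIP_AREA_MM2,
                    sensor_area=SENSOR_AREA_MM2):
    """Percentage of die area consumed by the monitoring subsystem."""
    return 100.0 * num_sensors * sensor_area / chip_area

# Analog sensors don't shrink with the logic, so the same count costs
# relatively more on a smaller die.
for n in (16, 64, 256):
    print(f"{n} sensors -> {sensor_overhead(n):.2f}% of die area")
```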

 

For this reason we must seek circuit topologies and schemes that reduce the silicon area occupied by the sensors themselves. To minimise area impact and make the best use of in-chip sensors in terms of placement, such matters are often best considered during the architecting phases of SoC development, rather than as a floor-planning afterthought. Sensor subsystems are increasingly the critical foundation of chip power management and performance optimisation. Getting them wrong can lead to existential device stress and potentially immense reputational damage to the companies in the technological food chain that build the larger products and systems used in today’s automotive, consumer and high-performance computing markets. Monitoring can therefore no longer be treated as a low-priority endeavour by development teams as they progress through the design flow.
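To give a flavour of that architecting-phase thinking, here is a hypothetical greedy placement sketch: given predicted hotspot coordinates, each new sensor goes where coverage is currently worst. The coordinates and counts are assumptions, not a real floor plan:

```python
# A hedged sketch of placement-phase reasoning: given predicted hotspot
# coordinates from floor-planning, greedily pick sensor sites so every
# hotspot ends up near a sensor. Coordinates and counts are illustrative.

import math

def greedy_placement(hotspots, num_sensors):
    """Pick sensor sites from the hotspot list: each new sensor goes on
    the hotspot currently farthest from any placed sensor."""
    sites = [hotspots[0]]
    while len(sites) < num_sensors:
        farthest = max(hotspots,
                       key=lambda h: min(math.dist(h, s) for s in sites))
        sites.append(farthest)
    return sites

# Predicted hotspots (mm) for a hypothetical 10x10 mm die.
hotspots = [(1, 1), (2, 8), (8, 2), (9, 9), (5, 5)]
print(greedy_placement(hotspots, 3))   # [(1, 1), (9, 9), (2, 8)]
```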

 

So, in our attempts to continue Moore’s Law and mitigate the breakdown of Dennard scaling, we need to innovate, and of course we will. Such innovation, however, will come from having a clearer view of the dynamic conditions deep within the chip, rather than from changing how the chip’s core function is implemented.

 

If you missed the first part of this blog, you can read it HERE

 

Watch out for our next blog, entitled Hyper-scaling of Data Centers – The Environmental Impact of the Carbon ‘Cloud’, which will be dropping mid-March!

 

Read more about Moortec’s technology and products here: https://moortec.com/
