Introduction to Artificial Intelligence in ASICs

Considered little more than a buzzword only a few years ago, artificial intelligence has become a force to be reckoned with today, and it will matter even more in the years ahead. It is one of the technologies expected to dominate the coming decades, and recent developments in electronics and computing point in the same direction.

Major organizations and corporations are looking for ways to incorporate AI into their systems and to build innovative products with this technology at the center. Semiconductor and chip manufacturers have taken an interest as well, working on chips dedicated specifically to running artificial intelligence workloads.

What are ASICs?

Application-Specific Integrated Circuits, or ASICs, are chips designed and manufactured for a specific purpose, task, or application. Unlike Field Programmable Gate Arrays (FPGAs), which can be reconfigured and reprogrammed, ASICs are fixed at fabrication and cannot be modified afterwards. In exchange, ASICs tend to offer the best efficiency, performance, and power consumption compared to FPGAs.

ASICs, FPGAs, CPUs, GPUs and AI

There are essentially four silicon options that can be used to train and run artificial intelligence: CPUs (Central Processing Units), GPUs (Graphics Processing Units), FPGAs (Field Programmable Gate Arrays), and ASICs, as mentioned above. CPUs offer a great degree of programmability, but they deliver less performance than optimized, dedicated hardware. FPGAs are extremely flexible and perform well, making them ideal for specialized applications that need a small volume of reprogrammable chips; they are, however, difficult and expensive to design, and they still fall short of GPUs and ASICs in raw performance and power efficiency. GPUs are fast and flexible, well suited to graphics as well as the matrix operations and scientific algorithms that underlie most machine learning workloads. ASICs offer the best of all worlds: a custom chip designed to accomplish one very specific task with high performance and efficiency at low power.
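
To make the comparison concrete, the core workload these chips compete on is dense linear algebra. Below is a minimal, illustrative Python sketch (using plain NumPy rather than any vendor-specific API) of the matrix multiply at the heart of a neural-network layer; the shapes and values are arbitrary placeholders.

```python
import numpy as np

# A dense neural-network layer reduces to a matrix multiply plus a bias:
#   y = x @ W + b
# This highly parallel arithmetic is what GPUs and AI ASICs are built to accelerate.
batch, in_features, out_features = 64, 1024, 512              # arbitrary example sizes
x = np.random.randn(batch, in_features).astype(np.float32)    # input activations
W = np.random.randn(in_features, out_features).astype(np.float32)  # layer weights
b = np.zeros(out_features, dtype=np.float32)                  # bias terms

y = x @ W + b
print(y.shape)  # (64, 512)
```

On a CPU this runs serially across a handful of cores; GPUs and AI ASICs spread the same multiply-accumulate operations across thousands of parallel units, which is where their performance advantage comes from.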

ASICs and AI

ASICs are increasingly being developed to support artificial intelligence and its associated technologies. A prime example is Google's Tensor Processing Units (TPUs), a family of ASICs designed for machine learning and optimized to run open-source machine learning software such as TensorFlow. Other leaders in the tech sphere are launching similar efforts, such as Fujitsu's Deep Learning Unit (DLU). Google's TPU is a good illustration of how an ASIC can be dedicated to a very narrow set of functions and handle the workload in a massively parallel manner. Intel has also hinted that it will release its AI ASICs commercially for the first time later this year.
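
For a sense of how such a chip is exposed to developers, the sketch below shows how TensorFlow 2.x code can be pointed at a Cloud TPU. This is a hedged illustration rather than an excerpt from vendor documentation: it assumes a TensorFlow 2.x environment with an attached Cloud TPU, and the model definition is a placeholder.

```python
import tensorflow as tf

# Connect to an attached Cloud TPU (assumes a TPU VM or a Colab TPU runtime).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the computation across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Placeholder model: any Keras model built inside this scope runs on the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
```

The point of the example is that the ASIC stays invisible to the model code itself: the same Keras layers run unchanged, while the distribution strategy maps the matrix operations onto the TPU's dedicated hardware.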
