Illustration by Undraw

The Google Edge TPU

The combination of custom hardware, open software, and state-of-the-art AI algorithms

Alex Moltzau
5 min read · Apr 5, 2020

--

In recent days I have begun writing about artificial intelligence and hardware.

I realised that I knew nothing about one of the most interesting hardware inventions related to AI.

TPU is an acronym for Tensor Processing Unit. According to Wikipedia the definition is the following:

“A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning, particularly using Google’s own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.”

Google needs to analyse vast amounts of data, so it makes sense for the company to have hardware built for exactly that workload.

TPUs have also been used in some of the pivotal moments in the history of AI, particularly in the progress made by Google DeepMind.

“Google has stated that they were used in the AlphaGo versus Lee Sedol series of man-machine Go games, as well as in the AlphaZero system which produced Chess, Shogi and Go playing programs from the game rules alone and went on to beat the leading programs in those games. Google has also used TPUs for Google Street View text processing, and was able to find all the text in the Street View database in less than five days. In Google Photos, an individual TPU can process over 100 million photos a day. It is also used in RankBrain which Google uses to provide search results.”

The Edge TPU is Google’s purpose-built application-specific integrated circuit (ASIC) designed to run inference at the edge.

According to Google, the Edge TPU delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.

The chip was made to be complemented by Google’s open-source software.

“The chip has been specifically designed for Google’s TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks.”
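To make that concrete, here is a minimal sketch of what preparing a TensorFlow model for the Edge TPU can look like: the model has to be converted to a fully integer-quantised TensorFlow Lite file before it can be compiled for the chip. The tiny model and the random calibration data below are placeholders of my own, not anything from Google’s documentation.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: any trained Keras model would take its place.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_dataset():
    # A small sample of real input data lets the converter calibrate
    # the quantisation ranges; random data stands in for it here.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU only executes fully integer-quantised operations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting file would then be passed through Google’s edgetpu_compiler tool before it can run on the device itself.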

Google still uses central processing units (CPUs) and graphics processing units (GPUs) for other types of machine learning.

As mentioned in a previous article, others have begun to bring to market devices with somewhat similar intentions.

Underneath is a picture of a tensor processing unit (TPU).

Picture of a tensor processing unit by Zinskauf.

The GPU and the TPU are related, but they are not the same technology.

Before GPUs were invented, the processing for displays was handled by the CPU; it was later offloaded to a dedicated graphics processing unit (GPU) alongside the CPU. At the start there was little in the GPU hardware, whereas the pipeline of a modern GPU is immensely complex.

Tensor units in modern graphics hardware do AI-based upscaling, which is needed to keep the calculations for real-time ray tracing feasible.

What is ray tracing?

“Ray tracing” in computer graphics means rendering an image by simulating the paths of light rays through a scene and their interactions with the objects in it. It should not be confused with the term as it is applied to physics (calculating the paths of waves and particles through varying media).
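To make the graphics sense of the term concrete, here is a toy ray–sphere intersection in Python. It has nothing to do with TPU hardware; it simply shows the kind of geometric calculation a ray tracer repeats millions of times per frame, and all the numbers are made up for illustration.

```python
import numpy as np

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    A single ray is cast from `origin` along the unit vector `direction`;
    a renderer would repeat this for every pixel and every object.
    """
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    discriminant = b * b - 4.0 * c  # direction is unit length, so a = 1
    if discriminant < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - np.sqrt(discriminant)) / 2.0
    return t if t > 0 else None

# One ray fired along the z-axis at a sphere five units away.
hit = ray_sphere_intersect(
    origin=np.array([0.0, 0.0, 0.0]),
    direction=np.array([0.0, 0.0, 1.0]),
    center=np.array([0.0, 0.0, 5.0]),
    radius=1.0,
)
print(hit)  # 4.0: the ray hits the near surface of the sphere
```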

Why is this important?

Well, consider what happens in the identification of a vehicle with convolutional deep learning, as illustrated in the article by Prabhu.

The network has to “understand” an object, in a programming sense rather than a human one, well enough to classify it for a given machine learning task; a minimal sketch of such a network follows below.
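Here is a small, hypothetical convolutional classifier in Keras for a “vehicle or not” task. The layer sizes are my own illustrative choices, not taken from Prabhu’s article, and it is untrained; it only shows the shape of the idea.

```python
import tensorflow as tf

# Hypothetical binary classifier: "vehicle" vs "not vehicle".
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level edges and textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # parts such as wheels and windows
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability the image contains a vehicle
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```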

It is not always like this, of course, but it gives an idea of why this matters.

For Google, the task has been to represent or understand reality at least to some extent, and to quantify that information in order to convey it or to make decisions.

That brings us back to the Edge TPU and how it is promoted by Google.

  • “End-to-end AI infrastructure. Edge TPU complements Cloud TPU and Google Cloud services to provide an end-to-end, cloud-to-edge, hardware + software infrastructure for facilitating the deployment of customers’ AI-based solutions.
  • High performance in a small physical and power footprint. Thanks to its performance, small footprint, and low power, Edge TPU enables the broad deployment of high-quality AI at the edge.
  • Co-design of AI hardware, software and algorithms. Edge TPU isn’t just a hardware solution, it combines custom hardware, open software, and state-of-the-art AI algorithms to provide high-quality, easy to deploy AI solutions for the edge.
  • A broad range of applications. Edge TPU can be used for a growing number of industrial use-cases such as predictive maintenance, anomaly detection, machine vision, robotics, voice recognition, and many more. It can be used in manufacturing, on-premise, healthcare, retail, smart spaces, transportation, etc.”

As mentioned previously, it is how these elements mesh together that is Google’s selling point.

It is the combination of custom hardware, open software, and state-of-the-art AI algorithms that provides high-quality, easy-to-deploy AI solutions for the edge.

Google mentions that it can be deployed with Coral, an aspect I will explore further another day; a small taste of what that looks like is sketched below.
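As a rough preview, this is roughly what running inference on a Coral Edge TPU can look like with the pycoral library. It assumes pycoral is installed, an Edge TPU device is attached, and a model already compiled with edgetpu_compiler exists; the file name and the random input image are placeholders of my own.

```python
import numpy as np
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify

# "model_quant_edgetpu.tflite" is a placeholder name for a model that has
# already been quantised and compiled for the Edge TPU.
interpreter = make_interpreter("model_quant_edgetpu.tflite")
interpreter.allocate_tensors()

# Feed one (here random) image sized to the model's expected input.
width, height = common.input_size(interpreter)
image = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
common.set_input(interpreter, image)

interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=3):
    print(c.id, c.score)
```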

This is #500daysofAI and you are reading article 307. I am writing one new article about or related to artificial intelligence every day for 500 days.

--

Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau