New AI Chip on the Block

54 billion transistors and 5 petaflops of performance

Alex Moltzau
May 15, 2020

It is exciting to cover hardware in artificial intelligence when new technology is released. In May 2020, NVIDIA announced its A100 Tensor Core GPU, with 54 billion transistors and 5 petaflops of AI performance. It also anchors what NVIDIA calls its most powerful end-to-end AI and HPC data centre platform.

VentureBeat covered this release in an article posted on the 14th of May, and this article is based on that coverage together with the information on NVIDIA's website.

NVIDIA CEO Jensen Huang called it the ultimate instrument for advancing AI.

He highlighted the fight against COVID-19, saying the chip will support efforts to advance understanding of the virus.

It looks fascinating. An unbelievable amount of work must have gone into making this possible.

Nvidia’s A100 chip has 54 billion transistors. Image Credit: Nvidia.

With that number of transistors (the building blocks of all electronics), the chip promises 20 times the performance of the previous-generation Volta chip.

Take that in — 20 times the performance. That is immense.

NVIDIA claims to put the capabilities of an entire data centre into a single rack (although this has been called hyperbole), and the 7-nanometer chip, codenamed Ampere, can take the place of many of the AI systems in use today.

“For instance, to handle AI training tasks today, one customer needs 600 central processing unit (CPU) systems to handle millions of queries for datacenter applications. That costs $11 million, and it would require 25 racks of servers and 630 kilowatts of power. With Ampere, Nvidia can do the same amount of processing for $1 million, a single server rack, and 28 kilowatts of power.” -VentureBeat
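To make those claimed savings concrete, here is a minimal sketch in Python that computes the implied reduction factors from the figures in the quote above (the variable names are my own, for illustration):

```python
# Figures quoted by VentureBeat for one customer's AI training workload.
cpu_baseline = {"cost_usd": 11_000_000, "racks": 25, "power_kw": 630}
ampere_setup = {"cost_usd": 1_000_000, "racks": 1, "power_kw": 28}

# Implied reduction factor for each metric (baseline / Ampere).
for metric in cpu_baseline:
    factor = cpu_baseline[metric] / ampere_setup[metric]
    print(f"{metric}: {factor:.1f}x reduction")

# cost_usd: 11.0x reduction
# racks: 25.0x reduction
# power_kw: 22.5x reduction
```

So, taking the quoted numbers at face value, that is roughly an 11x cost reduction, 25x fewer racks, and a 22.5x cut in power draw.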

The same hardware could also replace dedicated inference servers, since the A100 is designed to handle both training and inference workloads.

According to VentureBeat, the first order for the chips is going to the U.S. Department of Energy's (DOE) Argonne National Laboratory, which will use the cluster's AI and computing power to better understand and fight COVID-19. According to NVIDIA, several of the world's largest companies have also placed orders, and the University of Florida will be the first U.S. institution of higher learning to receive DGX A100 systems.

NVIDIA revealed the DGX SuperPod, a cluster of 140 DGX A100 systems capable of achieving 700 petaflops of AI computing power. Combining the 140 systems with NVIDIA Mellanox HDR 200 Gbps InfiniBand interconnects, the company built its own next-generation DGX SuperPod AI supercomputer for internal research in areas such as conversational AI, genomics, and autonomous driving.
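The arithmetic behind that 700-petaflops figure is simple, assuming 5 petaflops of AI performance per DGX A100 system (the figure NVIDIA quotes for a single DGX A100):

```python
# NVIDIA's quoted AI performance for one DGX A100 system, in petaflops.
pflops_per_system = 5

# The DGX SuperPod described here combines 140 such systems.
num_systems = 140

print(num_systems * pflops_per_system)  # 700 petaflops
```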

A DGX SuperPod. Image Credit: Nvidia.

It took only three weeks to build the SuperPod, according to NVIDIA's Paresh Kharya, and the cluster is one of the world's fastest AI supercomputers, achieving a level of performance that previously required thousands of servers.

To help customers build their own A100-powered data centres, NVIDIA has released a new DGX SuperPod reference architecture.

NVIDIA also launched the NVIDIA DGXpert program, which brings DGX customers together with the company's AI experts, and the NVIDIA DGX-Ready Software program, which helps customers take advantage of certified, enterprise-grade software for AI workflows.

The chips are made by TSMC using a 7-nanometer process. NVIDIA DGX A100 systems start at $199,000 and are shipping now through NVIDIA Partner Network resellers worldwide.

This is #500daysofAI and you are reading article 347. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for days 300–400 is AI, hardware, and the climate crisis.

Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau