The AI Chip Market 2020
A range of actors on the AI chip market
Nvidia recently launched a new AI chip, which has attracted a lot of attention.
ZDNet’s Tiernan Ray wrote an in-depth analysis of the chip architecture.
However, there are many other AI chips out there.
On the 21st of May, George Anadiotis mapped out the AI chip landscape of 2020 in ZDNet, and his article is the subject of this one. Below, I sum up a few of the key points he makes.
Beyond Nvidia's new AI chip, Anadiotis sees Intel and Graphcore as the main challengers.
He notes that Nvidia’s lead does not just lie in hardware; he argues the software and partner ecosystem may be the hardest part for the competition to match.
Nvidia may be challenged on economics, others on performance.
- “Intel has been working on its Nervana technology for a while. At the end of 2019, Intel made waves when it acquired startup Habana Labs for $2 billion. As analyst Karl Freund notes, after the acquisition Intel has been working on switching its AI acceleration from Nervana technology to Habana Labs.”
- “Another high profile challenger is GraphCore. The UK-based AI chip manufacturer has an architecture designed from the ground up for high performance and unicorn status. […] From Dell’s servers to Microsoft Azure’s cloud and Baidu’s PaddlePaddle hardware ecosystem, GraphCore has a number of significant deals in place. GraphCore has also been working on its own software stack, Poplar. In the last month, Poplar has seen a new version and a new analysis tool.”
According to Anadiotis, both Intel and Graphcore are attempting to innovate on hardware while also building out their software stacks and market presence.
He also mentions the startup Run:AI and its announcement of $13 million in funding. “Run:AI offers a software layer to speed up machine learning workload execution, on-premise and in the cloud.” The company works with Amazon Web Services (AWS).
Run:AI bundles workloads through this layer (which is likely more complex than it sounds). One capability they describe is “fractionalizing” the GPU: running separate jobs within a single GPU.
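To make the idea concrete, here is a minimal, hypothetical sketch of fractional GPU allocation as first-fit bin packing. This is not Run:AI's actual implementation or API; the function, job names, and fractions are my own illustrative assumptions.

```python
# Toy scheduler: pack jobs that each request a fraction of a GPU onto as
# few physical GPUs as possible (first-fit bin packing). Purely
# illustrative -- real GPU sharing also involves memory isolation,
# time-slicing, and driver support.

def schedule(jobs, num_gpus):
    """Assign each (job_name, gpu_fraction) pair to a GPU index.

    Returns a dict {job_name: gpu_index}; raises if capacity runs out.
    """
    free = [1.0] * num_gpus          # remaining capacity per GPU
    placement = {}
    for name, fraction in jobs:
        for gpu in range(num_gpus):
            if free[gpu] >= fraction:
                free[gpu] -= fraction
                placement[name] = gpu
                break
        else:
            raise RuntimeError(f"no capacity left for job {name!r}")
    return placement

jobs = [("train-a", 0.5), ("infer-b", 0.25), ("infer-c", 0.25), ("train-d", 0.5)]
print(schedule(jobs, num_gpus=2))
```

In this toy model the three smaller jobs share one GPU while the last job spills onto the second, which is the economic point of fractionalization: fewer idle GPU cycles.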
Another startup mentioned is InAccel, built around the premise of: “…providing an FPGA manager that allows the distributed acceleration of large data sets across clusters of FPGA resources using simple programming models.”
Field Programmable Gate Arrays (FPGAs) are semiconductor devices built around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. (A configurable logic block is the basic repeating logic resource on an FPGA.)
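The configurability comes largely from lookup tables (LUTs) inside the CLBs: a k-input LUT stores a truth table and can therefore implement any boolean function of k inputs. Here is a hedged toy model in Python of that idea; a real LUT is a hardware circuit, not code, and the names here are my own.

```python
# Toy model of a 2-input lookup table (LUT), the building block inside an
# FPGA's configurable logic blocks. "Programming" the FPGA amounts to
# loading truth tables: the same structure can become any 2-input gate.

def make_lut(truth_table):
    """truth_table: list of 4 output bits, indexed by (a << 1) | b."""
    def lut(a, b):
        return truth_table[(a << 1) | b]
    return lut

xor_gate = make_lut([0, 1, 1, 0])   # configure the LUT as XOR
and_gate = make_lut([0, 0, 0, 1])   # same structure, reconfigured as AND

print(xor_gate(1, 0), and_gate(1, 1))
```

This is why FPGAs sit between fixed-function chips and general-purpose processors: the logic is reconfigurable after manufacturing, which is what InAccel's clusters exploit.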
According to Anadiotis, InAccel can help solve the scalable deployment of FPGA clusters: “InAccel’s orchestrator allows easy deployment, instant scaling, and automated resource management of FPGA clusters.”
Anadiotis argues that one should be hedging one’s bets in the AI chip market.
As described, there are several options and possibilities, all well worth exploring, or at least keeping an eye on.
This is #500daysofAI and you are reading article 353. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for days 300–400 is AI, hardware and the climate crisis.