Photo by @lazycreekimages

Six Areas in AI Hardware Research

Areas Studied by IBM Research’s AI Hardware Team

Alex Moltzau
4 min read · Apr 4, 2020


The company that owns the most patents related to machine learning techniques, and thus arguably one of the largest bodies of intellectual property in the field of artificial intelligence, is IBM.

IBM Research has been developing new devices and hardware architectures to increase the processing power and speed of AI.

So I thought it would be useful to map out the research that is publicly available.

There is a section dedicated to this on the IBM Research website:

Topmost banner of the hardware section of the IBM website

I cannot map in depth all the research that is being done; however, I would like to list the main areas.

They state on the website that:

“Currently, GPUs and specialized digital CMOS accelerators are the state-of-the-art in DNN hardware. However, the ever-increasing complexity of DNNs and the data they process have led to a quest for the next quantum improvement in processing efficiency. The AI hardware team is exploring new devices, architectures and algorithms to improve processing efficiency as well as enable the transition from Narrow AI to Broad AI. Approximate computing, in-memory computing, machine intelligence and quantum computing are all part of the computing approaches being explored for AI workloads.”

The focus is on improving the current state of the art.

“IBM Research is tackling this challenge with device, architecture, packaging, system design, and algorithm design.”

Six areas outlined by IBM within their AI hardware research:
  1. Digital AI Cores are new accelerators for existing semiconductor technologies that use reduced precision to speed computation and decrease power consumption (a minimal sketch of reduced-precision training follows this list).
    - Ultra-Low-Precision Training of Deep Neural Networks
    - DeepTools: Compiler and Execution Runtime Extensions for RaPiD AI Accelerator
    - 8-Bit Precision for Training Deep Learning Systems
    - Unlocking the Promise of Approximate Computing for On-Chip AI Acceleration
    - Deep Learning Training Times Get Significant Reduction
  2. Analog AI Cores enable in-memory storage and processing of data to speed computation and yield exponential gains in computational efficiency (a toy simulation of this idea follows this list).
    - Unveiling Analog Memory-based Technologies to Advance AI at VLSI
    - The Future of AI Needs Better Compute: Hardware Accelerators Based on Analog Memory Devices
    - IBM Scientists Demonstrate Mixed-Precision In-Memory Computing for the First Time; Hybrid Design for AI Hardware
    - IBM Scientists Demonstrate In-memory Computing with 1 Million Devices for Applications in AI
    - Steering Material Scientists to Better Memory Devices
    - Dual 8-Bit Breakthroughs Bring AI to the Edge
  3. Heterogeneous integration. AI applications drive the need for system-level optimization of AI hardware through heterogeneous integration of accelerators, memory and CPU. The AI Hardware Center will focus on interconnect solutions to enable high-speed, high-bandwidth connectivity between the different components.
  4. Quantum computing for machine learning. Quantum computing has emerged as a new computing paradigm with the potential to address problems that are intractable for today’s classical computers. Recently, Havlíček et al. demonstrated the use of a superconducting quantum processor to perform a traditional machine learning classification task. We believe there is significant potential in using quantum computing for machine learning by exploiting the exponentially large quantum state space (a simplified sketch of this kernel-based approach follows this list).
    - Researchers Put Machine Learning on Path to Quantum Advantage
    - Test a quantum classifier
  5. Machine intelligence is very different from machine learning. Machine intelligence involves the use of fast associative reasoning to mimic human intelligence. IBM Research is exploring machine intelligence by using the brain’s neocortex as a model for developing flexible systems that learn continuously — and without human supervision. A recent paper discusses unique structures in the human brain called filopodia that may have a role in fast learning.
    - Filopodia: A Rapid Structural Plasticity Substrate for Fast Learning
  6. AI-optimized systems.
    - Reaching the Summit: The next milestone for HPC
    - Distributed Deep Learning with IBM DDL and TensorFlow NMT
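
To make the “reduced precision” idea in item 1 more concrete, here is a minimal mixed-precision training step in PyTorch. It is only a sketch of the general principle: do most of the arithmetic in a smaller number format while keeping full-precision master weights. It uses 16-bit floats rather than the 8-bit formats in IBM’s papers, and it does not represent the RaPiD accelerator or any IBM software.

```python
# Minimal mixed-precision training step (16-bit illustration of the general
# reduced-precision idea; not IBM's 8-bit training method or RaPiD stack).
# Assumes a CUDA-capable GPU is available.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid float16 underflow

def train_step(x, y):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # forward pass runs mostly in float16
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()     # backward pass on the scaled loss
    scaler.step(optimizer)            # unscales gradients, then updates weights
    scaler.update()
    return loss.item()
```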
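
For the analog cores in item 2, the attraction of in-memory computing is that a matrix-vector product can be performed directly where the weights are stored, instead of shuttling data between memory and processor. The toy numpy simulation below is my own illustration, not IBM’s design: the weight matrix stands in for device conductances, and the added noise stands in for the imperfections of analog hardware.

```python
# Toy simulation of an analog in-memory matrix-vector multiply (illustration
# only; the noise model and sizes are arbitrary assumptions, not IBM's devices).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 256))   # weights "programmed" into the crossbar
x = rng.standard_normal(256)          # input signals applied to the columns

def analog_matvec(W, x, noise_std=0.05):
    """One analog read: row currents sum the products, with device noise."""
    programmed = W + rng.normal(0.0, noise_std, W.shape)            # programming error
    return programmed @ x + rng.normal(0.0, noise_std, W.shape[0])  # read-out noise

exact = W @ x
approx = analog_matvec(W, x)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```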
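
For the quantum classification work in item 4, the core idea in Havlíček et al. is to map each data point to a quantum state and use state overlaps as a kernel for a classical classifier. The sketch below is my own simplified, classically simulated version with a non-entangling two-qubit feature map, so it carries no quantum advantage; it only illustrates the kernel construction, not IBM’s circuits, hardware or Qiskit code.

```python
# Classically simulated "quantum kernel" classifier (simplified illustration of
# the kernel idea; the feature map and data here are assumptions, not IBM's).
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Encode a 2-feature point as a product state of two qubits."""
    def qubit(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.kron(qubit(x[0]), qubit(x[1]))   # 4-dimensional state vector

def quantum_kernel(A, B):
    """Kernel matrix of squared state overlaps between two sets of points."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (X[:, 0] > X[:, 1]).astype(int)            # toy labels for illustration

clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print("training accuracy:", clf.score(quantum_kernel(X, X), y))
```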

This wide array of advanced hardware efforts by the scientists at IBM is worth following and assessing going forward, although it must be expected that much of the research will not reach the public, and that they will likely have gotten further than what is displayed.

This is #500daysofAI and you are reading article 306. I am writing one new article about or related to artificial intelligence every day for 500 days.


Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau