Efficient AI Hardware and Compute
Optimising Hardware and Compute for AI Towards Increased Sustainability
What if AI applications could run incrementally more efficiently, saving carbon on each hardware platform they run on, by training one network and then specialising the compute for each platform?
Well, it might be a bridge to more AI applications on more hardware.
Despite the magnitude of the pandemic and many sad or unfortunate events, development in the field of artificial intelligence continues, and the climate crisis is still here. That is why I am particularly excited about the workshop to be held at ICLR in collaboration with climatechange.ai.
The main event is on April 26th with additional events on April 27th–30th.
ICLR is the International Conference on Learning Representations.
This year it is a virtual conference, originally planned to be held in Addis Ababa, Ethiopia.
As you can see, there is a rich representation from various large technology companies, particularly Google, Nvidia and Facebook.
One thing that interested me was seeing a video prepared prior to the conference.
There is a challenge to get the right mix of computation and hardware for training.
This is a project from MIT HAN Lab.
MIT HAN Lab: Hardware, AI and Neural-nets Accelerate Deep Learning Computing https://hanlab.mit.edu.
It seems the lab is named after Song Han.
Song Han is an assistant professor at MIT EECS. Dr. Han received his Ph.D. degree in Electrical Engineering from…
According to MIT:
“The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there’s still a training efficiency issue: Each model has to be selected then trained from scratch for its platform architecture.”
Training once and then specialising for deployment is the main idea, since inference can be challenging across different hardware.
In this way they can save compute and thereby reduce energy use and carbon emissions.
To achieve this, they use a technique called progressive shrinking.
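To make the idea concrete, here is a minimal sketch of how a once-for-all network could be specialised for different platforms without retraining. All names, numbers, and the toy latency model below are my own illustrative assumptions, not the HAN Lab implementation: progressive shrinking is shown only as a training schedule that unlocks the largest sub-network settings first, and specialisation as picking the biggest sub-network that fits a platform's latency budget.

```python
import itertools

# Hypothetical search space of sub-networks inside one trained
# "once-for-all" super-network (dimensions and values are illustrative).
DEPTHS = [2, 3, 4]       # blocks per stage
WIDTHS = [0.75, 1.0]     # channel width multipliers
KERNELS = [3, 5, 7]      # convolution kernel sizes

def estimated_latency_ms(depth, width, kernel):
    """Toy latency model: cost grows with every dimension.
    A real system would measure or predict latency per platform."""
    return depth * width * kernel

def progressive_shrinking_schedule():
    """Order in which sub-network settings are unlocked during training:
    the largest configurations come first, smaller ones are added later,
    so small sub-networks share weights with the already-trained large one."""
    return [
        (d, w, k)
        for k in sorted(KERNELS, reverse=True)
        for d in sorted(DEPTHS, reverse=True)
        for w in sorted(WIDTHS, reverse=True)
    ]

def specialise(latency_budget_ms):
    """Pick the largest sub-network that fits a platform's latency budget.
    No retraining happens here -- only selection from the super-network."""
    candidates = [
        (d, w, k)
        for d, w, k in itertools.product(DEPTHS, WIDTHS, KERNELS)
        if estimated_latency_ms(d, w, k) <= latency_budget_ms
    ]
    return max(candidates, key=lambda c: estimated_latency_ms(*c), default=None)

# One network is trained once; each platform gets its own sub-network.
phone = specialise(latency_budget_ms=10)    # tight budget (e.g. mobile)
server = specialise(latency_budget_ms=30)   # generous budget (e.g. GPU server)
```

The design point is that the per-platform cost collapses from "train a new model from scratch" to "select a sub-network", which is where the compute and carbon savings would come from.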
If this type of computation can lessen carbon footprints it will be highly beneficial. It may also, to a larger extent, enable more AI applications to run well on different kinds of hardware – certainly a step forward.
This is #500daysofAI and you are reading article 325. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for day 300–400 is about AI, hardware and the climate crisis.