Quantifying the Carbon Emissions of Machine Learning
Measuring Emissions in the Field of Artificial Intelligence
Reading the summaries of the papers submitted to the NeurIPS workshop, I was curious to see whether I could find any related to the certification or monitoring of CO2 emissions. If you want to examine the other papers, feel free to do so at:
Tackling Climate Change with Machine Learning
I found a paper called “Quantifying the Carbon Emissions of Machine Learning” and looked it up on arXiv, a free distribution service and open-access archive. The authors list a few factors that determine emissions:
- The location of the server used for training.
- The energy grid that it uses.
- The length of the training procedure.
- The make and model of hardware on which the training takes place.
They have called their tool the Machine Learning Emissions Calculator. With this tool it is possible to estimate, at least to some extent, the emissions attributable to a machine learning workload.
“Practically speaking, it is hard to estimate exactly the amount of CO2eq emitted by a cloud server in a given location because the information regarding the energy grid that it is connected to is rarely publicly available. However, if we assume that all servers are connected to local grids at their physical location, we are able to make an estimation of the amount of CO2eq that they emit using public data sources.”
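Under the assumption quoted above, the estimate reduces to a simple product of average power draw, training time, and grid carbon intensity. The sketch below is my own illustration of that calculation; the function name and the example numbers are assumptions for demonstration, not taken from the paper.

```python
def estimate_co2eq_kg(power_watts, hours, carbon_intensity_g_per_kwh):
    """Rough CO2eq estimate (kg) for a training run.

    power_watts: average power draw of the training hardware (W)
    hours: length of the training procedure (h)
    carbon_intensity_g_per_kwh: carbon intensity of the local grid (gCO2eq/kWh)
    """
    energy_kwh = power_watts / 1000 * hours       # energy consumed, in kWh
    return energy_kwh * carbon_intensity_g_per_kwh / 1000  # grams -> kg

# Illustrative example: a 250 W GPU training for 100 hours
# on a low-carbon grid (20 g/kWh) versus a coal-heavy grid (800 g/kWh).
low = estimate_co2eq_kg(250, 100, 20)    # -> 0.5 kg CO2eq
high = estimate_co2eq_kg(250, 100, 800)  # -> 20.0 kg CO2eq
```

The same training job can thus differ by more than an order of magnitude in emissions depending only on where the server is located.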
Different regions have different emission intensities: a hydro-powered grid such as Québec’s emits on the order of 20 gCO2eq/kWh, while coal-heavy grids can exceed 700 gCO2eq/kWh.
They have a set of actionable items:
- Quantify Your Emissions. Being informed about the factors that affect the quantity of carbon emissions produced by ML research is the first step to making positive changes.
- Choose Cloud Providers Wisely. Different cloud providers prioritize energy efficiency and carbon neutrality to different degrees.
- Select Data Center Location. While many cloud providers are carbon neutral, some of their data centers may still be carbon intensive due to the local grid that they are connected to, whereas others will be low carbon and powered solely by renewable energy sources.
- Reduce Wasted Resources. Grid search is still often used in practice, in spite of its low efficiency both in terms of model performance and environmental impact. However, it has been shown that random search (among other methods) is not only a straightforward replacement but also has the potential to significantly accelerate hyperparameter search.
- Choose More Efficient Hardware. The choice of computing hardware can also have a major impact on ML emissions.
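The “reduce wasted resources” point above can be made concrete. A minimal sketch of replacing grid search with random search under a fixed compute budget follows; the search space and the stand-in objective function are hypothetical, chosen only to make the comparison runnable.

```python
import itertools
import random

# Hypothetical hyperparameter space; illustrative only.
learning_rates = [1e-4, 1e-3, 1e-2]
batch_sizes = [16, 32, 64, 128]

def validation_score(lr, bs):
    # Stand-in for an expensive training-and-evaluation run;
    # peaks at lr=1e-3, bs=64 in this toy example.
    return -abs(lr - 1e-3) - abs(bs - 64) / 1000

# Grid search: one training run per combination (12 runs here).
grid_best = max(itertools.product(learning_rates, batch_sizes),
                key=lambda p: validation_score(*p))

# Random search: a fixed budget of runs, often far fewer than the
# full grid, drawn from a continuous range rather than a coarse grid.
random.seed(0)
budget = 5
samples = [(10 ** random.uniform(-4, -2), random.choice(batch_sizes))
           for _ in range(budget)]
random_best = max(samples, key=lambda p: validation_score(*p))
```

Every trial skipped is a training run, and its associated energy use, that never happens, which is why the paper frames search strategy as an emissions issue and not just a performance one.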
These are elaborated on in the paper, a short eight-page read that is certainly worth your time.
This is #500daysofAI and you are reading article 278. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for days 200–300 is national and international strategies for artificial intelligence. I have decided to spend the last 25 days of my AI strategy writing focusing on the climate crisis.