There is growing concern about the energy demand, water consumption and carbon footprint of artificial intelligence. This is not armchair doomsaying, but a reality that is putting increasing pressure on the power grid and has pushed the International Energy Agency to convene a global summit. Google proposes a four-pronged strategy to tackle the problem.
The four “M”s. In a study published by IEEE, Google identifies four practices, which it calls “the 4Ms,” that it says can help large AI companies reduce the carbon footprint of their machine learning algorithms by a factor of 100 to 1,000 (the arithmetic is sketched after the list):
- Model: use more efficient machine learning architectures to cut computational needs by 3-10x
- Machine: use hardware specialized for AI to improve efficiency by 2-5x
- Mechanization: prefer cloud computing over local computing to reduce energy demand by 1.4-2x
- Mapping: optimize data center locations for available clean energy to cut emissions by 5-10x
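Because each “M” acts on a different layer of the stack, the gains multiply. A minimal back-of-the-envelope sketch of that compounding, using the factors listed above (note the low ends compound to roughly 42x, so the 100x floor implies above-minimum gains on at least some axes):

```python
import math

# Per-practice improvement factors cited above (low and high ends).
low  = {"model": 3, "machine": 2, "mechanization": 1.4, "mapping": 5}
high = {"model": 10, "machine": 5, "mechanization": 2, "mapping": 10}

# The practices target different layers (architecture, chip, facility,
# location), so their savings compound multiplicatively.
print(f"low end : ~{math.prod(low.values()):.0f}x smaller footprint")   # ~42x
print(f"high end: ~{math.prod(high.values()):.0f}x smaller footprint")  # 1000x
```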
David Patterson, a researcher at Google Research and lead author of the study, says that by following these four practices, the carbon footprint associated with AI training would shrink rather than grow.
M for Model. At the architectural level, new AI models increasingly incorporate advances aimed at improving their efficiency. Google, Microsoft, OpenAI and Meta use “knowledge distillation” to train smaller models that imitate a large “teacher” model while demanding less energy.
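To make the idea concrete, here is a minimal PyTorch sketch of the standard distillation objective (the generic technique from Hinton et al., 2015, not the specific recipe any of these companies uses): the smaller “student” is trained to match both the true labels and the teacher's temperature-softened output distribution.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Standard knowledge-distillation objective.

    Mixes the usual cross-entropy on the true labels with a KL term
    that pushes the student's distribution toward the teacher's
    temperature-softened distribution.
    """
    # Hard-label loss: the student must still fit the ground truth.
    hard = F.cross_entropy(student_logits, labels)

    # Soft-label loss: match the teacher's softened probabilities.
    # The T^2 factor rescales gradients so both terms stay comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return alpha * hard + (1 - alpha) * soft
```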
Ever-larger models are still being trained, many of them never made available to users. In Google’s case, training these models accounts for 40% of energy demand, while “inference” on the user-facing models (generating the responses) accounts for the other 60%.
Counterintuitive as it may sound, the latest publicly released multimodal models, such as Gemini 1.5 Pro and GPT-4o, are also more efficient than their predecessors thanks to their ability to leverage different input modalities, such as images and code: they learn from less data and fewer examples than text-only models.
M for Machine. The vast majority of companies developing AI buy their hardware from Nvidia, which makes specialized chips. But more and more companies are following the “Google model” of developing their own hardware, among them Microsoft, OpenAI and, in China, Huawei.
Google has been using its own TPUs (tensor processing units, specialized for AI) for years. The latest generation, called Trillium, was announced in May and is 67% more energy-efficient than its predecessor, meaning it can perform more calculations with less energy, whether for training, fine-tuning or AI inference in Google’s data centers.
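Taking “67% more energy-efficient” to mean 1.67x the computation per unit of energy (an assumption about how the figure is defined), the energy cost per operation drops to about 60% of the previous generation:

```python
# Assumption: "67% more energy efficient" = 1.67x operations per joule.
relative_ops_per_joule = 1.67
relative_energy_per_op = 1 / relative_ops_per_joule
print(f"energy per operation: {relative_energy_per_op:.0%} of the prior TPU")
# -> ~60%, i.e. roughly 40% less energy for the same amount of work
```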
M for Mechanization. Another counterintuitive idea: cloud computing consumes less energy than computing in an on-premises data center. Cloud data centers, especially those dedicated to AI, contain tens of thousands more servers than an individual organization’s data center, and are designed with better power distribution and cooling systems, investments that pay for themselves at that scale.
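A common way to quantify this gap is Power Usage Effectiveness (PUE): total facility energy divided by the energy that actually reaches the IT equipment. A rough sketch with commonly cited figures (hyperscale fleets report PUEs near 1.1, and industry surveys put typical enterprise data centers around 1.6; both values are assumptions here, not from the study) lands inside the 1.4-2x range above:

```python
def total_energy(it_energy_kwh: float, pue: float) -> float:
    """Facility energy = IT load x PUE (cooling, power conversion, etc.)."""
    return it_energy_kwh * pue

it_load = 1_000.0                      # kWh of actual compute, same workload
cloud   = total_energy(it_load, 1.10)  # hyperscale fleet PUE (reported figure)
on_prem = total_energy(it_load, 1.60)  # typical enterprise PUE (survey figure)

print(f"on-prem / cloud energy ratio: {on_prem / cloud:.2f}x")  # ~1.45x
```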
Beyond the drawback of entrusting data to large cloud companies such as Amazon, Microsoft or Google, cloud data centers have another clear advantage: they are more modern, which means they run machines more specialized for AI training and inference.
M for Mapping. Another reason Google is calling for more cloud computing and less local computing is these companies’ commitment to renewable energy. Some of these large data centers already run on 90% carbon-free energy.
Big tech companies are siting their new data centers where renewable resources are abundant, including the water used to cool servers. As a result, companies like Google, Microsoft and Apple match 100% of their operations’ electricity consumption with renewables and are aiming for net-zero emissions by the end of this decade.
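The logic of “Mapping” reduces to a simple rule: for the same workload, emissions scale linearly with the carbon intensity of the local grid, so run the job where the grid is cleanest. A toy sketch (region names and gCO2/kWh values are made-up placeholders, chosen to land in the 5-10x range the paper cites):

```python
# Hypothetical grid carbon intensities in gCO2 per kWh (illustrative only).
grid_intensity = {
    "hydro-region": 50,
    "mixed-region": 200,
    "fossil-region": 350,
}

def emissions_kg(energy_kwh: float, region: str) -> float:
    """Emissions = energy consumed x carbon intensity of the local grid."""
    return energy_kwh * grid_intensity[region] / 1000.0

job_kwh = 100_000.0  # energy for one hypothetical training run
cleanest = min(grid_intensity, key=grid_intensity.get)
dirtiest = max(grid_intensity, key=grid_intensity.get)
ratio = emissions_kg(job_kwh, dirtiest) / emissions_kg(job_kwh, cleanest)
print(f"{cleanest} vs {dirtiest}: {ratio:.0f}x fewer emissions")  # 7x here
```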
Meanwhile, companies like Microsoft and OpenAI are not convinced that the supply of renewables can match the growing demand for energy and are already betting on expanding nuclear capacity, either with small modular reactors or by investing in fusion research.
Image | Google Cloud
At Xataka | The power grid is suffering, and not because of electric cars: because of the enormous demand for artificial intelligence