We use ChatGPT, Claude, Copilot, or Gemini to generate answers to our questions as if nothing were happening. After all, in many cases it is free, right? It is for users at the moment, yes, but all those responses, not to mention the images or videos generated by AI, cost these companies a lot of money. How much? No one says.
Burning money. We have already talked about how OpenAI and the rest of the companies that have bet on AI are burning money as if there were no tomorrow. Sam Altman recently indicated that OpenAI won't be profitable until 2029; until then, if they get there, they will continue to lose money. A lot. One analysis of the management and operation of AI models shows their enormous cost: OpenAI, for example, is estimated to have lost $5 billion in 2024 alone!
Difficult to find out the real cost of AI. Companies have provided hardly any data on the internal cost of operating these services, although The Washington Post pointed out in June 2023 something that is probably true: AI chatbots lose money every time you use them. Independent studies had already estimated in 2023 that a simple question to ChatGPT cost 36 cents, and that OpenAI was spending $700,000 a day to keep its LLM running.
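Those two 2023 figures (36 cents per question, $700,000 per day) imply a rough query volume. A minimal back-of-envelope sketch, assuming both third-party estimates are accurate:

```python
# Back-of-envelope arithmetic using the 2023 estimates cited above.
# Both inputs are rough third-party figures, not official OpenAI data.
cost_per_query_usd = 0.36   # estimated cost of one ChatGPT question
daily_cost_usd = 700_000    # estimated daily cost of keeping the LLM running

implied_queries_per_day = daily_cost_usd / cost_per_query_usd
implied_yearly_cost_usd = daily_cost_usd * 365

print(f"Implied queries per day: {implied_queries_per_day:,.0f}")
print(f"Implied yearly running cost: ${implied_yearly_cost_usd / 1e6:,.1f} million")
```

At those rates, the figures imply roughly two million answered questions a day and over $250 million a year in running costs, and that was before reasoning models made each answer more expensive.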
And now the models want to reason (and that costs more). The launch of o1, OpenAI's "reasoning" model, was the starting signal for an alternative that theoretically makes fewer mistakes and that, although it takes longer to answer, does so in a more complete and precise way. But it is also more expensive: these systems actually generate several responses, analyze and review them, and keep the one they consider most appropriate to show the user. And that costs more, both for the AI company and for the user.
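The "generate several answers, review them, keep the best" idea is essentially best-of-n sampling. A minimal sketch, where `generate_candidate` and `score` are hypothetical stand-ins for the model call and the internal reviewer, not OpenAI's actual pipeline:

```python
import random

def generate_candidate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a real model call; each seed yields a different draft.
    random.seed(seed)
    filler = random.randint(1, 1000)
    return f"[{prompt}] draft #{seed} with {filler} tokens of reasoning"

def score(answer: str) -> float:
    # Hypothetical stand-in for the internal reviewer that rates each draft.
    return len(answer)  # toy heuristic: longer draft = "more complete"

def best_of_n(prompt: str, n: int = 4) -> str:
    # n model calls instead of one: this is why "reasoning" answers cost more.
    candidates = [generate_candidate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("Why is the sky blue?"))
```

The cost implication is direct: `n` candidate answers mean roughly `n` times the compute of a single response, before counting the review step.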
That's why the $200-a-month subscription is just the beginning. That OpenAI has launched a $200-a-month subscription to use its reasoning model, o1, without limits is a good sign of how expensive it is for the company to offer an "open bar" of artificial intelligence. Certain users and scenarios make intensive use of AI chatbots, and it is in those cases that companies will be able to justify offering increasingly powerful, and increasingly expensive, subscriptions.
But for images and videos, the cost is even higher. Generating text is the least resource-demanding task, and even that is expensive. The increase is hard to quantify because many factors are at play: server efficiency, the complexity and resolution of the image or video, and so on. Even so, generating an image likely costs several orders of magnitude more than generating text, and generating a video several orders of magnitude more than generating an image.
Lots of energy… John Hennessy, chairman of Alphabet, told Reuters that responses generated by an LLM consume 10 times more energy than a conventional Google search. The implications are enormous, also for the environment: data-center energy consumption is estimated to triple by 2030, above all because of the facilities being built to handle all these requests. Google and Microsoft alone already use more electricity than over 100 individual countries do.
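To put Hennessy's "10x" figure in perspective, Google's own older public estimate put a conventional search at around 0.3 Wh. A hedged back-of-envelope sketch, where the query volume is an assumption chosen only for scale, not measured data:

```python
# Illustrative energy arithmetic for the "10x a conventional search" claim.
search_wh = 0.3               # ~0.3 Wh per search: Google's old public estimate
llm_wh = search_wh * 10       # Hennessy's 10x multiplier => ~3 Wh per LLM answer

queries_per_day = 10_000_000  # hypothetical daily volume, chosen only for scale
daily_kwh = queries_per_day * llm_wh / 1000

print(f"~{llm_wh:.1f} Wh per LLM response")
print(f"~{daily_kwh:,.0f} kWh per day at {queries_per_day:,} queries")
```

Even at a modest ten million queries a day, that is tens of megawatt-hours daily for chat alone, which is why the data-center build-out matters so much.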
…and water. There is also the problem of the water needed to cool those data centers. According to a study cited by The Washington Post, asking an AI to write a 100-word email translates into consuming 519 milliliters of water.
Seeking efficiency. The hope, of course, is that these training and inference costs will be significantly reduced in the future. The development of new chips and more efficient data centers, including ones that do not waste water like those Microsoft has designed, can help with that.
AI on premises. The other alternative that can help is the progressive adoption of "Edge AI" models that run locally. Apple Intelligence is a good example: the company proposes running certain functions (note: not all of them) directly on the device, which lets us not only save energy but also guarantee the privacy of our requests and conversations with our phone or computer.
Image | Xataka with Grok
In Xataka | OpenAI has broken its ceiling. Their Pro plan is a jump to the ultra-premium that makes all the sense in the world