
NVIDIA’s A100 chips are the kings of AI: Google says its Tensor chips are faster and more efficient

Google data center

We live in tremendously interesting times. It has been a long time since we witnessed the emergence of a technology as promising, and at the same time as worrying, as artificial intelligence. Bill Gates, to cite just one example, speaks of the beginning of a “new era” and even points to a revolution that will affect various industries. Elon Musk, for his part, is calling for a pause in the development of the most powerful systems.

In recent months we have seen NVIDIA become a key player in the world of artificial intelligence. The US manufacturer’s chips, which offer great performance, computational density and scalability, were used to train famous models such as DALL·E and GPT-4. But it is not the only company in the sector. Google also has its own chips, and it claims they are better than NVIDIA’s.

Google also makes hardware for artificial intelligence

The Mountain View giant introduced its own processing units designed for artificial intelligence data centers in 2016. We are talking about Google’s TPUs (Cloud Tensor Processing Units), which have been improving over the years, are already used in 90% of the company’s machine learning tasks, and are crucial to the operation of its search engine and YouTube.


Google data center

Also, like Microsoft’s Azure (which is powered by NVIDIA hardware), Google Cloud allows outside companies and organizations to train their AI models using its cloud infrastructure. In other words, instead of setting up their own data centers with extremely expensive and hard-to-find hardware, they rent the necessary computing power from these companies.
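To give an idea of what renting that computing power looks like in practice, here is a minimal sketch, assuming a Google Cloud TPU VM with JAX installed; the toy computation is purely illustrative and is not part of Google’s or any customer’s actual training code.

```python
# Minimal sketch: running a computation on Cloud TPU cores with JAX.
# Assumes a Cloud TPU VM where JAX's TPU backend is already set up.
import jax
import jax.numpy as jnp

# List the accelerator cores JAX can see (TPU cores on a TPU VM;
# on an ordinary machine this falls back to CPU or GPU).
print(jax.devices())

# A jitted matrix multiplication: the kind of dense linear algebra
# that TPUs are designed to accelerate.
@jax.jit
def matmul(a, b):
    return jnp.matmul(a, b)

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (4096, 4096))
b = jax.random.normal(key, (4096, 4096))

result = matmul(a, b)
print(result.shape)  # (4096, 4096)
```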

In fact, as explained in an official publication, Midjourney, the popular text-to-image generator whose most recent version has surprised us with its precision and realism, was trained using Google Cloud infrastructure. In other words, chips from the company led by Sundar Pichai have played a leading role in one of the most famous AI models of the moment.

So are Google’s chips really any good? According to a scientific paper published this Wednesday by the company, the Google TPU v4, launched in 2021, is up to 1.7 times faster and 1.9 times more energy efficient than the NVIDIA A100, launched in 2020. The comparisons, according to the paper’s authors, correspond to training AI models of the same size.


It should be noted that while most of today’s data centers are powered by NVIDIA A100 chips, companies are already migrating to the NVIDIA H100, which improves on the performance of its predecessor. The Mountain View company has not said whether it is working on a new version of its TPUs, although presumably it is doing so, so as not to be left behind in the artificial intelligence race.


In Xataka: “It will not solve the challenges”: Bill Gates also reacts to Elon Musk’s letter on the risk of AI
