NVIDIA is not content to be one of the big winners of the artificial intelligence (AI) boom. The American company keeps pushing its business model beyond graphics solutions aimed at individual users like us. In other words, it aims to consolidate its role as a leading technology company in the field of high-performance computing (HPC).
Signs of what is setting the pace of NVIDIA’s evolution as a company come to us directly from Taipei, Taiwan, which is hosting COMPUTEX, one of the most important technology events of the year. There, the head of the American company, Jen-Hsun Huang, announced important AI-related developments that have not gone unnoticed.
NVIDIA Grace Hopper Superchip on the way
It’s no secret that we live in a world that demands ever more powerful data centers. Since these complex computer systems are made up of a wide variety of components, there are several ways to improve their performance. One of them, for example, is to optimize communication between those components by implementing improved interconnect architectures.
In this vein, NVIDIA has started production of the GH200 Grace Hopper Superchip, a solution that combines a Grace-architecture CPU (with 72 ARM cores and up to 480 GB of LPDDR5X memory) with the mammoth Hopper-architecture H100 GPU (with 528 Tensor cores and 80 GB of HBM3 memory). All of this is interconnected with the manufacturer’s own highly efficient NVLink-C2C system.
The progress promised by this “superchip” is worth noting. In a configuration that paired a Grace CPU with an H100 GPU over a traditional PCIe link, overall performance would be substantially lower than with this combined approach. According to the manufacturer, NVLink-C2C increases the bandwidth between the CPU and the GPU roughly sevenfold compared to PCIe.
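The “about seven times” claim can be sanity-checked with a quick back-of-envelope calculation. The figures below are assumptions on our part, not from the announcement: NVLink-C2C is publicly rated at 900 GB/s of total bandwidth, while a PCIe Gen5 x16 link offers roughly 128 GB/s bidirectional.

```python
# Back-of-envelope check of NVIDIA's "about seven times" bandwidth claim.
# Both figures are assumptions, not taken from the announcement itself.
NVLINK_C2C_GBPS = 900      # GB/s, assumed NVLink-C2C total bandwidth
PCIE_GEN5_X16_GBPS = 128   # GB/s, assumed PCIe Gen5 x16 bidirectional

speedup = NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink-C2C vs PCIe Gen5 x16: ~{speedup:.1f}x")  # ~7.0x
```

Under these assumed figures, the ratio lands almost exactly on the sevenfold improvement NVIDIA quotes.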
But that is not all: as we said, NVIDIA’s commitment to the HPC world seems to be serious. Those led by Jen-Hsun have announced the integration of the GH200 Grace Hopper Superchip into a DGX solution that they have called a “supercomputer”. And the label makes sense, because this is not a standard data center node but a true high-performance cluster.
We are talking about the DGX GH200, a colossal piece of high technology (aimed at training “giant models” of AI) made up of 256 GH200 superchips interconnected through the NVLink system, which promises up to 48 times more bandwidth than the same configuration based on the previous generation. The result? 144 TB of combined memory and 1 ExaFLOP of performance.
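Working backwards from the quoted 144 TB gives a feel for what each superchip contributes. Note this is our own inference, not a spec sheet: assuming the figure uses binary terabytes (1 TB = 1024 GB), it implies 576 GB per GH200, somewhat above the 480 GB + 80 GB quoted earlier, which may simply reflect a higher-capacity HBM variant in the DGX configuration.

```python
# Inferring per-superchip memory from NVIDIA's quoted aggregate.
# Assumption: the 144 TB figure uses binary terabytes (1 TB = 1024 GB).
superchips = 256
total_tb = 144

per_chip_gb = total_tb * 1024 / superchips
print(f"Implied memory per GH200: {per_chip_gb:.0f} GB")  # 576 GB
```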
NVIDIA has already outlined who the main customers of the DGX GH200 will be. Specifically, Google, Meta and Microsoft are waiting to start using the system for AI workloads when it becomes available later this year. Of course, in theory any other company capable of paying for it could also gain access. And while we don’t know its price, it certainly won’t be cheap.
Images: NVIDIA
In Xataka: After a calamitous 2022, the big tech companies are sweeping the stock market. And all thanks to one factor