NVIDIA builds accelerators for servers and data centers such as the H100, which is even offered in a version with HBM2e memory. These cards are backed by a great company and NVIDIA's software stack to deliver strong performance. And that performance has now received an improvement of up to 54% thanks to optimizations the NVIDIA team has made over the last six months, according to the company's recently released MLPerf 3.0 data.
These numbers show the NVIDIA H100 delivering gains of between 7% and 54% in the specific workloads these cards are designed for, among them:
- Image classification with ResNet
- Natural language processing with BERT Large
- Speech recognition with RNN-T
- Medical imaging with 3D U-Net
- Object detection with RetinaNet
- Recommendation with DLRM
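For context on what these scores measure: MLPerf inference results essentially boil down to throughput (samples processed per second) timed after a warm-up phase. A minimal sketch of that measurement pattern is shown below; `dummy_infer` is a hypothetical stand-in for a real model such as ResNet or BERT, not part of the MLPerf harness itself.

```python
import time

def measure_throughput(infer, batch, n_iters=100, warmup=10):
    """Return samples/second for an inference callable on a fixed batch."""
    # Warm-up runs are excluded from timing, as benchmark harnesses do,
    # so one-time costs (caches, JIT compilation) don't skew the result.
    for _ in range(warmup):
        infer(batch)
    start = time.perf_counter()
    for _ in range(n_iters):
        infer(batch)
    elapsed = time.perf_counter() - start
    return (n_iters * len(batch)) / elapsed

# Illustrative stand-in for a real model's forward pass.
def dummy_infer(batch):
    return [x * 2 for x in batch]

print(f"{measure_throughput(dummy_infer, list(range(32))):.0f} samples/s")
```

The real MLPerf suite adds strict accuracy targets and latency constraints on top of this, which is why software optimizations alone can move scores without any hardware change.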
In this way it not only surpasses the previous-generation NVIDIA A100 with almost 4.5 times its performance, but with these figures it also stands above competing solutions from Intel and Qualcomm. It should be noted, however, that these MLPerf figures do not reflect the optimizations made by the rest of the competitors which, as we can see with this H100, can considerably increase performance.
Juan Antonio Soto