AMD has expanded its CPU solutions for servers and data centers, adding new EPYC processors aimed at more specific workloads: cloud-native computing with the AMD EPYC Bergamo, and technical computing with the AMD EPYC Genoa-X. In addition to these processors, which are already available, AMD also previewed its new MI300X accelerator for generative AI, based on CDNA 3. Together they offer a complete solution for today’s in-demand workloads, including scalable AI.
The manufacturer has shown its most advanced generative AI accelerator, the MI300X, as part of its MI300 series. This next-generation CDNA 3-based accelerator supports up to 192 GB of HBM3 memory, offering the speed and capacity needed for training and inference on large language models. Thanks to this memory capacity, customers will be able to run a large language model such as Falcon-40B on a single accelerator. AMD also showcased its Instinct platform, which combines eight MI300X accelerators in an industry-standard design, as a complete solution for AI inference and training workloads.
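The memory-capacity claim can be checked with simple back-of-the-envelope arithmetic. The sketch below is illustrative, not AMD's own figures: the parameter count (~40 billion for Falcon-40B) and byte sizes per precision are assumptions, and real deployments need extra memory for activations and the KV cache beyond the raw weights.

```python
def model_weights_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate memory needed just for the model weights, in GB."""
    return num_params * bytes_per_param / 1e9

falcon_40b_params = 40e9  # ~40 billion parameters (assumption for illustration)
HBM3_CAPACITY_GB = 192    # MI300X memory capacity from the announcement

# Weights in 16-bit precision (fp16/bf16): 2 bytes per parameter
fp16_gb = model_weights_gb(falcon_40b_params, 2)

# Weights in 32-bit precision (fp32): 4 bytes per parameter
fp32_gb = model_weights_gb(falcon_40b_params, 4)

print(f"fp16 weights: {fp16_gb:.0f} GB -> fits in 192 GB: {fp16_gb < HBM3_CAPACITY_GB}")
print(f"fp32 weights: {fp32_gb:.0f} GB -> fits in 192 GB: {fp32_gb < HBM3_CAPACITY_GB}")
```

Under these assumptions, a ~40B-parameter model needs roughly 80 GB in 16-bit precision, which is why it fits comfortably within a single 192 GB accelerator, whereas GPUs with far less on-board memory would need to split the model across several devices.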
Alongside the MI300X, AMD also announced that its Instinct MI300A, a one-of-a-kind APU for HPC and AI workloads, is already being tested with select customers.
Along with these hardware solutions, AMD also presented the ROCm software ecosystem for its data center accelerators, an open-source AI software platform. Networking solutions were shown as well, including the ultra-low-latency AMD Pensando DPU, along with the software stack that positions it among the smartest, highest-performing DPUs. AMD also highlighted its next generation of DPUs, codenamed Giglio, which aims to improve on the performance and energy efficiency of the current generation and is expected to be available by the end of 2023.