Meta’s commitment to AI infrastructure: its own chips, an optimized data center and a supercomputer
May 19. (Portaltic/EP) –

Meta has shared its plans for building its artificial intelligence (AI) infrastructure, which include its own processors and an optimized data center, in order to support new applications based on this technology and the metaverse.

Meta's vice president and head of infrastructure, Santosh Janardhan, has stated that the company is "executing an ambitious plan" to build the next generation of its AI infrastructure.

While AI already plays a prominent role in the global infrastructure that the company began rolling out in 2010 with its first data center, Meta's plans for the future of AI and the metaverse include a new next-generation data center designed to support both training and inference on future generations of AI hardware.

The company has highlighted the efficiency of this optimized data center, thanks to a liquid cooling system that will cool the hardware and a network connecting "thousands of high-performance AI chips", which will allow training to scale.

Meta is also working on proprietary accelerator chips, known as MTIA, designed for internal workloads and offering greater computing power than CPUs. The company states on its official blog that, when combined with GPUs, they offer "better performance, lower latency and greater efficiency for each workload".

This work is complemented by Meta's Research SuperCluster supercomputer, whose second construction phase has now been completed. Meta maintains that it is "one of the fastest AI supercomputers in the world", with a computing power of nearly 5 exaflops.

This supercomputer has 16,000 Nvidia A100 Tensor Core GPUs, accessible through a three-tier Clos network that provides full bandwidth to each of the 2,000 Nvidia DGX A100 training systems acting as compute nodes, as Meta has explained.
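As a back-of-the-envelope check, the figures above are mutually consistent. This sketch assumes two details not stated in the article: a standard DGX A100 system houses 8 A100 GPUs, and each A100 peaks at roughly 312 teraflops for 16-bit tensor operations.

```python
# Sanity check on the Research SuperCluster figures quoted above.
# Assumptions (not from the article): 8 A100 GPUs per DGX A100 system,
# and ~312 TFLOPS FP16/BF16 tensor-core peak per A100.
dgx_systems = 2_000
gpus_per_dgx = 8
peak_tflops_per_gpu = 312

total_gpus = dgx_systems * gpus_per_dgx
total_exaflops = total_gpus * peak_tflops_per_gpu / 1_000_000  # 1 exaflop = 10^6 TFLOPS

print(total_gpus)      # 16000, matching the 16,000 GPUs cited
print(total_exaflops)  # ~4.99, close to the "nearly 5 exaflops" cited
```

Under those assumptions, 2,000 systems of 8 GPUs each yield exactly the 16,000 GPUs cited, and their aggregate peak lands just under 5 exaflops.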

Various projects are currently running on it, such as a universal speech translator and Meta's own large language model, LLaMA, with 65 billion parameters.
