Meta recently introduced custom processors, a new data center design, and a supercomputer for AI workloads. The tech giant wants a share of the market for AI solutions and applications, but also to keep control of its own infrastructure.
Meta feels the pressure from other tech giants in AI and wants to keep pace. The company has therefore made a number of important announcements recently, particularly on the hardware side. In its own words, Meta wants to gain and maintain control over its entire AI infrastructure stack.
Processors and data center design
The most important announcement is the arrival of custom processors for AI workloads. The Meta Training and Inference Accelerator (MTIA) targets inference workloads specifically and is designed to deliver more compute power, better performance, lower latency, and greater efficiency for them.
In addition, this ASIC accelerator is tailored to Meta's internal workloads. The intention is to deploy the MTIA accelerator alongside GPUs, most likely Nvidia GPUs.
The tech giant is also introducing a new data center design, better suited to running AI hardware for both training and inference. Think of liquid-cooled AI hardware and a high-performance AI network environment that connects thousands of AI processors in large training clusters.
Last but not least, Meta announced the Research SuperCluster (RSC) AI supercomputer. According to the tech giant, it is one of the fastest AI training supercomputers in the world. The machine comprises 16,000 GPUs connected by a three-level Clos network fabric that provides bandwidth for 2,000 training systems.
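To give a sense of the scale a three-level Clos fabric enables, the sketch below applies the textbook fat-tree formulas (k pods, k/2 edge and aggregation switches per pod, (k/2)² core switches, k³/4 end hosts at full bandwidth). These are generic formulas for illustration only; Meta has not published the RSC fabric at this level of detail, so the switch radix chosen here is purely an assumption.

```python
def clos_capacity(radix: int) -> dict:
    """Capacity of a classic 3-level folded-Clos (fat-tree) fabric
    built from identical switches with `radix` ports each.

    Textbook fat-tree arithmetic -- illustrative only, not Meta's
    actual RSC design.
    """
    half = radix // 2
    return {
        "pods": radix,                  # k pods
        "edge_switches": radix * half,  # k pods x k/2 edge switches
        "core_switches": half * half,   # (k/2)^2 core switches
        "hosts": radix * half * half,   # k^3 / 4 hosts at full bandwidth
    }

# A radix-40 fat-tree happens to land exactly on the 16,000-endpoint
# scale mentioned for RSC: 40**3 / 4 = 16,000.
print(clos_capacity(40)["hosts"])  # 16000
```

The key property such a topology provides is full bisection bandwidth: any host can talk to any other at line rate, which is what large synchronous AI training jobs demand.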
Read also: Meta reduces water consumption in data centers