Tuesday, May 28, 2024

This year, Meta will use proprietary processors developed in-house to power its AI drive.

According to an internal company document seen by Reuters on Thursday, Facebook’s parent company Meta Platforms (META.O) aims to deploy in its data centers this year a new version of a custom processor meant to support its artificial intelligence (AI) push.

As the company races to launch AI products, the chip, a second generation of an in-house silicon line Meta unveiled last year, could help reduce Meta’s reliance on the Nvidia (NVDA.O) chips that dominate the market and rein in the spiraling costs of running AI workloads.

The world’s largest social media company has been scrambling to expand its computing capacity for the power-hungry generative AI products it is integrating into hardware like its Ray-Ban smart glasses and apps like Facebook, Instagram, and WhatsApp, reconfiguring data centers and stockpiling arsenals of specialized chips at a cost of billions of dollars.

A successful deployment of Meta’s own chip could save hundreds of millions of dollars in annual energy costs and billions in chip purchasing costs at the scale at which the company operates, according to Dylan Patel, founder of the silicon research group SemiAnalysis.

For tech companies, the chips, infrastructure, and energy required to run AI applications have become a massive money pit.

A Meta spokesperson confirmed that the updated chip will go into production in 2024 and said it will work in tandem with the hundreds of thousands of off-the-shelf graphics processing units (GPUs)—the preferred processors for artificial intelligence—that the company is purchasing.

“We see our internally developed accelerators to be highly complementary to commercially available GPUs in delivering the optimal mix of performance and efficiency on Meta-specific workloads,” a spokeswoman said in a statement.

Last month, Meta CEO Mark Zuckerberg said the company intended to have about 350,000 of Nvidia’s flagship “H100” processors—the most sought-after GPUs used in artificial intelligence—by the end of the year. Combined with chips from other suppliers, he said, Meta would amass the equivalent of 600,000 H100s’ worth of compute capacity.

The deployment of its own chip as part of that plan marks a positive turn for Meta’s in-house AI silicon project, after executives decided in 2022 to pull the plug on the chip’s first iteration.

The company instead opted to spend billions of dollars on GPUs from Nvidia, which has a near-monopoly on the AI process called training—feeding massive data sets into models to teach them how to perform tasks.

Like its predecessor, the new chip, known internally as “Artemis,” performs only the process known as inference, in which models use their algorithms to make ranking judgments and generate responses to user prompts.

Reuters reported last year that Meta is also working on a more ambitious chip that, like GPUs, could perform both training and inference.

The Menlo Park, California-based company last year disclosed details of the first iteration of its Meta Training and Inference Accelerator (MTIA) program, presenting that chip as a learning opportunity.

Despite those early stumbles, Patel said, an inference chip could handle Meta’s recommendation models far more efficiently than power-hungry Nvidia processors.

“There is a lot of money and power being spent that could be saved,” he stated.
