
Nvidia CEO Jensen Huang announces new AI chips: ‘We need bigger GPUs’

Nvidia unveiled a new generation of AI hardware, along with software for running AI models, at its developer conference in San Jose, as the chipmaker works to cement its position as the go-to supplier for AI companies.

Nvidia’s stock price has increased fivefold and its total revenue has more than tripled since OpenAI’s ChatGPT kicked off the AI boom in late 2022. Nvidia’s high-performance server GPUs are essential for training and deploying large AI models; Microsoft and Meta have spent billions of dollars on the chips.

The new generation of AI graphics processors is named Blackwell. The first Blackwell chip, the GB200, will ship later this year. Nvidia is enticing customers with more powerful chips to spur new orders, even as companies and software developers are still scrambling to get their hands on the current generation of “Hopper” H100s and similar chips.

“Hopper is fantastic, but we need bigger GPUs,” Nvidia CEO Jensen Huang said on Monday during the company’s developer conference in California.

Nvidia’s stock slipped more than 1% in extended trading on Monday.

The company also unveiled revenue-generating software called NIM, which will make it easier to deploy AI models, giving customers another reason to choose Nvidia chips over a growing field of competitors.

According to Nvidia executives, the company is becoming less of a mercenary chip supplier and more of a platform provider, like Microsoft or Apple, on which other companies can build their software.

“Blackwell’s not a chip, it’s the name of a platform,” Huang told reporters.

“The GPU was the sellable commercial product, and the software was all designed to help people use the GPU in different ways,” said Nvidia enterprise VP Manuvir Das in an interview. “Of course, we still do this. But the main change is that we now have a commercial software business.”

Das said Nvidia’s new software will make it easier to run programs on any of the company’s GPUs, including older models that are better suited for deploying AI than for building it.

“If you’re a developer, you’ve got an interesting model you want people to adopt, if you put it in a NIM, we’ll make sure that it’s runnable on all our GPUs, so you reach a lot of people,” Das went on to say.

Every two years, Nvidia upgrades its GPU architecture, resulting in a significant boost in performance. Many of the AI models released in the past year were trained on the company’s Hopper architecture, which was introduced in 2022 and is used in chips such as the H100.

Nvidia claims that Blackwell-based processors, such as the GB200, provide a significant performance boost for AI companies, with 20 petaflops of AI performance compared to 4 petaflops for the H100. Nvidia says that the increased processing capacity will allow AI companies to train larger and more complex models.

The chip incorporates what Nvidia refers to as a “transformer engine specifically built to run transformer-based AI, one of the core technologies underpinning ChatGPT.”

The Blackwell GPU, manufactured by TSMC, is a large chip that combines two separately fabricated dies. It will also be available as an entire server, the GB200 NVL72, which combines 72 Blackwell GPUs with other Nvidia parts designed for training AI models.

Amazon, Google, Microsoft, and Oracle will sell access to the GB200 via cloud services. The GB200 combines two B200 Blackwell GPUs and one Arm-based Grace CPU. Nvidia said that Amazon Web Services would develop a server cluster with 20,000 GB200 chips.

According to Nvidia, the system can deploy a model with 27 trillion parameters, far larger than even the biggest models today, such as GPT-4, which reportedly has 1.7 trillion parameters. Many AI researchers believe that larger models, with more parameters and more data, could unlock new capabilities.
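For a sense of scale, here is a rough back-of-the-envelope sketch (our illustration, not a figure from Nvidia) of why a 27-trillion-parameter model outstrips any single GPU, assuming compact 8-bit weights:

```python
# Back-of-the-envelope arithmetic for a 27-trillion-parameter model.
# Assumes 1 byte per parameter (8-bit weights); real deployments also
# need memory for activations and, during training, optimizer state.
params = 27e12          # 27 trillion parameters
bytes_per_param = 1     # assumed 8-bit precision

weights_tb = params * bytes_per_param / 1e12
print(f"Weights alone: ~{weights_tb:.0f} TB")   # ~27 TB

h100_memory_gb = 80     # an H100 carries 80 GB of high-bandwidth memory
gpus_needed = params * bytes_per_param / (h100_memory_gb * 1e9)
print(f"H100s needed just to hold the weights: ~{gpus_needed:.0f}")  # ~338
```

Even under these generous assumptions, the weights alone would fill hundreds of today’s GPUs, which helps explain why Nvidia pitches rack-scale systems like the GB200 NVL72 for models of this size.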

Nvidia did not offer a price for the new GB200 or the devices it is used in. Analysts estimate that Nvidia’s Hopper-based H100 costs between $25,000 and $40,000 per processor, with complete systems costing up to $200,000.

Nvidia Inference Microservice
Nvidia also announced a new product, NIM (Nvidia Inference Microservice), as an addition to its Nvidia AI Enterprise software subscription.

NIM makes it easier to use older Nvidia GPUs for inference, the process of running AI software, and will let businesses keep using the hundreds of millions of Nvidia GPUs they already own. Inference requires less computing power than the initial training of a new AI model. NIM lets organizations run their own AI models instead of buying AI output as a service from companies such as OpenAI.

The goal is to convince customers who buy Nvidia-based servers to sign up for Nvidia AI Enterprise, which costs $4,500 per GPU per year to license.

Nvidia will work with AI companies such as Microsoft and Hugging Face to ensure their AI models are tuned to run on all compatible Nvidia GPUs. Using a NIM, developers can then run the model efficiently on their own Nvidia servers, local or cloud-based, without a lengthy configuration process.

“In my code, where I was calling into OpenAI, I will replace one line of code to point it to this NIM that I got from Nvidia instead,” Das said.
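As a concrete illustration of that one-line swap, here is a minimal sketch assuming a NIM container is already running locally and exposes an OpenAI-compatible endpoint; the URL, API key placeholder, and model name below are hypothetical, not values from Nvidia:

```python
# Minimal sketch of redirecting existing OpenAI-client code to a local
# NIM endpoint, assuming the NIM serves an OpenAI-compatible API.
from openai import OpenAI

# Before: client = OpenAI()  # sends requests to api.openai.com
# After: point the same client at the locally hosted NIM instead.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM address
    api_key="not-needed-locally",         # placeholder; no OpenAI key required
)

response = client.chat.completions.create(
    model="example-nim-model",  # hypothetical model name served by the NIM
    messages=[{"role": "user", "content": "Hello from a local NIM."}],
)
print(response.choices[0].message.content)
```

The rest of the application code stays the same; only the client’s destination changes.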

Nvidia said the software will also allow AI to work on GPU-equipped laptops rather than cloud servers.





