By Jane Lanhee Lee and Chavi Mehta
(Reuters) – Nvidia Corp on Tuesday announced several new chips and technologies that it said will boost the computing speed of increasingly complicated artificial intelligence algorithms, stepping up competition against rival chipmakers vying for lucrative data center business.
Nvidia’s graphics chips (GPUs), which initially helped propel and enhance the quality of video in the gaming market, have become the dominant chips companies use for AI workloads. The latest GPU, called the H100, can cut computing times from weeks to days for some work involving training AI models, the company said.
The announcements were made at Nvidia’s AI developers conference online.
“Data centers are becoming AI factories — processing and refining mountains of data to produce intelligence,” said Nvidia Chief Executive Officer Jensen Huang in a statement, calling the H100 chip the “engine” of AI infrastructure.
Companies have been using AI and machine learning for everything from recommending the next video to watch to discovering new drugs, and the technology is becoming an increasingly important business tool.
The H100 chip, packing 80 billion transistors, will be produced on Taiwan Semiconductor Manufacturing Company’s cutting-edge four-nanometer process and will be available in the third quarter, Nvidia said.
The H100 will also be used to build Nvidia’s new “Eos” supercomputer, which Nvidia said will be the world’s fastest AI system when it begins operation later this year.
Facebook parent Meta announced in January that it would build the world’s fastest AI supercomputer this year and it would perform at nearly 5 exaflops. Nvidia on Tuesday said its supercomputer will run at over 18 exaflops.
Exaflop performance is the ability to perform 1 quintillion – or 1,000,000,000,000,000,000 – calculations per second.
Nvidia also introduced a new processor (CPU) called the Grace CPU Superchip that is based on Arm technology. It is the first new Arm-based chip Nvidia has announced since its deal to buy Arm Ltd fell apart last month due to regulatory hurdles.
The Grace CPU Superchip, which will be available in the first half of next year, connects two CPU chips and will focus on AI and other tasks that require intensive computing power.
More companies are connecting chips using technology that allows faster data flow between them. Earlier this month Apple Inc unveiled its M1 Ultra chip connecting two M1 Max chips.
Nvidia said the two CPU chips were connected using its NVLink-C2C technology, which was also unveiled on Tuesday.
Nvidia, which has been developing and growing its self-driving technology business, said it started shipping its autonomous vehicle computer “Drive Orin” this month, and that Chinese electric vehicle maker BYD Co Ltd and luxury electric car maker Lucid Motors would use Nvidia Drive for their next-generation fleets.
Danny Shapiro, Nvidia’s vice president for automotive, said there was $11 billion worth of automotive business in the “pipeline” over the next six years, up from the $8 billion it forecast last year. The anticipated revenue growth will come from hardware and from increased, recurring revenue from Nvidia software, Shapiro said.
Nvidia shares were relatively flat in midday trade.
(Reporting By Jane Lanhee Lee, additional reporting by Joseph White; Editing by Bernard Orr)