Nvidia Corp. Chief Executive Officer Jensen Huang said the future of artificial intelligence lies in services that can “reason,” but reaching that stage requires the cost of computing to come down first.
Nvidia will set the stage for these advances by boosting its chip performance two to three times every year, at the same level of cost and energy consumption, Huang said. This will transform the way AI systems handle inference — the ability to spot patterns and draw conclusions.
“We’re able to drive incredible cost reduction for intelligence,” he said. “We all realize the value of this. If we can drive down the cost tremendously, we could do things at inference time like reasoning.”
The Santa Clara, California-based company has more than 90% of the market for so-called accelerator chips — processors that speed up AI work. It has also branched out to selling computers, software, AI models, networking and other services — part of a push to get more companies to embrace artificial intelligence.
Nvidia is facing attempts to loosen its grip on the market. Data center operators such as Amazon.com Inc.’s AWS and Microsoft Corp. are developing in-house alternatives. And Advanced Micro Devices Inc., already an Nvidia rival in gaming chips, has emerged as an AI contender. AMD plans to share the latest on its artificial intelligence products at an event Thursday.