In an August 2021 article, The Economist examined the role of Nvidia in the current AI Spring. The writers signaled their central idea in the title: “Will Nvidia’s huge bet on artificial-intelligence chips pay off?”
A fair number of people don’t know much about the role of graphics-processing hardware in the success of neural networks. A neural network is a collection of algorithms, but to crunch through the massive quantities of training data required by many AI systems — and get it done in a reasonable timeframe, instead of, say, years — you need both speed and parallelism. The term for this kind of computer-chip technology is accelerated computing, and Nvidia is the market leader.
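The payoff from parallelism can be sketched in plain Python with NumPy — an illustrative example, not from the article. The naive triple loop below performs one multiply-add at a time, roughly how a single slow core would; the vectorized call hands the same matrix multiplication to optimized, parallel-friendly routines, which is the kind of advantage GPUs push much further:

```python
import time
import numpy as np

# Two modest random matrices; real training workloads are far larger.
n = 120
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_loops(a, b):
    """Naive matrix multiply: one scalar multiply-add at a time."""
    n = a.shape[0]
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            out[i, j] = s
    return out

start = time.perf_counter()
slow = matmul_loops(a, b)
loop_time = time.perf_counter() - start

start = time.perf_counter()
fast = a @ b  # vectorized: optimized routines do many operations at once
vec_time = time.perf_counter() - start

print(f"loops: {loop_time:.3f}s, vectorized: {vec_time:.5f}s")
```

Even on a laptop CPU, the vectorized version is orders of magnitude faster; a GPU extends the same idea with thousands of parallel arithmetic units, which is why training times drop from years to days.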
Nvidia has ridden this wave to a current market value of $505 billion, according to the article. Five years ago, it was $31 billion. Nvidia designs the semiconductors for which it is famous (like most chip firms today, it outsources the actual fabrication to foundries). The original purpose of these chips was to run the graphics in modern computer games — the ones where characters race through immense, detailed 3D worlds. About half of Nvidia’s revenue still comes from chips designed for running game software.
“Huge, real-time models like those used for speech recognition or content recommendation increasingly need specialized GPUs to perform well,” says Ian Buck, head of Nvidia’s accelerated-computing business. — Will Nvidia’s huge bet on artificial-intelligence chips pay off?
So what’s the “huge bet”? Nvidia is in the midst of acquiring Arm, a designer of other kinds of fast chips, which also have the appeal of being energy efficient. The deal may or may not go through — there are European and U.K. regulatory hurdles to clear (Arm is based in the U.K.). Essentially, Nvidia seeks to expand its microprocessor repertoire. The article discusses the competition among chip firms such as Intel and Advanced Micro Devices (AMD) — and increasingly, the biggest tech firms (e.g., Google and Amazon/AWS) are getting into the chip-design business as well.
The Economist also produced a podcast episode about Nvidia and GPUs around the same time it published the article summarized above: Shall we play a game? How video games transformed AI (38 min.). It provides a friendly, low-stress introduction to neural networks and deep learning, going back to the perceptron, and covering the dominance of symbolic systems in AI research until the late 1980s. That’s the first 10 minutes. Then video games come into focus, and how much technology innovation has come from computer game development. A rough guide to the rest:

- Around 13:00 — the difference between CPUs and GPUs, with details about Nvidia’s programmable GPUs.
- Around 20:00 — initial resistance (from research scientists) to using GPUs for serious AI work, and skepticism toward neural networks in the early 2000s.
- Andrew Ng’s group at Stanford demonstrates dramatic reductions in training time, using Nvidia GPUs.
- The ImageNet challenge, AlexNet, and the new rise of neural networks.
- The final minutes — Nvidia’s future, chip technologies, and stock prices.
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.