Prepare for Artificial Intelligence to Produce Less Wizardry

A new paper argues that the computing demands of deep learning are so great that progress on tasks like translation and self-driving is likely to slow.
AI produces dazzling achievements such as defeating the best human players at Go, but progress is imperiled by the enormous compute demands. Photograph: Xu Yu/Xinhua News Agency/Getty Images

Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years—as well as additional data, including local weather, traffic conditions, and competitors’ actions—the company cut the number of errors by three-quarters.

It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘Well, it’s not worth it to us to roll it out in a big way, unless cloud computing costs come down or the algorithms become more efficient,’” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever-more computing resources at the problems.

In a new research paper, Thompson and colleagues argue that it is, or will soon be, impossible to increase computing power at the same rate in order to continue these advances. This could jeopardize further progress in areas like computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the deep-learning boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry.

Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.

Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. “Slower automation might sound good from a jobs perspective,” he adds, but it will also slow gains in productivity, which are key to raising living standards.

In their study, Thompson and his coauthors looked at more than 1,000 AI research papers outlining new algorithms. Not all of the papers detailed the computational requirements, but enough did to map out the cost of progress. The history suggested that making further advances in the same way will be all but impossible.

Improving the performance of an English-to-French machine-translation algorithm so that it makes mistakes only 10 percent of the time instead of the current 50 percent, for example, would require a billion billion times as much computing power if the gains came from more computation alone. The paper was posted to arXiv, a preprint server; it has yet to be peer-reviewed or published in a journal.
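The arithmetic behind that kind of extrapolation is easy to sketch. As an illustration only, suppose the error rate falls as a power law of the compute spent on training; then the compute needed to reach a target error rate grows explosively as the scaling exponent shrinks. The exponent in the snippet below is a hypothetical placeholder, not a value fitted in Thompson's paper.

```python
# A minimal sketch of a power-law extrapolation of compute versus error rate.
# Assumption: error rate ~ compute^(-alpha). The exponent is illustrative only.

def compute_multiplier(current_error: float, target_error: float, alpha: float) -> float:
    """How many times more compute is needed to move from current_error to
    target_error, assuming error falls as compute to the power -alpha."""
    return (current_error / target_error) ** (1.0 / alpha)

if __name__ == "__main__":
    alpha = 0.04  # hypothetical scaling exponent, for illustration only
    factor = compute_multiplier(current_error=0.50, target_error=0.10, alpha=alpha)
    print(f"Roughly {factor:.1e}x more compute under this assumed exponent")
```

With an exponent of 0.04, the multiplier comes out near 3 × 10^17, in the same ballpark as "a billion billion"; small changes to the exponent swing the answer by orders of magnitude, which is why the researchers ground their estimates in the scaling observed across published results rather than any single figure.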

“We’ve already hit this wall,” says Thompson. In some recent talks and papers, he says, researchers working on particularly large and cutting-edge AI projects have begun to complain that they cannot test more than one algorithm design, or rerun an experiment, because the cost is so high.

To be sure, the idea that AI is nearing some limit could be upset by more powerful chips and more efficient software. Advances stemming from miniaturization of chip components continue despite the challenges of atomic-scale manufacturing. Meanwhile, specialized new AI chips can run deep-learning calculations more efficiently.

But Thompson says hardware improvements are unlikely to offset the striking rise in compute needed for cutting-edge advances in areas like self-driving cars and real-time voice translation. “There have been substantial improvements in algorithms, and of course lots of improvement in hardware,” he says, “but despite that there's been this huge escalation in the amount of computing power.”

This enormous rise in computation for AI also comes at an environmental cost, although in practice it can be difficult to measure the emissions produced by a project without details on the efficiency of the computers involved. One recent study suggests that the energy consumption of data centers has grown little over the past decade due to efficiency improvements.

Sasha Luccioni, a postdoctoral researcher at the University of Montreal studying the environmental impact of AI, agrees that the field is using more computer power, and she says researchers can take steps to reduce the need for massive amounts of computation. She says it is important to choose cloud infrastructure and chip hardware carefully, consider an algorithm’s efficiency, and disclose both the computations and the emissions involved with a project.
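As a rough illustration of what such a disclosure might involve, a common back-of-envelope estimate multiplies chip power draw, runtime, and data-center overhead, then converts the resulting energy into emissions using the local grid's carbon intensity. Every figure below is a placeholder assumption, not data from any real project.

```python
# A back-of-envelope sketch of the kind of emissions accounting described above.
# All numbers are placeholder assumptions, not measurements from a real training run.

def training_emissions_kg(chips: int, watts_per_chip: float, hours: float,
                          pue: float, kg_co2_per_kwh: float) -> float:
    """Estimate training emissions: chip energy, scaled by data-center overhead
    (PUE), converted to CO2 via the grid's average carbon intensity."""
    energy_kwh = chips * watts_per_chip * hours / 1000.0 * pue
    return energy_kwh * kg_co2_per_kwh

if __name__ == "__main__":
    # Hypothetical run: 1,000 accelerators drawing 300 W each for one week,
    # a data-center overhead (PUE) of 1.1, and a grid averaging 0.4 kg CO2 per kWh.
    kg = training_emissions_kg(chips=1000, watts_per_chip=300.0, hours=24 * 7,
                               pue=1.1, kg_co2_per_kwh=0.4)
    print(f"Estimated emissions: {kg:,.0f} kg CO2")
```

The same hypothetical run on a low-carbon grid would cut that figure several-fold, which is one reason the choice of cloud region and hardware that Luccioni highlights matters as much as the raw amount of computation.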

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, has previously called for less environmentally harmful forms of AI. “Compute power is an important ingredient in the recent success of AI,” he says. “But we are continually pushing the envelope in increasing efficiency.”

Ultimately, Thompson is hopeful that improved deep-learning approaches won’t just consume less computing power; they could also open up new uses. “Finding these new techniques won’t be easy,” he says, “but if we do find some broadly applicable ones, it will probably generate another wave of applications.”

Updated, 7-14-20, 10:45am ET: This article has been updated to reflect the publication of the paper on arXiv.

