AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference [Book Excerpt]
From the new book AI SNAKE OIL, by Arvind Narayanan and Sayash Kapoor
“AI Snake Oil” is no anti-technology screed. The authors acknowledge the potential of some types of artificial intelligence, explain why AI isn’t an existential risk, and point out that the danger doesn’t come from the technology, but from those who use it. At the same time, they are frank about the harms AI is already causing and the disproportionate control of AI by mostly unaccountable big tech firms.
“By equipping readers with a basic understanding of the different flavors of AI,” Narayanan says, “we hope to make it easier to navigate the claims about AI developments that we encounter every day.”
IMAGINE AN ALTERNATE universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.
Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.
Artificial intelligence, AI for short, is an umbrella term for a set of loosely related technologies. ChatGPT has little in common with, say, software that banks use to evaluate loan applicants. Both are referred to as AI, but in all the ways that matter—how they work, what they’re used for and by whom, and how they fail—they couldn’t be more different.
Chatbots, as well as image generators like Dall-E, Stable Diffusion, and Midjourney, fall under the banner of what’s called generative AI. Generative AI can generate many types of content in seconds: chatbots generate often-realistic answers to human prompts, and image generators produce photorealistic images matching almost any description, say “a cow in a kitchen wearing a pink sweater.” Other apps can generate speech or even music.
Generative AI technology has been rapidly advancing, its progress genuine and remarkable. But as a product, it is still immature, unreliable, and prone to misuse. At the same time, its popularization has been accompanied by hype, fear, and misinformation.
In contrast to generative AI is predictive AI, which makes predictions about the future in order to guide decision-making in the present. In policing, AI might predict “How many crimes will occur tomorrow in this area?” In inventory management, “How likely is this piece of machinery to fail in the next month?” In hiring, “How well will this candidate perform if hired for this job?”
Predictive AI is currently used by both companies and governments, but that doesn’t mean it works. It’s hard to predict the future, and AI doesn’t change this fact. Sure, AI can be used to pore over data to identify broad statistical patterns—for instance, people who have jobs are more likely to pay back loans—and that can be useful. The problem is that predictive AI is often sold as far more than that, and it is used to make decisions about people’s lives and careers. It is in this arena that most AI snake oil is concentrated.
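To make that contrast concrete, here is a minimal sketch, not from the book, of the kind of "broad statistical pattern" a predictive system learns. It uses Python with scikit-learn; the data, feature names, and numbers are invented purely for illustration.

```python
# Toy illustration: a model that learns a coarse pattern such as
# "employed applicants are more likely to repay loans."
# All data here is synthetic and hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [is_employed, income_in_thousands]
X = [[1, 60], [1, 45], [0, 20], [0, 15], [1, 80], [0, 30]]
y = [1, 1, 0, 0, 1, 0]  # 1 = repaid the loan, 0 = defaulted

model = LogisticRegression().fit(X, y)

# The model reproduces the broad pattern in the training data...
print(model.predict_proba([[1, 50]]))  # employed applicant
print(model.predict_proba([[0, 50]]))  # unemployed applicant
# ...but a population-level correlation like this says little about
# what any particular person will actually do in the future.
```

The sketch works as a statistical summary of past data; the trouble the authors describe begins when outputs like these are marketed as reliable predictions about individual lives.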
AI snake oil is AI that does not and cannot work as advertised. Since AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil. This is a major societal problem: we need to be able to separate the wheat from the chaff if we are to make full use of what AI has to offer while protecting ourselves from its possible harms, many of which are already occurring.
• • •
Excerpted from AI SNAKE OIL: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Copyright © 2024 by Arvind Narayanan and Sayash Kapoor. Reprinted by permission of Princeton University Press.