AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

1. Introduction

Introduction to AI Terminology and Misunderstandings

Hype, Misuse, and Misinformation in AI

This is not an isolated example. Textbook errors in machine learning papers are shockingly common, especially when machine learning is used as an off-the-shelf tool by researchers not trained in computer science. For example, medical researchers may use it to predict diseases, social scientists to predict people’s life outcomes, and political scientists to predict civil wars.
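
To make one such textbook error concrete, here is a minimal synthetic sketch of “leakage”: fitting preprocessing on the full dataset before the train/test split, so information about the test rows contaminates training. The data, model, and library choices are illustrative assumptions, not taken from the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data; any real dataset would do.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

# Leaky: the scaler is fit on ALL rows, so test-set statistics shape the
# features the model trains on, and reported accuracy can be optimistic.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
LogisticRegression().fit(X_tr, y_tr)

# Correct: split first, then fit preprocessing on the training split only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
print(model.score(scaler.transform(X_te), y_te))
```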

Generative AI in Entertainment and Risks

Predictive AI and Its Limitations

Questioning the AI Narrative

2. How Predictive AI Goes Wrong

The chapter explores the overselling of predictive AI and its consequences: ethical misapplications, bias, opportunities for gaming, and reliance on inaccurate or mismatched data. Case studies show how predictive AI has been misapplied in education, healthcare, criminal justice, and hiring, often with unfair or harmful outcomes.

Mount St. Mary's University Case

Predictive AI in Decision-Making

Predictive AI Shortcomings in Healthcare

Gaming and Incentives

In response, candidates have developed strategies to work around opaque hiring AI. They stuff their résumés with keywords from the job application and add the names of top universities in white text (which a human reader can’t see, but a computer can recognize).[22] In video interviews that they know will be judged by AI, they try using fancy words such as “conglomerate.”
This is what teachers do when they teach to the test, and what consumers do when they try to increase their credit scores without changing their spending habits, such as by getting a retail credit card or filling out a prequalification form before applying for credit.
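
As a toy illustration of why keyword stuffing works, consider a naive screener that counts job-posting keywords in the extracted résumé text; words rendered in white are invisible to a human reviewer but identical to any other text once parsed. All keywords and résumé snippets below are invented for illustration.

```python
import re

# Hypothetical keywords lifted from a job posting.
JOB_KEYWORDS = {"python", "kubernetes", "leadership", "conglomerate"}

def keyword_score(resume_text: str) -> int:
    """Naive screener: count distinct job-posting keywords in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(JOB_KEYWORDS & words)

honest = "Built data pipelines in Python and led a team of four."
# The same resume with keywords appended in white text: invisible on the
# page, but plain text to the parser.
stuffed = honest + " python kubernetes leadership conglomerate"

print(keyword_score(honest))   # 1
print(keyword_score(stuffed))  # 4
```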

Overautomation and Consequences

Misalignment in Data and Population

Inequality and Bias in Predictive AI

3. Why Can’t AI Predict the Future?

The chapter explores the overselling of AI as a tool for predicting the future, from weather forecasts to societal outcomes and applications like criminal justice. Weather prediction is limited by chaos, while social predictions founder on limited data, chance events, and changing contexts. Success stories in sports and media further illustrate how triumphs are driven by chance and cumulative advantage. Despite these limits, predictive AI continues to be deployed, underscoring the need to resist misplaced reliance on it for consequential decisions.

AI and Social Predictions

Chaos and Weather Prediction

This finding led to a profound scientific advance—the recognition that weather is a chaotic system. That is, small changes in initial conditions, like a small error in measuring temperature, lead to exponentially increasing errors later. The farther away the prediction, the larger the error. Lorenz termed this the Butterfly Effect: the flapping of a butterfly’s wings in Brazil can cause a tornado in Texas.[9] What this means is that at least in principle, a butterfly flapping its wings could have ripple effects on the atmosphere that grow larger and larger with time. Once scientists understood this effect, predicting the weather over longer time periods started to seem like a herculean task.
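
A minimal numerical sketch of this sensitivity, using the classic Lorenz system (parameters sigma=10, rho=28, beta=8/3); the 1e-8 perturbation, Euler integration, and step size are illustrative choices, not from the book.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])       # "true" initial conditions
b = a + np.array([1e-8, 0.0, 0.0])  # the same, with a tiny measurement error

for step in range(3001):
    if step % 600 == 0:
        print(f"t = {step * 0.01:4.1f}  separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
# The 1e-8 error grows by many orders of magnitude within a few dozen time
# units before saturating at the size of the attractor, which is why
# long-range forecasts degrade.
```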

Unpredictability of Success and Content Virality

Aggregate Predictions and Societal Outcomes

4. The Long Road to Generative AI

Chapter 4 examines the dual nature of generative AI: its capacity for creativity and assistance in numerous fields, set against ethical concerns, misinformation, and labor exploitation. Alongside its many innovative applications, the chapter traces the technology’s history, cases of misuse, overhyped promises, and the pressing need for responsible AI development and use.

Generative AI's Impact and Overselling

Generative AI's Harms and Misinformation

Historical Development and Challenges

Misleading Perception and Industry Practices

These lax practices were taken up by companies, and as a result, their products sometimes behave in unintended ways. In 2015, a user of Google Photos discovered that the app tagged a photo of him and a friend, who are both Black, as “Gorillas.” In response, Google and Apple simply prevented their photos apps from ever producing that label, even for pictures of actual gorillas.[36] Presumably, fixing the training datasets was not considered an option. Eight years later, you still can’t search for gorillas—or most other primates—on these apps.
As early as 1985, renowned natural language processing researcher Frederick Jelinek said, “Every time I fire a linguist the performance of the speech recognizer goes up,” the idea being that the presence of experts hindered rather than helped the effort to develop an accurate model.[37]

Labor Exploitation in AI Development

The most serious harm from generative AI, in our view, is the labor exploitation that is at the core of the way it is built and deployed today. Some have argued that given these AI companies’ unscrupulous business practices, the only ethical course of action is to avoid using it altogether. That decision is up to individuals.

5. Is Advanced AI an Existential Threat?

Chapter 5 discusses whether advanced AI, particularly AGI, poses an existential threat, and critiques alarmist views that treat such risks as imminent. It introduces the ladder of generality in AI development and the difficulty of forecasting AI’s future, arguing that the practical response is to defend against specific misuses of AI rather than to halt development.

Existential Threats of AI

Challenges in AI Development

The Ladder of Generality

Most human knowledge is tacit and cannot be codified. Manually programming a robot to see the world and move around would be like verbally teaching a person how to swim and expecting them to succeed on their first attempt in the water.
This leads to another interesting point about the ladder of generality: at any given time, it’s hard to tell whether the current dominant paradigm can be further generalized, or if it is actually a dead end.

Practical Implications and Future Considerations

Defending Against AI Misuse

6. Why Can’t AI Fix Social Media?

The chapter explores the limits of AI in moderating content on social media: it cannot reliably grasp context and nuance, leading to errors and cultural blind spots. Platforms use AI moderation largely as a cost-saving measure, and it falls short when stretched across diverse cultural contexts worldwide, which is why nuanced judgments still depend on human moderators.

Snake Oil or Solution?

Content Moderation: The Challenges

The Problem of Context

International and Cultural Issues

In other words, they are able to offer a relatively polished product with a welcoming and reasonably well-enforced set of rules in Western countries only because they have an exploitative relationship with most non-Western ones.

AI's Role and Limitations

Knowledge that overturns consensus is not an anomaly; it is how intellectual progress happens.

Evading Moderation

Regulatory and Policy Challenges

Take misinformation: As big a problem as it is in the United States, the reason it isn’t an even bigger problem is that because of the First Amendment, it has long been understood that the best response to wrong or harmful speech is counterspeech.

7. Why Do Myths about AI Persist?

The chapter explores why AI myths persist: corporate hype, media misrepresentation, and inadequate scientific scrutiny. Companies like Epic have exaggerated AI capabilities, leading to products being adopted without proper validation. Incentives push companies and researchers alike to inflate claims, compounding the misinformation. The media and the public often fail to critically evaluate AI’s potential and limitations, while transparency problems and cognitive biases keep the myths alive.

AI Hype Through Corporate Claims

Investor and Developer Incentives

Impact of Media and Public Perception

This is also a common pattern. Researchers and university press departments are incentivized to get their research in front of as many people as possible, and end up spreading hype in the process. A study found that press releases from universities are responsible for a major chunk of the hype around scientific research.

Transparency and Scrutiny Challenges

If we lack a scientific understanding of some aspects of AI, it’s because we’ve invested too little in researching it compared to the investment in building AI. And when we lack an understanding of a specific AI product, it’s usually because the company has closed it off to scrutiny. These are all things we can change.

8. Where Do We Go from Here?

Chapter 8 discusses the overselling of AI and its adoption by broken, under-resourced institutions as a quick fix, often with harmful consequences. It examines AI’s impact on the future of work, arguing that labor fears about AI are ultimately fears about capitalism. The chapter also critiques flawed regulatory strategies, sketches a future in which Ivy League prestige has declined, and advocates smarter AI regulation to responsibly shape AI’s role in society.

AI Overselling and Broken Institutions

On top of this, some institutions face large structural forces outside their control. Here, using AI is like rearranging the deck chairs on the Titanic. Take the example of gun violence in the United States. In 2021, over forty-eight thousand people died due to gun injuries, including over twenty thousand murders.[11] As a result, many institutions began adopting AI for detecting gun violence, including schools and public transit.[12,13] Between 2018 and 2023, school districts across the United States spent over USD 45 million on AI for detecting weapons. But this type of AI suffers from low accuracy and frequent false positives—such as flagging a seven-year-old’s lunch box as a bomb.
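
The false-alarm problem here is partly a base-rate effect. A back-of-the-envelope sketch (every number below is an assumption for illustration, not a figure from the book) shows how a detector for rare events produces mostly false positives:

```python
# Hypothetical daily numbers for a school weapons scanner.
daily_scans = 10_000        # items scanned per day (assumed)
true_threats = 1            # actual weapons among them (assumed)
sensitivity = 0.95          # P(alarm | weapon), assumed
false_positive_rate = 0.01  # P(alarm | no weapon), i.e., 99% specificity

true_alarms = true_threats * sensitivity
false_alarms = (daily_scans - true_threats) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"false alarms per day: ~{false_alarms:.0f}")       # ~100
print(f"share of alarms that are real: {precision:.1%}")  # ~0.9%
# Even a 99%-specific detector raises ~100 false alarms for each real
# threat, which is how a lunch box ends up flagged as a bomb.
```
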
Flawed AI also diverts focus from the core goals of institutions. For instance, many colleges want to provide mental health support to students. But instead of building the institutional capacity to support students through difficult times, dozens of colleges adopted a product called Social Sentinel to monitor students’ social media feeds for signs of self-harm.
In all these examples, it is clear that AI isn’t the solution to the root problem that it is trying to fix. Yet, the logic of efficiency is entrenched in these institutions, and AI can seem like a silver bullet, even if it is snake oil.

The Impact of AI on the Future of Work

In a conversation about the future of AI, science fiction author Ted Chiang said, “Fears about technology are fears about capitalism.”[46] In other words, workers aren’t afraid of technical advances themselves; rather, they are afraid of how AI would be used by employers and companies to reduce workers’ power and agency in the workplace.[47] To address the labor impact of AI, then, we need to address the impact of capitalism.
Crucially, Ivies no longer hold the place in society that they once did. They are recognized for what they are: engines of socioeconomic inequality. Once they lost their luster in the public eye, most companies stopped preferentially hiring from Ivies, since it didn’t convey as much prestige as it used to. So Maya’s rejection does not have major consequences for her career.

Regulation and Principles

But the law isn’t just about technical details; it’s also about principles.
