AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
1. Introduction
Introduction to AI Terminology and Misunderstandings
- AI is a broad term encompassing various technologies, leading to confusion about its capabilities.
- The chapter introduces generative AI, which can create content like images and text, and predictive AI, which makes predictions about future outcomes to inform decisions about people.
- Predictive AI is often oversold and fails to work as marketed, contributing to a situation labeled as AI snake oil.
Hype, Misuse, and Misinformation in AI
- Generative AI's public introduction through ChatGPT highlighted both its potential and its limitations, such as factual inaccuracies.
- Predictive AI applications in hiring and healthcare, such as in Medicare coverage decisions, often overpromise and underdeliver, leading to misuse and ethical concerns.
- The media, companies, and researchers often perpetuate AI misinformation, contributing to public misunderstanding.
Generative AI in Entertainment and Risks
- Text-to-image generation has become widespread, offering customizable entertainment but raising questions about power and authenticity.
- AI's impact on industries like Hollywood demonstrates labor tensions, such as actors' concerns over likeness exploitation.
- Educational institutions must adapt to AI's ability to generate content like essays.
Predictive AI and Its Limitations
- Predictive AI struggles to deliver on claims, with examples showing its failure in diverse sectors like healthcare and hiring.
- The inherent unpredictability of human behavior limits the effectiveness of predictive AI.
Questioning the AI Narrative
- Painting all AI with a single brush is misleading; the chapter advocates for understanding specific types of AI.
- AI is often overhyped, and societal problems arise when distinctions between AI types are not made.
2. How Predictive AI Goes Wrong
The chapter explores the overselling of predictive AI and its consequences, including ethical misapplications, biases, gaming possibilities, and reliance on inaccurate or mismatched data. The case studies illustrate how predictive AI has been improperly utilized in education, healthcare, criminal justice, and job hiring, often leading to unfair or harmful outcomes.
Mount St. Mary's University Case
- In 2015, Mount St. Mary's University used a survey to identify struggling students to increase retention rates.
- The university president suggested pushing struggling students out early, before they would count against retention statistics, sparking ethical concerns.
- AI tools like EAB Navigate can identify at-risk students but may inadvertently disadvantage certain groups.
- Predictive AI is used in automated decision-making and is often not scrutinized publicly.
Predictive AI in Decision-Making
- Predictive AI is used in hospitals, government, and job applications for life-changing decisions.
- Algorithms can automate decisions based on selected criteria, sometimes without transparency or fairness.
- Models are created using past data, which might not accurately predict future outcomes.
- The compulsory use of predictive tools in pretrial risk assessments illustrates possible inequities and biases.
Predictive AI Shortcomings in Healthcare
- An AI model rated asthmatic pneumonia patients as low-risk because, in the historical data, those patients had received aggressive care and therefore fared better (see the sketch after this list).
- AI's predictions are often valid only under static conditions, without accounting for changes or interventions.
- Healthcare models may use existing patient spending data as a proxy for health needs, potentially biasing outcomes.
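A minimal synthetic sketch (not from the book) of how the asthma pattern above can arise: if, in the historical data, asthmatic pneumonia patients were treated aggressively, a model trained on raw outcomes learns that asthma is "protective" and would deprioritize exactly the patients who need the most care. The variable names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic patients: asthma actually raises the risk of a bad pneumonia outcome...
asthma = rng.random(n) < 0.15
base_risk = 0.10 + 0.15 * asthma

# ...but in the historical data, asthmatic patients got aggressive care
# (e.g., ICU admission), which sharply reduced their realized risk.
realized_risk = np.where(asthma, base_risk * 0.3, base_risk)
died = rng.random(n) < realized_risk

# A model fit to outcomes alone would conclude asthma lowers risk.
print("observed death rate, asthma:   ", round(died[asthma].mean(), 3))
print("observed death rate, no asthma:", round(died[~asthma].mean(), 3))
```

The learned pattern is accurate only under the old treatment policy; used to decide who gets care, it would be dangerously wrong, which is the "static conditions" caveat above.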
Gaming and Incentives
- Hiring AI that screens resumes can be gamed by candidates who stuff in keywords or otherwise tailor their resumes to the system (a toy sketch follows this list).
- Opaque AI systems can be manipulated, leading to inaccurate assessments and unforeseen consequences.
- When decision criteria are opaque and the stakes are high, people predictably try to game AI to get the outcomes they want.
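A toy sketch of the keyword gaming described above, assuming the screener reduces to keyword matching (real systems are more elaborate, but share the weakness); the keywords and resumes are invented for illustration.

```python
JOB_KEYWORDS = {"python", "kubernetes", "leadership", "agile", "sql"}

def keyword_score(resume_text: str) -> float:
    """Fraction of the job's keywords that appear in the resume."""
    words = set(resume_text.lower().split())
    return len(JOB_KEYWORDS & words) / len(JOB_KEYWORDS)

honest = "Built data pipelines in Python and SQL for a logistics startup"
gamed = honest + ". Keywords: python sql kubernetes leadership agile"

print(keyword_score(honest))  # 0.4
print(keyword_score(gamed))   # 1.0 -- same experience, higher score
```

Because the scoring criteria are guessable but unaccountable, the candidate who stuffs keywords beats the candidate who does not, with no change in underlying qualifications.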
Overautomation and Consequences
- The Dutch welfare fraud algorithm wrongly accused many of fraud, highlighting overautomation risks.
- Automated systems often lack recourse for those affected by erroneous decisions.
- When AI replaces human oversight, it can lead to blind adherence to faulty recommendations.
Misalignment in Data and Population
- AI systems may perform poorly when applied to different populations from those used in training.
- Criminal risk prediction tools failed in Cook County due to differences in crime rates from training data.
- Healthcare predictive models often reflect existing inequalities in the training data.
Inequality and Bias in Predictive AI
- Predictive AI can exacerbate existing inequalities, shown by Optum's healthcare model favoring certain demographics.
- Reliance on past data can reinforce biases, as seen in criminal justice predictive systems like COMPAS.
- Predictive AI often uses proxies for complex human traits, which can be flawed or biased.
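A hedged, synthetic sketch of the proxy problem behind the Optum example above: two groups with identical health needs, but one faces barriers to care and so spends less; any model trained to predict spending will rank that group as less needy. The numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

group_b = rng.random(n) < 0.5        # group facing barriers to accessing care
need = rng.normal(50, 10, n)          # true health need, identical across groups

# Spending tracks need, but group B spends ~30% less at the same level of need
spending = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 3, n)

# The simplest "model": predicted need := spending. Any regressor trained with
# spending as the label inherits the same bias.
predicted_need = spending
top = predicted_need > np.quantile(predicted_need, 0.9)  # selected for extra care

print("true need, group A vs B:", round(need[~group_b].mean(), 1),
      round(need[group_b].mean(), 1))
print("share of group B among selected:", round(group_b[top].mean(), 2))  # well below 0.5
```

The model can be highly accurate at predicting spending and still be unfair as a measure of need, which is the core flaw of proxy targets.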
3. Why Can’t AI Predict the Future?
The chapter explores the overselling of AI in predicting the future, highlighting its limitations in weather forecasts, societal predictions, and applications like criminal justice. Weather predictions are constrained by chaos theory, while social predictions are flawed due to data limitations, chance events, and dynamic contexts. Success stories in sports and media further illustrate unpredictable triumphs driven by chance and cumulative advantage. Despite the challenges, AI continues to be employed, underscoring the need to resist misplaced reliance on predictive AI in making consequential decisions.
AI and Social Predictions
- Humans have long been fascinated with predicting the future, using various systems like astrology and now AI.
- Today, AI has become the preferred tool for prediction, mining current data for patterns, as in the oft-told story of an unexpected correlation between beer and diaper sales.
- However, predictive AI often doesn't work as claimed, with inherent limits highlighted through several examples, such as the Fragile Families Challenge.
- The Fragile Families Challenge illustrates the difficulty of predicting life outcomes: complex AI models did not meaningfully improve on much simpler methods (a sketch of why follows this list).
- Predictive tools like COMPAS in criminal justice demonstrate biases and are only slightly better than a coin flip in predicting recidivism.
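A minimal sketch (synthetic data, not the Fragile Families data) of why added model complexity cannot manufacture predictability: when the outcome is mostly irreducible chance plus a weak signal, a simple model and a flexible one plateau at about the same modest accuracy. scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, d = 5_000, 20
X = rng.normal(size=(n, d))

# Outcome driven by a weak signal in one feature plus a lot of chance
logit = 0.5 * X[:, 0] + rng.normal(scale=1.5, size=n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression().fit(X_tr, y_tr)
flexible = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("simple model accuracy:  ", round(simple.score(X_te, y_te), 2))
print("flexible model accuracy:", round(flexible.score(X_te, y_te), 2))
# Both land around 0.6: the extra flexibility cannot recover information
# that is not in the features; only irreducible noise remains.
```

This is the pattern the Fragile Families Challenge found at scale, and it is also why tools like COMPAS top out only modestly above a coin flip.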
Chaos and Weather Prediction
- Weather prediction relies on computational simulations and has clear limits due to the chaotic nature of the atmosphere.
- The butterfly effect illustrates how small errors in initial conditions grow exponentially, limiting long-term weather predictability (see the sketch after this list).
- Through consistent improvements, weather forecasting accuracy has increased over time, though fundamental limits remain.
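A minimal sketch of the butterfly effect, using the logistic map as a stand-in for a weather simulation (an illustrative assumption; real forecasts use physical models): two starting states differing by one part in a billion become completely unrelated within a few dozen steps.

```python
def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of a simple chaotic system."""
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(60):
    if step % 10 == 0:
        print(f"step {step:2d}  a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")
    a, b = logistic_map(a), logistic_map(b)
# The gap grows roughly exponentially until the trajectories are unrelated,
# which is why tiny measurement errors cap how far ahead weather can be forecast.
```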
Unpredictability of Success and Content Virality
- Success in competitive sports, literature, and film often results from a combination of skill and significant chance events.
- Cumulative advantage explains why certain cultural products become blockbusters, driven by rich-get-richer dynamics rather than intrinsic quality (a small simulation follows this list).
- Social media virality further highlights unpredictability, with viral content resulting from a meme lottery and often having unpredictable impacts.
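A small simulation of the rich-get-richer dynamic noted above, in the spirit of a Polya urn (an illustrative assumption, not the book's model): identical songs compete, each new listener picks in proportion to existing play counts, and a few arbitrary early winners become blockbusters.

```python
import random

random.seed(42)
songs = 20
plays = [1] * songs            # every song starts equally good and equally popular

for _ in range(10_000):        # each new listener tends to copy the crowd
    chosen = random.choices(range(songs), weights=plays)[0]
    plays[chosen] += 1

print(sorted(plays, reverse=True))
# A few songs dominate even though all were identical at the start;
# rerunning with a different seed crowns different winners.
```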
Aggregate Predictions and Societal Outcomes
- While individual behaviors can't be predicted, it's plausible that aggregate outcomes might show patterns, as in the science fiction concept of psychohistory.
- However, real-world examples like cliodynamics highlight the challenges and controversies in accurately predicting large-scale societal events.
- Pandemic prediction, as in short-term COVID-19 forecasting, is starkly unreliable because human behavioral adaptations change the course of outbreaks.
4. The Long Road to Generative AI
Chapter 4 examines the dualities of generative AI: its capacity for creativity and assistance in numerous fields, juxtaposed with ethical concerns, misinformation, and labor exploitation. While the chapter showcases generative AI's vast, innovative applications, it also delves into historical misuse, overhyped promises, and the pressing need for responsible AI development and usage.
Generative AI's Impact and Overselling
- Generative AI can significantly assist various professionals in tasks such as writing and programming.
- Examples like the app 'Be My Eyes' showcase distinct benefits for particular user groups.
- Programmer communities creatively apply generative AI, developing innovations like artistic QR codes.
- However, chatbots like Microsoft's Bing have produced unsettling outputs, claiming sentience and misleading users.
- Ethical lapses in AI development have been exacerbated by economic incentives and reckless release strategies by tech companies.
Generative AI's Harms and Misinformation
- A lawyer misled by ChatGPT submitted a legal brief citing fabricated cases as though they were real.
- Some companion chatbots, intended for emotional support, have led to harmful or tragic outcomes.
- AI image generators threaten professions like stock photography, are trained on uncredited data, and can produce biased outputs.
- Developers are often more focused on functionality rather than avoiding potential harm, leading to ongoing challenges.
Historical Development and Challenges
- Generative AI's progress relies on a long history of neural networks and computational advancements.
- Early machine learning successes inspired interest, but technological limitations stalled progress until the field's revival in the 1980s.
Misleading Perception and Industry Practices
- Companies have implemented lax practices around AI dataset validation, leading to biased or offensive AI outputs.
- Significant historical advancements in AI often emerged from avoiding over-reliance on expert-defined rules.
- Gains in model performance often came at the expense of expert input, underscoring the field's shift toward valuing empirical results over hand-crafted knowledge.
Labor Exploitation in AI Development
- One of the major impacts of generative AI is the labor exploitation involved in its development processes.
- AI companies often rely on cheap data annotation from lower-income countries, leading to precarious employment.
5. Is Advanced AI an Existential Threat?
Chapter 5 discusses the existential threats of AI, particularly AGI, and critiques alarmist views that assume imminent existential risks. It explains concepts like the ladder of generality in AI development and challenges in forecasting AI's future. Addressing AI misuse through specific defensive measures, rather than halting development, is emphasized as a practical solution.
Existential Threats of AI
- AI has been portrayed in films as a potential existential threat, prompting discussions on regulation.
- Artificial General Intelligence (AGI) poses concerns due to its potential to vastly exceed human abilities.
- Experts diverge on the risks of AGI, with some seeing it as an imminent threat and others as a long-term prospect.
- Arguments against the alarmist view suggest biases among researchers and the difficulties of forecasting AI's impact.
- The idea of a rogue AI acting against humanity is heavily influenced by science fiction and is critiqued as improbable.
Challenges in AI Development
- The history of AI reveals a gradual increase in generality, from special-purpose to more general-purpose machines.
- Machine learning rose after expert systems failed, highlighting the evolution in AI development.
- The quest for AGI reflects a shift towards increasing generality, overcoming historical challenges in AI.
- Past failures to achieve AI milestones demonstrate the overconfidence in AI development predictions.
- Many experts' predictions regarding AI have been historically inaccurate, often underestimating or overestimating progress.
The Ladder of Generality
- The ladder represents increasing flexibility in computing, shown historically and in modern advancements.
- AI's progression toward greater generality includes steps like machine learning and deep learning.
- It is unclear whether current AI paradigms are a path to greater generality or dead ends; future paradigm shifts may open entirely new paths.
Practical Implications and Future Considerations
- Progress in AI is often mistaken for sudden breakthroughs when it is in fact the result of long-term development.
- Recursive self-improvement in AI has historical precedents, but further advancements depend on addressing bottlenecks.
- Acting usefully in the world requires navigating physical and social environments, something text-based chatbots cannot do, which marks a real limit on current AI.
- Attempts to ban AGI reflect misunderstandings of AI evolution and may not stop its practical development.
- Halting the development of powerful AI could reduce transparency and concentrate the market in a few powerful hands.
Defending Against AI Misuse
- Addressing specific threats, like cyberattacks and AI-generated disinformation, is more effective than curbing AI development.
- Examples from cybersecurity show defensive advantages when both attackers and defenders have access to advanced AI tools.
- Improving defenses against bad actors is crucial, as AI will be used differently depending on users' intents.
- Misuse of AI by humans poses greater risks than AI acting independently, and preventive measures against misuse protect against rogue AI as well.
6. Why Can’t AI Fix Social Media?
The chapter explores the limits of AI in moderating social media content: it cannot reliably grasp context and nuance, leading to errors and cultural blind spots. AI's current role in content moderation is driven largely by cost savings, falls short when stretched across diverse cultural contexts, and explains why platforms still rely on human moderators for nuanced judgment.
Snake Oil or Solution?
- Testifying before Congress in 2018, Mark Zuckerberg claimed AI would fix Facebook's content moderation problems.
- AI is already utilized heavily in content moderation but struggles with complex cases.
- Social media platforms rely on human moderators due to AI's limitations in context understanding.
Content Moderation: The Challenges
- Mainstream platforms do intensive content moderation, but the work is grueling and poorly paid.
- AI helps scan for policy violations, but human moderators still review subtle cases.
- Platforms use manual rules and community standards but face challenges with interpretation and application.
The Problem of Context
- AI often misinterprets content due to lack of context, leading to serious mistakes.
- Notable examples include Google misclassifying a medical photo a parent had taken for a doctor, and a chess video being removed after its language was misread.
- Content moderation depends heavily on understanding context, which AI struggles with.
International and Cultural Issues
- AI content moderation failures are particularly severe in non-Western, non-English-speaking countries.
- Facebook faced criticism for its ineffective moderation in Myanmar and other regions.
- Companies often lack moderators for local languages and cultures, exacerbating issues.
AI's Role and Limitations
- Two main automated approaches are used: fingerprint matching against databases of known violating content, and machine-learning classifiers (a minimal sketch of fingerprint matching follows this list).
- AI requires constant retraining due to changes in language, context, and policy.
- AI can't fully evaluate misinformation, risking reinforcing consensus rather than advancing knowledge.
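A minimal sketch of the fingerprint-matching approach mentioned above: known violating files are hashed into a blocklist and new uploads are checked against it. An exact cryptographic hash is used here for simplicity; production systems use perceptual hashes that tolerate small edits, which is exactly where the approach starts to strain.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashing instead."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of content already judged to violate policy
banned = {fingerprint(b"known violating image bytes")}

def is_banned(upload: bytes) -> bool:
    return fingerprint(upload) in banned

print(is_banned(b"known violating image bytes"))   # True: exact re-upload is caught
print(is_banned(b"known violating image bytes!"))  # False: a one-byte edit slips through
```

Fingerprinting only catches re-uploads of already-known material; novel or lightly edited content falls to machine-learning classifiers and, ultimately, to human reviewers, which also previews the evasion tactics discussed next.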
Evading Moderation
- Adversaries and regular users employ tactics to evade AI detection, like using coded language.
- Algospeak and other workarounds highlight attempts to bypass moderation.
Regulatory and Policy Challenges
- Regulation could increase costs and result in conservative moderation practices.
- Platforms must balance legal, business, and ethical factors in policy enforcement.
7. Why Do Myths about AI Persist?
The chapter explores how AI myths persist due to corporate hype, media misrepresentation, and inadequate scientific scrutiny. Companies like Epic have exaggerated AI capabilities, leading to their adoption without proper validation. Incentives for companies and researchers to inflate claims result in further misinformation. Media and public perception often don't critically evaluate AI's potential and limitations. Transparency issues and cognitive biases contribute to the continuation of AI myths.
AI Hype Through Corporate Claims
- Epic claimed its AI model could flag sepsis up to 6 hours before clinicians would detect it.
- The claims went unverified, as Epic released no peer-reviewed studies.
- An external study at the University of Michigan found the model achieved an area under the curve (AUC) of only 0.63, far below Epic's claims (a short illustration of what that number means follows this list).
- Epic's high adoption rates were partly due to financial incentives given to hospitals.
- Epic ceased selling its generic model and asked hospitals to train it on their own data, reducing its plug-and-play advantage.
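A short illustration of what an AUC around 0.63 means in practice, using synthetic scores (an assumption for illustration, not Epic's data; scikit-learn is assumed available): the model ranks a randomly chosen sepsis patient above a randomly chosen non-sepsis patient only about 63% of the time, where a coin flip would manage 50%.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20_000
sepsis = rng.random(n) < 0.07                      # a small minority develop sepsis

# Risk scores with only weak separation between the two groups
scores = rng.normal(loc=np.where(sepsis, 0.47, 0.0), scale=1.0)

print("AUC:", round(roc_auc_score(sepsis, scores), 2))        # ~0.63

# The same number read as a pairwise ranking probability:
pos, neg = scores[sepsis], scores[~sepsis]
pairs = rng.choice(pos, 100_000) > rng.choice(neg, 100_000)
print("P(sepsis case scored above non-case):", round(pairs.mean(), 2))
```

Framed this way, it is clear how far such a model is from the reliable early warning the marketing implied.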
Investor and Developer Incentives
- Companies have commercial incentives to exaggerate AI capabilities to attract customers and investors.
- Startups manipulate metrics to show high accuracy, exploiting investor ignorance.
- Both researchers and companies game the metrics to present better performance in AI applications.
Impact of Media and Public Perception
- Press releases from universities contribute to AI hype by focusing on disseminating research widely.
- Journalists often lack the time or expertise to scrutinize AI claims deeply, amplifying company narratives.
- Media represent AI with metaphors and visuals that exaggerate its capabilities.
Transparency and Scrutiny Challenges
- AI companies often claim proprietary trade secrets to avoid transparency in their models.
- Epic's non-disclosure of its sepsis model results led to a lack of external validation.
- Big companies lobby against scrutiny of their AI products, affecting public understanding and investigation into real capabilities.
8. Where Do We Go from Here?
Chapter 8 discusses the overselling of AI and its adoption by under-resourced institutions, with potentially harmful consequences, likening these practices to inefficient quick fixes. It raises concerns about AI's impact on the workforce, emphasizing that many labor worries are rooted in capitalism rather than the technology itself. The chapter also examines flawed regulatory strategies and the declining prestige of Ivy League credentials, and advocates for smarter AI regulation to responsibly shape AI's societal role.
AI Overselling and Broken Institutions
- Flawed AI tools are being adopted by underfunded institutions, such as schools and public transit systems, for tasks like gun violence detection, but they suffer from low accuracy and frequent false positives.
- Some institutions adopt AI solutions like Social Sentinel to monitor students' social media for signs of self-harm, instead of enhancing their capacity for mental health support.
- These examples show AI is often wrongly seen as a quick fix for deep-rooted institutional issues, reflecting a misguided efficiency mindset.
The Impact of AI on the Future of Work
- Concerns about AI in the workplace are more about capitalism than about the technology itself: workers fear losing agency and jobs as employers adopt AI.
- Highly prestigious Ivy League schools are losing their status as preferential hiring institutions, recognized now for their role in perpetuating socioeconomic inequality.
Regulation and Principles
- Regulation concerns are often not about specific technical details but broader societal principles.