
"AI Snake Oil" exposes the AI hype machine, revealing what AI can and can't do. Endorsed by tech luminaries like Kate Crawford, this revelatory guide has sparked fierce debates in Silicon Valley. Can you spot the AI snake oil in your own life?
Arvind Narayanan, author of AI Snake Oil, is a Princeton University computer science professor and a leading voice in AI ethics, data privacy, and algorithmic accountability. As director of Princeton’s Center for Information Technology Policy, he leads research that exposes the societal risks of emerging technologies, including groundbreaking work on machine learning biases and the limits of data anonymization.
Narayanan is the co-author of the widely adopted textbooks Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning, bridging academic rigor with real-world applications. His "AI Snake Oil" newsletter, followed by 50,000 researchers and policymakers, critiques AI hype while advocating for responsible innovation.
Recognized in TIME’s inaugural list of 100 Most Influential People in AI, Narayanan’s work has shaped policy debates and corporate practices through projects like the Princeton Web Transparency initiative. His insights on algorithmic fairness are cited in over 100 media features, including The Atlantic and WIRED.
AI Snake Oil exposes exaggerated claims about artificial intelligence, distinguishing between functional applications and harmful hype. The book critiques AI's misuse in education, hiring, criminal justice, and healthcare while explaining technical limitations and societal risks. Authors Arvind Narayanan and Sayash Kapoor emphasize that AI often amplifies biases, enables corporate exploitation, and cannot solve structural problems like social media toxicity.
This book is essential for policymakers, tech professionals, and anyone impacted by AI decisions (e.g., job applicants, patients). It helps readers identify misleading AI marketing, understand algorithmic biases, and advocate for accountability. Students and educators will benefit from its accessible breakdown of AI’s technical and ethical boundaries.
Yes—it combines rigorous research with real-world examples to demystify AI’s capabilities. The book received praise from The New Yorker and Kirkus for its clear-eyed analysis of AI’s pitfalls, making it a critical resource for navigating AI-driven systems responsibly.
The term refers to AI products that fail to deliver promised results, often due to technical flaws or intentional deception. Examples include hiring algorithms that reinforce discrimination, predictive policing tools with racial biases, and educational software that replaces human oversight with unreliable automation.
The authors argue that risk-assessment tools and facial recognition systems disproportionately harm marginalized communities. These technologies rely on biased historical data, worsen over-policing, and lack transparency, often serving as a veneer of objectivity for flawed human decisions.
Narayanan and Kapoor advocate for regulatory oversight, public literacy campaigns, and ethical auditing of AI systems. They stress that AI should augment—not replace—human judgment, particularly in high-stakes domains like healthcare and finance.
The book dismisses apocalyptic scenarios as distractions from urgent, real-world harms like biased algorithms and labor exploitation. It argues that corporate control of AI—not rogue superintelligence—poses the greatest threat to democracy and equity.
Readers learn to question the evidence behind AI claims: whether a system has been independently evaluated, what data it was trained on, and whether its accuracy holds up outside the lab. These criteria help distinguish snake oil from legitimate tools.
Unlike speculative works, it focuses on present-day harms with actionable critiques. While books like Weapons of Math Destruction highlight similar issues, AI Snake Oil uniquely dissects technical limitations (e.g., why chatbots can’t truly "understand") and corporate hype cycles.
The book's central insights emphasize AI’s reliance on human data and its inability to transcend the patterns it was trained on.
Yes—it acknowledges ChatGPT’s utility for tasks like drafting emails but warns against overtrusting its outputs. The book explains how generative AI often produces plausible-sounding inaccuracies and erodes content creators’ rights through data scraping.
Some reviewers argue it underestimates AI’s future potential, though most praise its evidence-based approach. Critics note the book focuses more on debunking hype than proposing systemic alternatives, but its core warnings about corporate control remain widely endorsed.
Feel the book through the author's voice
Turn knowledge into engaging, example-rich insights
Capture key ideas in a flash for fast learning
Enjoy the book in a fun and engaging way
AI snake oil: technologies that simply don't work as advertised.
Predictive AI reflects correlation, not causation.
People naturally try to game algorithms they don't understand.
Success often depends more on luck than merit.
AI systems generate content without any genuine understanding of truth or falsity.
Break down key ideas from AI Snake Oil into bite-sized takeaways to understand where AI works, where it fails, and how to tell the difference.
Distill AI Snake Oil into rapid-fire memory cues that highlight the limits of predictive AI, the mechanics of tech hype, and the risks of algorithmic bias.

Experience AI Snake Oil through vivid storytelling that turns its lessons on AI hype into moments you'll remember and apply.
Ask anything, pick the voice, and co-create insights that truly resonate with you.

Built in San Francisco by Columbia University alumni
"Instead of endless scrolling, I just hit play on BeFreed. It saves me so much time."
"I never knew where to start with nonfiction—BeFreed’s book lists turned into podcasts gave me a clear path."
"Perfect balance between learning and entertainment. Finished ‘Thinking, Fast and Slow’ on my commute this week."
"Crazy how much I learned while walking the dog. BeFreed = small habits → big gains."
"Reading used to feel like a chore. Now it’s just part of my lifestyle."
"Feels effortless compared to reading. I’ve finished 6 books this month already."
"BeFreed turned my guilty doomscrolling into something that feels productive and inspiring."
"BeFreed turned my commute into learning time. 20-min podcasts are perfect for finishing books I never had time for."
"BeFreed replaced my podcast queue. Imagine Spotify for books — that’s it. 🙌"
"It is great for me to learn something from the book without reading it."
"The themed book list podcasts help me connect ideas across authors—like a guided audio journey."
"Makes me feel smarter every time before going to work"

Get the AI Snake Oil summary as a free PDF or EPUB. Print it or read offline anytime.
In Silicon Valley boardrooms and tech policy circles, a sobering counterpoint to AI hype has emerged. Princeton professor Arvind Narayanan's critical examination of artificial intelligence has become essential reading, with industry leaders calling it "the most important book on AI this decade." What makes this perspective particularly compelling is that the critique comes not from technophobia but from deep expertise: Narayanan is a computer scientist who builds and studies these systems.

Imagine using the word "vehicle" to describe everything from bicycles to spacecraft without distinction. That is our current predicament with "artificial intelligence," a term so broad it confuses meaningful discourse. Narayanan separates AI into two distinct categories: generative AI (such as chatbots) and predictive AI (systems that attempt to forecast human outcomes). While generative AI has made remarkable strides despite its immaturity, the real concern lies with predictive AI: technologies claiming to forecast human behavior that simply don't work as advertised. This distinction matters because predictive AI's failures have real consequences when deployed in hiring, criminal justice, or healthcare.

Why does this matter? Because the AI industry is selling solutions to problems that technology fundamentally cannot solve, and we are buying the hype without demanding evidence.