Artificial Intelligence Snake Oil: What AI Can and Can’t Do – Featuring Professor Arvind Narayanan | Episode #0010
- 2025-05-16
- Posted by: Brad Groux
- Category: Podcast

In this episode, Brad Groux, Carrie Hundley, and Robert Groux break down Princeton Professor Arvind Narayanan’s influential framework for separating genuine AI innovation from empty hype. From predictive models in hiring and criminal justice to the misuse of public large language models (LLMs) in so-called “proprietary” SaaS tools, the hosts explore where AI is truly delivering value and where it’s selling snake oil.
With real-world case studies, red flags to watch for, and a discussion of new AI regulations and ethics, this episode offers a pragmatic lens to help leaders, builders, and buyers make smarter decisions in the age of generative and predictive AI.
What we cover:
- Predictive AI’s failure in hiring & justice
- The “public model” myth in SaaS tools
- How to spot overpromised solutions
- Why your AI pilot needs purpose, not polish
If you’re a leader, builder, or buyer navigating AI, this episode will sharpen your radar.
New episodes are released every Friday across all major podcast platforms (direct links below), with video available on Spotify and YouTube – follow along to empower your AI journey!
Start Small, Think Big: Episode 0010 – Show Notes
Summary
In this episode of the Start Small Think Big podcast, the hosts discuss the complexities and nuances of artificial intelligence, particularly focusing on the concept of ‘AI snake oil’—the misleading promises made by some AI solutions. They explore current events in the tech industry, the ethical implications of AI in hiring practices, and the importance of human oversight in AI applications. The conversation also touches on the cultural shift regarding AI, including its recognition by influential figures like the Pope. The hosts emphasize the need for discernment in AI deployment, the human element in AI training, and the importance of asking the right questions when considering AI solutions.
Keywords
AI, artificial intelligence, ethics, hiring, employment, technology, generative AI, human oversight, predictive analytics, business decisions
Takeaways
- AI is a nuanced topic that requires careful consideration.
- Current events in tech highlight the rapid changes in the industry.
- Ethical implications of AI in hiring practices are significant.
- Human oversight is crucial in AI applications to avoid errors.
- Generative AI can be a powerful tool but comes with risks.
- Cultural shifts regarding AI are being recognized at high levels.
- Understanding AI’s diverse applications is essential for businesses.
- The importance of discernment in AI deployment cannot be overstated.
- Asking the right questions is key to effective AI implementation.
- Failure can lead to valuable learning experiences in AI development.
Titles
- Unpacking AI Snake Oil: What You Need to Know
- The Ethics of AI: Navigating Hiring Practices
Sound Bites
- “AI is such a nuanced topic.”
- “Trust but verify.”
- “Don’t be afraid to fail.”
Resources
- AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (MIT YouTube)
- AI as Normal Technology (Columbia University)
- Objective or Biased: On the questionable use of Artificial Intelligence for job applications (BR24)
- Machine Bias: There’s software used across the country to predict future criminals (ProPublica)
- 1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds (404 Media)
- High-level summary of the AI Act (European Union)
- OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (Time)
- Training AI takes heavy toll on Kenyans working for $2 an hour (60 Minutes)
- When sensing defeat in a match against a skilled chess bot, AI models don’t always concede; some instead cheat by hacking their opponent so the bot automatically forfeits the game (Palisade Research)
- Pope Leo XIV names AI one of the reasons for his papal name (The Verge)
- AI use damages professional reputation, study suggests (Ars Technica)
- AI firms warned to calculate threat of superintelligence or risk it escaping human control (The Guardian)
- AI Company Asks Job Applicants Not to Use AI in Job Applications (404 Media)
Chapters
00:00 Introduction to AI Snake Oil
01:26 The Impact of AI on Employment
04:32 Misrepresentation of AI Capabilities
08:06 The Importance of Human Oversight
11:42 Navigating Job Applications in the AI Era
15:13 Cultural Shifts in AI Acceptance
18:58 The Role of AI in Professional Development
22:29 The Pope and AI: A Unique Perspective
25:28 AI’s Societal Impact and Recognition
27:24 Understanding AI Snake Oil
28:14 Diverse Nature of AI
30:12 Navigating AI Implementation
33:17 The Role of LLMs in Business
36:57 Predictive AI in High-Stakes Areas
40:30 Generative AI: Promise and Perils
47:45 Human Element in AI Training
52:45 The Importance of Discernment in AI Deployment
53:29 Internal Communication and Stakeholder Engagement
55:28 Assessing AI Effectiveness and Application
59:20 Steering AI Towards Positive Outcomes
01:03:27 Caution Against Extreme Opinions in AI
01:06:14 Identifying Hype and Misleading Claims
01:11:15 The Reality of Predictive Analytics and Data Capture
01:16:33 Final Thoughts on Responsible AI Implementation
Follow and subscribe to SSTB on your favorite platform!
Have show ideas or want to be featured or take part? Let us know in the comments.