Ep. 6 | Why AI Projects Really Fail | Graeme McDermott | The AI Values Podcast
Summary
What if 95% of AI projects aren't really failing and the actual problem is that no one defined what success looked like in the first place?
Graeme McDermott, Chief Data Officer at Tempcover, with two decades leading data and analytics functions across the AA, Addison Lee and Tempcover, joins Edosa Odaro and Lindley Gooden on Episode 6 of The AI Values Podcast for one of our most pragmatic boardroom-level conversations to date: a conversation about AI accountability, the rise of "AI told me" decision-making, and what really happens to graduate jobs when companies cut their next generation of leaders to fund the LLM bill.
Graeme unpacks why every C-suite is quoting the same 85–95% AI failure stat, and why most of that "failure" is actually a definition-of-success and data-foundations problem, not a technology problem. Bad questions have no context. Lazy AI use trains lazy thinking. And cutting the entry-level rung doesn't save money; it deletes your future bench.
WHAT WE COVER:
► Why 95% of AI projects "fail" and the leadership reframe (you didn't define success)
► "This is correct because ChatGPT said so" when AI becomes the decision
► Bad questions have no context: the prompting skill C-suites are skipping
► AI accountability in the boardroom: who is for it, who is resisting, who is responsible
► Cognitive offloading: the MIT research on why AI users disengage their brains
► Entry-level collapse: the 25% drop in UK graduate jobs and the bench-strength crisis
► Apprenticeships, electricians, and rebuilding the skills ladder for the AI era
► What every board should take away: the one thing the C-suite must own
⏰ EPISODE TIMESTAMPS:
00:00 — Cold open & welcome
06:40 — Excited or concerned? An insider's view of AI today (Graeme McDermott)
10:49 — "AI told me": when ChatGPT becomes the cited authority
13:54 — Trust, prompts, and the AI-lazy five-word problem
17:00 — Why 85–95% of AI projects "fail" and what success actually means
22:40 — The boardroom: who's for AI, who's resisting, who's accountable
27:45 — One thing every board should take away: learn a trade
30:15 — Debrief & sign-off
ABOUT THE AI VALUES PODCAST:
The AI Values Podcast is where leaders come to think clearly about the trade-offs behind AI adoption, not just the opportunities. Hosted by Edosa Odaro (author, 'The Values of AI') and Lindley Gooden (author, 'The Future of Truth'), with weekly conversations at the intersection of AI, trust, governance, and the future of work.
🎙 SUBSCRIBE to The AI Values Podcast for honest, rigorous conversations at the intersection of AI ethics, AI governance, and business leadership.
◼ Find out more: https://www.theaivalues.org
◼ Reach out: podcast@theaivalues.org
◼ Get the Weekly AI Values Dispatch → https://pages.theaivalues.org
◼ Edosa Odaro: https://www.linkedin.com/in/edosa/
◼ Lindley Gooden: https://www.linkedin.com/in/lindleygooden/
◼ Guest: Graeme McDermott: https://www.linkedin.com/in/chiefdataanalyticsofficerlondon/