Episodes

  • #334 Abhishek Singh: The $1.2 Billion Plan to Turn India Into an AI Superpower
    Apr 16 2026

    What if the country that trained the world's engineers finally decided to keep them?

    In this episode of Eye on AI, Craig Smith sits down with Abhishek Singh, the civil servant leading India's $1.2 billion national AI Mission, to explore how one of the world's largest and most diverse nations is mounting a serious challenge to US and Chinese dominance in artificial intelligence.

    Abhishek breaks down the honest story behind India's late start. World-class talent, but no research ecosystem to retain it. Digitization without AI-usable data. Compute so scarce that the entire country had fewer than 500 GPUs just two years ago. And a brain drain so severe that the engineers India trained are now running the biggest tech companies in the world, just not from India.

    Abhishek walks through exactly how the mission is tackling each of those gaps. A subsidized compute program that gives researchers and startups access to 38,000 GPUs at under a dollar per hour. AI Kosh, a national data platform pulling public and private sector datasets into a single AI-ready repository. Centers of Excellence that link IITs through domain-specific research in agriculture, healthcare, education and mobility. And a sovereign LLM program, with four models already in development and eight more on the way, built specifically for India's languages, voices and needs.

    We also get into the geopolitics. Where India stands as the US and China carve out competing AI spheres of influence. Why Abhishek is pushing for a UN-led governance framework rather than aligning with either bloc. And what it would actually take for a country of 1.4 billion people to not just catch up, but leapfrog.

    Subscribe for more conversations with the people shaping the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction and Abhishek's Background

    (02:43) What the India AI Mission Is and How It Started

    (04:53) The $1.2 Billion Budget and Total AI Investment in India

    (06:36) Data Center Build-Out and the Road to 7 Gigawatts

    (08:11) AI Kosh: India's National Data Platform

    (10:50) Subsidized GPUs and How Researchers Access Compute

    (12:41) Brain Drain, Reverse Migration and Retaining Top Talent

    (17:24) Centers of Excellence Across IITs and Key Sectors

    (19:21) Expanding Fellowships and Training the Next Generation

    (20:11) Why India Started Late and What Changed

    (21:48) Sovereign LLMs Built for Indian Languages and Needs

    (22:42) The Diversity Challenge and Culturally Relevant AI

    (23:16) Government Funding for Foundation Model Development

    (24:12) The AI Impact Summit and India's Role on the Global Stage

    (24:52) India, China, the US and the Battle for AI Governance

    (29:37) The UN Framework and India's Third Way

    (31:16) India-China Relations and New AI Partnerships

    (32:01) How the $1.2 Billion Budget Was Decided

    (33:31) Can India Actually Catch Up With the US and China

    35 mins
  • #333 Adi Kuruganti: Why Your AI Pilot Is Failing and What It Takes to Reach Production
    Apr 15 2026

    Most enterprises are excited about agentic AI. But very few are actually deploying it in production.

    In this episode of Eye on AI, Craig Smith sits down with Adi Kuruganti, Chief AI and Development Officer at Automation Anywhere, to break down why agentic AI is so hard to get right in the enterprise and what it actually takes to move from a promising pilot to a mission-critical deployment.

    Adi explains why the future of enterprise automation is not agentic AI alone, but the combination of deterministic and agentic systems working together, and why companies that treat AI as a technology problem instead of a business outcomes problem are setting themselves up to fail.

    They dig into how Automation Anywhere is orchestrating agents across legacy systems, healthcare platforms, and financial services workflows, why governance and compliance are the first questions every enterprise asks, and how their Process Reasoning Engine is continuously improving agent performance using metadata from over 400 million running processes.

    The conversation also covers the real timeline to a fully autonomous enterprise, why the POC to production gap is the biggest failure point in enterprise AI today, and what companies that wait too long risk losing to competitors who started the journey earlier.

    If you want to understand where enterprise AI actually stands today and what it takes to deploy it responsibly at scale, this episode gives you a clear and grounded perspective.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Why Enterprises Are Struggling With Agentic AI

    (02:39) What Automation Anywhere Does and the APA Category Explained

    (08:01) Deterministic vs Agentic AI: Why You Need Both

    (10:59) How Human in the Loop Works in Enterprise AI

    (17:16) The Mozart Orchestrator and Process Reasoning Engine

    (23:50) How AI Is Upgrading and Replacing Classic RPA

    (27:31) How Automation Anywhere Works With Enterprise Customers

    (31:53) The Biggest Challenges of Scaling Agentic AI

    (41:10) The OpenAI Partnership and What It Means

    (47:06) Training Staff and Building AI Literacy at Scale

    (51:39) Staying Close to Customers as the Technology Shifts

    (53:17) Is the Autonomous Enterprise Actually Coming

    58 mins
  • #332 Dan Faulkner: The Code Is Clean. The App Is Broken. Why AI Development Has an Integrity Problem
    Apr 14 2026

    What happens when AI writes code faster than anyone can test it?

    In this episode of Eye on AI, Craig Smith sits down with Dan Faulkner, CEO of SmartBear, to explore one of the most underappreciated risks of the AI coding boom. As tools like Claude Code and Codex push software development to unprecedented speed, the systems built to validate that software are being left behind. Dan makes a distinction that every engineering leader needs to hear: clean code passing unit tests is not the same as an application that actually works.

    Dan introduces the concept of application integrity: continuous, measurable assurance that your software does everything it was intended to do and nothing it was not. He explains why the gap between what AI builds and what teams actually validate is already creating hidden risk in production, and why that risk compounds the faster you ship.

    We also get into the new failure modes that agentic AI is introducing. Slop squatting, instruction inversion, cascading errors. These are not theoretical. They are happening now, at scale, in codebases that no human has fully read.

    Dan also walks through SmartBear's autonomy ladder framework and their newest product, BearQ: a team of AI agents that explores your application, builds a knowledge graph, authors tests, runs them, and updates everything as your app evolves. The key distinction: it is built to augment human teams, not replace them.

    Finally, Dan shares his honest take on the future of software engineering. The fallacy was always that coding was the hard part. The hard part is knowing what to build. That skill is not going anywhere.

    Subscribe for more conversations with the people shaping the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction and Dan Faulkner's Background

    (01:05) What SmartBear Does: Testing and API Lifecycle Management

    (03:27) AI Is Outpacing Application Testing

    (07:51) Slop Squatting, Instruction Inversion and New AI Failure Modes

    (17:31) Black Boxes, Technical Debt and the Expertise Crisis

    (22:00) How to Avoid Self-Validating AI Systems

    (24:11) The Autonomy Ladder and BearQ

    (31:30) Why Testing Must Be Continuous and Everywhere

    (36:31) Infrastructure Risk and Automation Bias

    (44:11) The Future of QA and New Specialist Roles

    (50:44) How Teams Use SmartBear Tools Today

    (58:57) The Future of Software Engineering and Human Roles

    55 mins
  • #331 Sergey Levine: The Robot Revolution Nobody Is Talking About
    Apr 12 2026

    What does it actually mean to build a foundation model for robots?

    In this episode of Eye on AI, Craig Smith sits down with Sergey Levine, co-founder of Physical Intelligence and professor at UC Berkeley, to explore a fundamentally different approach to building robots, one inspired not by programming a single perfect machine, but by training AI on the broadest and most diverse data possible so robots can learn, adapt, and operate in the unpredictable real world.

    Sergey explains why the secret to general-purpose robots isn't perfecting one single machine, but training on massive, diverse data from all kinds of robots and even humans. The more variety the model sees, the better it gets. Just like ChatGPT learned from all the text on the internet, robotic foundation models learn from every robot that has ever moved, grabbed, or interacted with the real world.

    We also get into the big humanoid robot debate. Are they the future, or is it mostly hype? Sergey gives an honest and technical take on why the form factor conversation is changing now that foundation models exist, and why that actually opens the door for more creativity, not less.

    Finally, Sergey shares what he's most excited about next, building a true data flywheel where robots get smarter the more they are deployed, creating a continuous learning cycle that could change everything.

    Subscribe for more conversations with the people building the future of AI and emerging technology.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction: What Are Foundation Models for Robots?

    (01:44) Meet Sergey Levine: Physical Intelligence and UC Berkeley

    (02:51) Breaking Down Foundation Models for Non-Technical People

    (06:46) Why Real World Data Beats Simulation

    (15:00) Building a Broad Robotics Foundation From Scratch

    (24:00) The Open World Problem in Robotics

    (40:00) Generalist vs Specialist Robots: Which Wins?

    (47:00) Humanoid Robots: Real Innovation or Just Hype?

    (55:10) The Future: Continuous Learning and the Data Flywheel

    (56:23) Guilty Pleasure: Sci-Fi and Thinking Beyond the Limits

    59 mins
  • #330 Sebastian Risi: Why AI Should Be Grown, Not Trained
    Apr 6 2026

    AI has been trained like software.

    But what if it should be grown like life?

    In this episode of Eye on AI, Craig Smith sits down with Sebastian Risi, professor and leading researcher in neuroevolution and artificial life, to explore a fundamentally different approach to building intelligence, one inspired by how nature evolves, grows, and adapts.

    Sebastian explains why traditional AI systems are limited by fixed architectures and one-time training, and how evolutionary algorithms can create systems that continuously learn, self-organize, and even grow their own neural structures over time.

    They dive into concepts like plastic neural networks that keep updating during their lifetime, AI systems that can recover from damage, and models that develop from a single "cell" into complex structures, similar to biological organisms.

    The conversation also explores how combining large language models with evolutionary search could unlock more creative and open-ended problem solving, from merging specialized models to building AI systems capable of generating and testing scientific ideas.

    If you want to understand where AI is headed beyond today's transformer models, and why the future may look more like living systems than software, this episode offers a clear and thought-provoking perspective.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Why copy nature's evolution for AI

    (01:20) What neuroevolution actually means

    (05:52) How evolutionary search replaces gradients

    (08:03) Plastic neural networks and continuous learning

    (11:53) Growing neural networks like living systems

    (18:08) Scaling challenges and limits of growth

    (23:16) Can evolving systems replace LLM training

    (27:28) Continual learning and model merging

    (30:27) Artificial life, self-repair, and resilience

    (35:10) AI scientists and evolution with LLMs

    1 hr and 1 min
  • #329 Izhar Medalsy: How AI Solves Quantum Computing's Biggest Problem
    Mar 31 2026

    Quantum computing has been "5 years away" for decades.

    So what's actually holding it back?

    In this episode of Eye on AI, Craig Smith sits down with Izhar Medalsy, Co-founder & CEO of Quantum Elements, to break down the real bottleneck in quantum computing today and why the future of the industry may depend more on classical systems and AI than quantum hardware itself.

    Izhar explains how digital twins of quantum systems are being used to simulate real hardware, generate massive amounts of training data, and solve one of the biggest challenges in the field: noise and error correction.

    They dive into how his team improved Shor's Algorithm from 80% to 99% accuracy on IBM hardware, without changing the hardware itself, and what that means for the future of quantum performance.

    The conversation also explores how AI is being used to optimize quantum systems, why classical computing will continue to play a central role in quantum development, and what milestones to watch as the industry moves closer to real-world applications.

    If you want to understand where quantum computing actually stands today and what will unlock its next phase, this episode gives you a clear, grounded perspective.

    Subscribe for more conversations with the people building the future of AI and emerging technology.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI



    (00:00) The 99% Accuracy Breakthrough (Quantum's Turning Point)
    (01:03) Why Quantum Hardware Alone Isn't Enough
    (03:50) Digital Twins Explained (The Missing Layer)
    (08:09) The Real Problem: Noise, Instability & Environment
    (15:43) From 80% to 99% on Shor's Algorithm
    (26:36) How AI Is Fixing Quantum's Biggest Bottleneck
    (33:53) Inside the Platform: From Circuit to Optimization
    (40:51) Logical Qubits & Scaling Quantum Systems
    (43:34) The Limits of Simulation vs Real Quantum Hardware
    (54:29) When Quantum Becomes Useful (Real Timeline)

    1 hr and 1 min
  • #328 Kevin Tian: Exploring Doppel's AI-Native Social Engineering Defense Platform
    Mar 27 2026

    AI is changing more than just productivity.

    It's changing what we can trust.

    In this episode, Kevin Tian, Co-founder and CEO of Doppel, breaks down how AI is enabling a new wave of social engineering attacks—from deepfake phone calls to impersonation across LinkedIn, YouTube, and search engines.

    The reality is this:
    Deepfakes are just one part of a much bigger problem.

    Attackers are now operating across multiple channels at once, using AI to manipulate people, not just systems. And as these attacks scale, the real risk isn't just fraud or data loss—it's the erosion of trust in everything we see online.

    Kevin explains how Doppel is building an AI-native defense platform to detect, map, and shut down these attacks in real time, and why the future of cybersecurity will be defined by AI vs AI.

    If you're thinking about AI, security, or the future of trust online—this conversation is essential.


    Stay Updated:
    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) AI Deepfakes & The Collapse of Trust
    (01:56) Why "Social Engineering" Is Bigger Than Phishing
    (05:20) Deepfakes, Misinformation & Multi-Channel Attacks
    (09:16) The Rise of Deepfake Phone Calls
    (12:43) How Attackers Manipulate AI & Search Results
    (14:39) The Origin Story Behind Doppel
    (18:55) How Doppel Detects & Stops Attacks in Real Time
    (22:55) Can Attackers Misuse AI Defense Tools?
    (24:26) How to Tell What's Real vs Fake Online
    (28:20) What Is Human Risk Management?
    (30:36) AI vs AI: The Future of Cyber Defense
    (34:04) What CEOs Must Do About AI Threats
    (37:18) Working with Platforms Like YouTube & LinkedIn
    (39:52) Can We Ever Fully Stop Deepfakes?
    (44:40) How Doppel Works for Enterprises

    48 mins
  • #327 Baris Gultekin: The Next Phase of AI - Agents That Understand Your Company's Data
    Mar 19 2026

    This episode is sponsored by Modulate.

    Meet Velma, a voice AI that detects tone, intent, and stress: http://preview.modulate.ai

    Baris Gultekin, Head of AI at Snowflake, breaks down how enterprise AI is actually being built, deployed, and scaled today. From running AI directly inside governed data environments to enabling natural language access across entire organizations, this conversation explores the shift from experimentation to real-world impact.

    You'll learn why Snowflake's core philosophy centers around bringing AI to the data, how data agents are transforming decision-making across teams, and what it takes to build trustworthy AI systems with governance, guardrails, and high-quality retrieval at the core.

    Baris also shares how leading companies are already saving thousands of hours through AI-driven automation, why culture and leadership determine AI success, and what the future looks like as agents move from pilots to full-scale production.

    If you want to understand where enterprise AI is actually headed and what separates hype from real execution, this episode breaks it down.

    (00:00) The Evolution of Snowflake AI

    (01:40) Baris Gultekin: Background & AI Mission

    (02:59) Why AI Must Run Next to Data

    (04:29) Inside Snowflake's AI Infrastructure

    (09:08) Model Choice vs Product Layer Strategy

    (12:16) Building Trust: Governance, Guardrails & Quality

    (16:01) How Enterprise Agents Are Built & Orchestrated

    (20:10) AI Adoption Across the Entire Organization

    (24:39) Reasoning vs Retrieval: What Matters More

    (27:43) Real Use Case: Faster Decision-Making with AI

    (31:44) AI as a Co-Pilot for Leaders

    (36:52) Preparing Data for AI at Scale

    (38:46) What the AI Data Cloud Really Means

    42 mins