Episodes

  • Spec-Driven Development and the AI Unified Process — with Simon Martinelli
    Apr 14 2026
    Simon Martinelli is a Java Champion, Vaadin Champion, and Oracle ACE Pro with over three decades of experience building enterprise software. In this episode, he introduces the AI Unified Process (AIUP) — a methodology he created that combines the rigor of the Rational Unified Process with modern AI-assisted development, and makes a compelling case for why specifications, not code, should be the source of truth. We explore the difference between system use cases and user stories, and why use cases — with their actors, preconditions, main flows, alternative flows, and business rules — give AI agents far better structure to generate working code. Simon walks through the four phases of AIUP: Inception, Elaboration, Construction, and Transition, showing how specs, code, and tests evolve together iteratively while staying in sync. On the architecture side, Simon advocates for Self-Contained Systems over microservices — vertical slices that include UI, backend, and database together, reducing cognitive load for both developers and AI agents. His tech stack of choice is Vaadin for full-stack Java UI, jOOQ for type-safe explicit SQL, and Spring Boot as the application framework — a combination he argues is uniquely well-suited for AI-driven development because it keeps everything in one language with no hidden behavior. We also dig into testing strategies with Karibu Testing for browserless Vaadin tests and Playwright for end-to-end coverage, how teams of two working on bounded contexts with trunk-based development are shipping faster than ever, and why the era of AI is bringing back the Renaissance developer — the generalist who understands the full stack from business requirements to production deployment.
    1 hr and 11 mins
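The use-case structure Simon describes (actors, preconditions, main flow, alternative flows, business rules) can be sketched as a plain data type from which a spec repository or an agent prompt could be generated. This is a hypothetical illustration of the idea, not part of AIUP itself; all names below are invented.

```java
import java.util.List;
import java.util.Map;

/** A minimal, hypothetical model of a system use case as spec data. */
public record UseCase(
        String name,
        List<String> actors,
        List<String> preconditions,
        List<String> mainFlow,                // numbered happy-path steps
        Map<String, List<String>> altFlows,   // step number -> alternative steps
        List<String> businessRules) {

    /** Render the spec as a prompt section an AI agent could consume. */
    public String toPrompt() {
        StringBuilder sb = new StringBuilder("Use case: " + name + "\n");
        sb.append("Actors: ").append(String.join(", ", actors)).append('\n');
        preconditions.forEach(p -> sb.append("Pre: ").append(p).append('\n'));
        for (int i = 0; i < mainFlow.size(); i++)
            sb.append(i + 1).append(". ").append(mainFlow.get(i)).append('\n');
        altFlows.forEach((step, alts) ->
                alts.forEach(a -> sb.append(step).append("a. ").append(a).append('\n')));
        businessRules.forEach(r -> sb.append("Rule: ").append(r).append('\n'));
        return sb.toString();
    }
}
```

Because every section is an explicit field, a generator (or an agent) cannot silently drop the alternative flows or business rules the way it can with a free-form user story.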
  • Neurosymbolic AI: Combining GenAI with Mathematical Proof — with Danilo Poccia
    Apr 8 2026
    What if you could combine the creative power of generative AI with the mathematical certainty of formal verification? In this episode, Danilo Poccia — Principal Developer Advocate at AWS — breaks down automated reasoning, a field of AI that has been quietly powering critical AWS services for years and is now becoming essential for production AI systems. We explore why generative AI alone is not enough for high-stakes applications, and how automated reasoning provides mathematical proof — not probabilistic guesses — that your AI agents are following the rules. Danilo traces the roots of automated reasoning back to the 'symbolist' branch of AI, explains how AWS has used it internally for years to verify S3 bucket policies, encryption algorithms, and network configurations, and shows how it now converges with neural networks in what researchers call neurosymbolic AI. On the practical side, we dig into Amazon Bedrock Guardrails with Automated Reasoning checks — the first and only generative AI safeguard that uses formal logic to verify response accuracy. Danilo walks through how developers can use policy verification for agentic systems and tool access control with Cedar, and how AgentCore Gateway fits into the picture for managing MCP-based tool interactions at scale. We also cover the open source landscape: Dafny for verification-aware programming, Lean as a theorem prover, Prolog for logic programming, and the growing ecosystem of MCP servers that bring these capabilities into everyday development workflows. Whether you are building AI agents for production or just curious about what comes after prompt engineering, this conversation will change how you think about AI reliability.
    1 hr and 8 mins
  • Agent-Native Serverless Development with Shridhar Pandey
    Apr 1 2026
    In this episode, we sit down with Shridhar Pandey, Principal Product Manager on AWS Serverless Compute, to explore how the serverless team is pioneering agent-native development. Shridhar walks us through a remarkable March 2026 where the team shipped three major capabilities in just three weeks — a Kiro Power for Durable Functions, a Kiro Power for SAM, and a serverless agent plugin now available in Claude Code and Cursor. We trace the journey from 18 months of traditional developer experience improvements — local testing, remote debugging, LocalStack integration — to the realization that AI agents are fundamentally changing how developers build, deploy, and operate serverless applications. The serverless MCP server, now approaching half a million downloads, laid the foundation, and the new agent plugin builds on it with four specialized skills covering Lambda functions, operational best practices, infrastructure as code with SAM and CDK, and durable functions. Shridhar shares his thinking on agent personas — developer agents, operator agents, and platform owner agents — and how the team is applying an 'AX' (agent experience) lens to every feature they ship. We also take a candid detour into how AI has transformed his own work as a product leader: research that took weeks now takes hours, document cycles that spanned days now wrap up in a single sitting, and a fleet of agents handles daily digests and data analysis for the team. Open source runs through everything — the MCP server, the plugin, the public Lambda roadmap on GitHub — and Shridhar invites the community to shape what comes next.
    47 mins
  • The Hard Lessons of Cloud Migration: inDrive's Path from Monolith to Microservices
    Mar 25 2026
    Join us for a fascinating conversation with Alexander 'Sasha' Lisachenko (Software Architect) and Artem Gab (Senior Engineering Manager) from inDrive, one of the global leaders in mobility operating in 48 countries and processing over 8 million rides per day. Sasha and Artem take us through their four-year transformation journey from a monolithic bare-metal setup in a single data center to a fully cloud-native microservices architecture on AWS. They share the hard-earned lessons from their migration, including critical challenges with Redis cluster architecture, the discovery of single-threaded CPU bottlenecks, and how they solved hot key problems using Uber's H3 hexagon-based geospatial indexing. We dive deep into their migration from Redis to Valkey on ElastiCache, achieving 15-20% cost optimization and improved memory efficiency, and their innovative approach to auto-scaling ElastiCache clusters across multiple dimensions. Along the way, they reveal how TLS termination on master nodes created unexpected bottlenecks, how connection storms can cascade when Redis slows down, and why engine CPU utilization is the one metric you should never ignore. This is a story of resilience, technical problem-solving, and the reality of large-scale cloud transformations — complete with rollbacks, late-night incidents, and the eventual triumph of a fully elastic, geo-distributed platform serving riders and drivers across the globe.
    1 hr and 14 mins
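The hot-key fix the inDrive team describes, splitting one popular geo key across many cells so no single shard absorbs all the traffic, can be sketched with a toy grid index. The real system uses Uber's H3 hexagons; the grid scheme, resolution, and key format below are purely illustrative.

```java
/**
 * Toy geospatial bucketing: shard a hot Redis/Valkey key (e.g. "drivers:almaty")
 * into many per-cell keys so no single shard takes all the reads and writes.
 * A simple lat/lng grid stands in here for H3's hexagon index.
 */
public final class GeoShardedKeys {

    /** Cell id from coordinates at a given resolution (cells per degree). */
    static String cellId(double lat, double lng, int cellsPerDegree) {
        long row = (long) Math.floor((lat + 90.0) * cellsPerDegree);
        long col = (long) Math.floor((lng + 180.0) * cellsPerDegree);
        return row + ":" + col;
    }

    /** Per-cell key; reads fan out over nearby cells instead of one hot key. */
    static String driverKey(double lat, double lng, int cellsPerDegree) {
        return "drivers:cell:" + cellId(lat, lng, cellsPerDegree);
    }

    public static void main(String[] args) {
        // Two nearby points land in different cells, spreading the load.
        System.out.println(driverKey(43.238, 76.889, 10));
        System.out.println(driverKey(43.301, 76.950, 10));
    }
}
```

The resolution trades fan-out against load spreading: coarser cells mean fewer keys to query for a "drivers near me" lookup, but each key stays hotter.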
  • Episode 200: Java & Spring AI Are Winning the Enterprise AI Race — with James Ward & Josh Long
    Mar 18 2026
    It's a milestone — episode 200! And to mark the occasion, we're doing something we've never done before: hosting two guests at the same time. James Ward (Principal Developer Advocate at AWS) and Josh Long (Spring Developer Advocate at Broadcom, Java Champion, and host of 'A Bootiful Podcast') join Romain for a wide-ranging conversation about why Java and Spring AI are becoming the go-to stack for enterprise AI development. We kick off with Spring AI's rapid evolution — from its 1.0 GA release to the just-released 2.0.0-M3 milestone — and why it's far more than an LLM wrapper. James and Josh break down how Spring AI provides clean abstractions across 20+ models and vector stores, with type-safe, compile-time validation that prevents the kind of string-typo failures that plague dynamically typed AI code in production. The numbers back it up: an Azul study found that 62% of surveyed companies are building AI solutions on Java and the JVM. James and Josh explain why — enterprise teams need security, observability, and scalability baked in, not bolted on. We dive into the Agent Skills open standard from Anthropic and James's SkillsJars project for packaging and distributing agent skills via Maven Central. We also cover Spring AI's official Java MCP SDK (now at 1.0) and how MCP and Agent Skills complement each other for building capable, composable agents. The performance story is striking: Java MCP SDK benchmarks show 0.835ms latency versus Python's 26.45ms, 1.5M+ requests per second versus 280K, and 28% CPU utilization versus 94% — with even better numbers using GraalVM native images. Josh and James also walk us through Embabel, the new JVM-based agentic framework from Spring creator Rod Johnson, featuring goal-oriented and utility-based planners with type-safe workflow definitions built on Spring AI foundations. We close with a look at running Spring AI agents on AWS Bedrock AgentCore — memory, browser support, code interpreter, and serverless containers for agentic workloads.
    52 mins
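The type-safety point James and Josh make, catching a model-name typo at compile time instead of in production, comes down to replacing stringly-typed configuration with types the compiler checks. A minimal illustration follows; the enum values and option record are hypothetical and are not Spring AI's actual API.

```java
/** Stringly-typed model ids fail at runtime; an enum fails at compile time. */
public final class TypedModelConfig {

    /** Hypothetical model catalog; a typo like CLAUDE_SONET simply won't compile. */
    enum ModelId {
        CLAUDE_SONNET("anthropic.claude-sonnet"),
        NOVA_PRO("amazon.nova-pro");

        final String wireName;
        ModelId(String wireName) { this.wireName = wireName; }
    }

    /** Options validated at construction, long before any network call. */
    record ChatOptions(ModelId model, double temperature) {
        ChatOptions {
            if (temperature < 0 || temperature > 2)
                throw new IllegalArgumentException("temperature out of range");
        }
    }

    public static void main(String[] args) {
        ChatOptions opts = new ChatOptions(ModelId.CLAUDE_SONNET, 0.7);
        System.out.println(opts.model().wireName);
    }
}
```

In a dynamically typed client, `"claude-sonet"` is a perfectly legal string that fails only when the request reaches the provider; here the same mistake never builds.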
  • AWS Hero Linda Mohamed: Juggling Cloud, Community & Agentic AI
    Mar 11 2026
    Some guests make you want to close your laptop and go build something. Linda Mohamed is one of them. In this episode, Romain sits down with Linda — AWS Community Hero, User Group Leader, Chairwoman of the AWS Community DACH Association, and independent cloud consultant based in Vienna. Linda started as a Java developer in on-premises enterprise environments. Her first AWS touchpoint? Building an Alexa skill for a smart home product — discovering Lambda almost by accident, and never looking back. Today she's building multi-agent AI systems, running an AI-powered video pipeline with five media customers, and doing it all while being one of the most energetic and generous contributors in the AWS community. Discover Linda's journey from Java developer in telecom to cloud and AI consultant, conference-driven development as a forcing function to ship, and building Otto — a multi-agent Slack bot using CrewAI, LoRA fine-tuning, and Amazon Bedrock AgentCore Runtime. Learn about the AI-powered video analysis pipeline she built to solve her own problem and ended up selling to five media customers, vibe coding vs spec-driven development and when each makes sense, and why Clean Code principles still apply when designing agent architectures.
    1 hr and 5 mins
  • Evolving Lambda: from ephemeral compute to durable execution
    Mar 4 2026
    In this episode, Romain sits down with Michael Gasch, Product Manager at AWS for Lambda Durable Functions, to explore one of the most exciting launches in the Serverless space in recent years. Michael shares the full story: from the early days of Lambda and the evolution of the serverless developer experience, to the challenges developers face when building multi-step, stateful workflows — and how Durable Functions addresses them natively within Lambda. Discover the evolution of AWS Serverless and why last year was 'the year of Lambda', key launches including IDE integrations, Lambda Managed Instances, and Lambda Tenant Isolation. Learn what Lambda Durable Functions are and what they are not, the checkpoint-replay model and how it enables resilient, long-running executions, and wait patterns including simple wait, wait for callback, and wait for condition. Explore real-world use cases: distributed transactions, LLM inference orchestration, ECS task coordination, and human-in-the-loop workflows. Michael shares unexpected feedback from customers about architectural simplification, how coding agents like Kiro dramatically accelerate writing Durable Functions, and when to choose Durable Functions vs. Step Functions vs. SQS/SNS. Plus, what's coming next: more regions, and the Java SDK (now available!).
    1 hr and 7 mins
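The checkpoint-replay model Michael describes can be sketched in a few lines: each step's result is journaled, and after a crash or a long wait the handler re-runs from the top, with journaled steps returning their recorded results instead of executing again. This is an illustration of the general technique, not the Lambda Durable Functions SDK; all names are invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

/** Minimal checkpoint-replay: deterministic re-execution over a journal of step results. */
public final class DurableJournal {
    private final List<Object> journal = new ArrayList<>(); // persisted in a real system
    private int cursor = 0;                                  // reset to 0 on each replay

    /** Run a step, or return its recorded result if this is a replay. */
    @SuppressWarnings("unchecked")
    public <T> T step(Supplier<T> sideEffect) {
        if (cursor < journal.size()) {
            return (T) journal.get(cursor++);   // replay: skip the side effect
        }
        T result = sideEffect.get();            // first execution: do the work
        journal.add(result);
        cursor++;
        return result;
    }

    /** Begin a (re)execution from the top of the handler. */
    public void restart() { cursor = 0; }

    public static void main(String[] args) {
        DurableJournal j = new DurableJournal();
        String order = j.step(() -> "order-123");  // e.g. call a payment API
        j.restart();                               // simulate crash + re-invocation
        // The replayed step returns the journaled value, not the new supplier's.
        System.out.println(order.equals(j.step(() -> "order-999")));
    }
}
```

Because completed steps replay from the journal instead of re-running, long waits and retries never redo finished work, which is what makes resilient, long-running executions fit inside an otherwise ephemeral function.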
  • Mike Chambers: From OpenClaw to AI Functions — What's Next for Agentic Development
    Feb 25 2026
    Mike Chambers is back — calling in from the other side of the globe — and he brought a lot to unpack. We pick up threads from our first conversation and follow them into genuinely exciting (and occasionally mind-bending) territory. We start with OpenClaw, the open-source agentic framework that took the developer world by storm. Mike shares his take on why it happened now — not just what it is — and why the timing was almost inevitable given how developers had been quietly experimenting with local agents for the past year. Then we go deep on asynchronous tool calling — a project Mike has been working on since mid-2024 that finally works reliably, thanks to more capable models. The idea: let your agent kick off a long-running task, keep the conversation going naturally, and have the result arrive without interrupting the flow. Mike walks through how he built this on top of Strands Agents SDK and why he's planning to propose it as a contribution to the open-source project. We also explore Strands Labs and its freshly released AI Functions — a genuinely new way to think about embedding generative capability directly into application code. Is this Software 3.1? Mike makes the case, and Romain pushes back in the best way. The episode closes with a look ahead: agent trust, observability with OpenTelemetry, and a thought experiment about what software might look like in five years if the execution environment itself becomes a model.
    1 hr and 19 mins
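The asynchronous tool-calling pattern Mike describes, kicking off a long-running task, keeping the conversation going, and delivering the result when it lands, maps naturally onto futures with completion callbacks. A minimal sketch under that framing; the Strands Agents SDK has its own API, and every name here is illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/** Toy async tool call: the conversation loop is never blocked on the tool. */
public final class AsyncToolCall {

    /** Start a long-running tool and route its result back into the conversation. */
    static CompletableFuture<Void> callTool(String query, Consumer<String> onResult) {
        return CompletableFuture
                .supplyAsync(() -> slowSearch(query))   // runs off the chat thread
                .thenAccept(onResult);                  // injected when ready
    }

    static String slowSearch(String query) {
        try { TimeUnit.MILLISECONDS.sleep(50); }        // stand-in for real work
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "result for: " + query;
    }

    public static void main(String[] args) {
        StringBuilder transcript = new StringBuilder();
        CompletableFuture<Void> pending =
                callTool("flight prices", r -> transcript.append("[tool] ").append(r));
        transcript.append("[chat] Sure, checking that in the background...\n");
        pending.join();   // demo only; an agent loop would keep serving turns instead
        System.out.println(transcript);
    }
}
```

The interesting part is not the future itself but what Mike notes about model capability: the agent has to remember that a result may arrive mid-conversation and weave it back in naturally, which is what only recently became reliable.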