
The AI Governance Briefing


By: Dr. Tuboise Floyd

About this listen

The AI Governance Briefing is an independent AI governance and strategy podcast for operators navigating institutions disrupted by artificial intelligence. It is hosted by Dr. Tuboise Floyd, founder, researcher, and principal analyst at Human Signal.

The market has split in two. The consumption economy trades in noise, checklists, and compliance theater. The investment economy trades in signal infrastructure, physics, and sovereignty. The AI Governance Briefing is the intelligence feed for the investment economy. We do not trade in content. We trade in leverage.

Each episode applies the TAIMScore™ framework, GASP™, and the L.E.A.C. Protocol™ to reverse-engineer real institutional AI failures — and to build governance infrastructure before autonomous systems break the institution. Produced with Creative Director Jeremy Jarvis, the show covers asymmetric strategy, critical infrastructure, and the physics of risk for government contracting and builder sectors.

New episodes, visual briefs, and honest playbooks at humansignal.io/podcast

© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

The AI Governance Briefing is an independent media and research platform. All episode content — including analysis, case studies, and framework application — is provided for educational and informational purposes only. Nothing in any episode constitutes legal, regulatory, compliance, financial, or professional advice, and no advisory or consulting relationship is created by listening to or engaging with this content. Guest opinions are those of the guest alone and do not represent the positions of Human Signal or Dr. Tuboise Floyd. Case studies and institutional failure analyses are based on publicly available information and are presented as pedagogical tools, not legal findings or regulatory determinations.
This podcast uses the following third-party services for analysis:
OP3 - https://op3.dev/privacy

Categories: Economics, Management & Leadership, Political Science, Politics & Government
Episodes
  • AI Accountability Is Broken. Here's Why
    Apr 11 2026
    EPISODE SUMMARY

    In this episode of Human Signal, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.

    Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology Trend
    The shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. Every edge deployment, every federated model, every multi-agent workflow is an accountability question first and a technology question second.

    Key Takeaway 2: The Architecture of Blame Is Predictable — and Avoidable
    The pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. When no one owns the decision at the node, blame distributes as efficiently as the AI does.

    Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"
    A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.

    DR. FLOYD'S 3 DIAGNOSTIC QUESTIONS
    1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.
    2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.
    3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.

    DR. FLOYD'S 3 REQUIREMENTS FOR FUNCTIONAL GOVERNANCE
    ∙ Visibility at every execution point. If you cannot see the node, you cannot govern the node.
    ∙ Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system.
    ∙ Independence. The governance structure must survive vendor changes and contract terminations.

    CLOSING REFLECTION
    The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.

    Subscribe to Human Signal for weekly AI governance briefings from Dr. Tuboise Floyd.

    CHAPTERS / TIMESTAMPS
    0:00 - The Illusion of Governance
    0:32 - Distributed AI Outruns Policy
    1:10 - The Architecture of Blame
    1:52 - The Trust Gap Framework
    2:18 - Permitted ≠ Admissible
    2:45 - Redesigning Accountability Architecture
    3:28 - 3 Diagnostic Questions
    4:10 - What Functional Governance Actually Requires

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT
    Full transcript available upon request at support@humansignal.io

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    TAGS
    AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI...
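    The three diagnostic questions amount to a checklist that can be run mechanically against an inventory of AI execution points. As an illustration only — the episode describes no code, and the node names and fields below are hypothetical — a minimal sketch in Python:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One AI execution point: an edge device, vendor model, or agent in a pipeline."""
    name: str
    decision_owner: str = ""                # named human or role accountable at this node
    escalation_path: str = ""               # where alerts route when this node misbehaves
    vendor_independent_audit: bool = False  # does the audit trail survive a vendor swap?

def governance_gaps(nodes):
    """Apply the three diagnostic questions to every node; return the failures found."""
    gaps = []
    for n in nodes:
        if not n.decision_owner:
            gaps.append(f"{n.name}: no one owns the decision at this node")
        if not n.escalation_path:
            gaps.append(f"{n.name}: no escalation path")
        if not n.vendor_independent_audit:
            gaps.append(f"{n.name}: accountability depends on the vendor")
    return gaps

# Hypothetical inventory: one partially governed fleet, one ungoverned vendor API.
nodes = [
    Node("edge-camera-fleet", decision_owner="Ops Lead", escalation_path="risk-desk"),
    Node("vendor-llm-api"),  # authorized on paper, unobserved in execution
]
for gap in governance_gaps(nodes):
    print(gap)
```

    The point of the sketch is that each question maps to a field that is either filled or empty per node — if your governance program cannot populate this table, the gap is structural, not procedural.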
    5 mins
  • AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”
    Mar 29 2026
    EPISODE DESCRIPTION

    At a laid-back campus event, students put their AI governance questions to Taiye Lambo, founder of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd of Human Signal, an AI governance researcher and podcast host. The speakers emphasize that AI literacy is a civic and professional survival skill: employers expect workers to critically evaluate AI outputs, treat AI literacy as risk awareness, and focus on asking the right questions rather than becoming data scientists. Discussion covers deepfakes and short-form media, overreliance on AI (including a lawyer citing fabricated ChatGPT case law), "never blindly trust, always verify," and the need for continuous auditing, accountability, and an "honest human in the loop," especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.

    ⏱️ CHAPTERS
    00:00 Welcome and Setup
    00:52 Meet the Experts
    01:57 Taiye on Governance Focus
    02:53 Dr. Floyd Background and Podcast
    04:39 Open Forum Begins
    05:02 AI Literacy for Careers
    07:23 Threat or Opportunity Poll
    10:01 AI Literacy Beyond STEM
    10:49 Spotting Deepfakes in Shorts
    15:35 Using AI Without Replacing Learning
    16:14 Lawyer Case and Overtrusting AI
    18:08 Never Blindly Trust — Verify
    19:06 Wikipedia Analogy and Real Risks
    20:31 Business Ethics Reality Check
    21:06 Continuous Audits in Clinics
    21:28 Human in the Loop Matters
    22:04 Environmental AI Data Gaps
    23:13 Public Trust and Accountability
    23:33 Honest Human Oversight
    25:28 Tokens and Hallucinations
    26:51 Bias in Training Data
    27:56 Interviewing in the AI Era
    30:28 AI Disruption and Generational Shift
    33:21 High-Stakes AI Blind Spots
    36:02 Rapid Fire Career Advice
    41:03 Closing and Next Steps

    GUEST
    Taiye Lambo, Founder & Chief Artificial Intelligence Officer, Holistic Information Security Practitioner Institute (HISPI)
    🔗 https://www.hispi.org
    🔗 https://projectcerebellum.com
    LinkedIn: linkedin.com/in/taiyelambo

    TAIMScore™ Assessor Workshop
    🔗 https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT
    Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.
    Support Human Signal — help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT
    Full transcript available upon request at support@humansignal.io

    TAGS
    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership
    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    TAKEAWAYS
    ∙ AI governance is paramount for ensuring ethical decision-making and risk mitigation.
    ∙ Employers now expect candidates to critically evaluate AI outputs rather than merely use them without scrutiny.
    ∙ AI literacy transcends technical skill, emerging as a vital competency for professional survival in today's market.
    ∙ Human oversight in AI systems is essential to prevent unintended consequences and ensure accountability.
    ∙ Understanding the implications of AI training data is crucial, especially in high-stakes environments like healthcare.
    ∙ Proactively seeking knowledge through books and mentorship is vital for navigating the evolving landscape of AI.

    This podcast uses the following third-party services for analysis:
    OP3 - https://op3.dev/privacy
    43 mins
  • When Your Vendor Becomes Your Vulnerability
    Mar 26 2026

    EPISODE DESCRIPTION

    In this episode, Dr. Tuboise Floyd breaks down the Korean Air / KC&D supply chain breach — a forensic autopsy of what happens when data governance doesn’t travel with the data.

    In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn’t come through Korean Air’s systems. It came through KC&D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site.

    Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk.

    This is a Failure File. Not a warning. A forensic record.

    Key Topics:

    ∙ Supply chain governance and third-party vendor risk

    ∙ What happens when a divestiture doesn’t include data governance

    ∙ The Oracle EBS zero-day and its 100+ organizational victims

    ∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures

    ∙ The one question every institution needs to ask today

    GUESTS

    No guests. Solo episode.

    TAIMScore™ Assessor Workshop

    https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT

    Subscribe now to lock in the feed. This isn’t just content — it’s a continuing briefing for the Builder Class.

    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis

    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    TRANSCRIPT

    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS

    AI Governance, Supply Chain Risk, Third-Party Vendor Risk, Data Breach, Korean Air, KC&D, Cl0p Ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM Framework, Failure File, Institutional Risk, Dr. Tuboise Floyd, Human Signal

    HASHTAGS

    #AIGovernance #SupplyChainRisk #DataBreach #TAIMScore #FailureFile #ThirdPartyRisk #CyberSecurity #InstitutionalRisk #HumanSignal #AIGovernanceBriefing

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    6 mins