• AI Accountability Is Broken. Here's Why
    Apr 11 2026
    Episode Summary

    In this episode of Human Signal, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.

    Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology Trend

    The shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. Every edge deployment, every federated model, every multi-agent workflow is an accountability question first and a technology question second.

    Key Takeaway 2: The Architecture of Blame Is Predictable — and Avoidable

    The pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. When no one owns the decision at the node, blame distributes as efficiently as the AI does.

    Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"

    A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.

    Dr. Floyd's 3 Diagnostic Questions

    1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.

    2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.

    3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.

    Dr. Floyd's 3 Requirements for Functional Governance

    1. Visibility at every execution point. If you cannot see the node, you cannot govern the node.

    2. Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system (a minimal sketch follows this summary).

    3. Independence. The governance structure must survive vendor changes and contract terminations.

    Closing Reflection

    The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.

    Subscribe to Human Signal for weekly AI governance briefings from Dr. Tuboise Floyd.
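    The second requirement is concrete enough to sketch. Below is a minimal, hypothetical wrapper around a model call, showing how an append-only audit record and an automatic intervention trigger can live at the execution point itself rather than in a policy document. The names (governed_call, CONFIDENCE_FLOOR) and the confidence-based trigger are illustrative assumptions, not anything prescribed in the episode.

    ```python
    import hashlib
    import json
    import time
    import uuid

    # Illustrative threshold: an assumption for this sketch, not from the episode.
    CONFIDENCE_FLOOR = 0.80

    def governed_call(model, node_id, payload, audit_log):
        """Wrap one model invocation so the execution point itself produces
        an audit record, and low-confidence output trips an intervention
        instead of silently shipping."""
        record = {
            "event_id": str(uuid.uuid4()),
            "node": node_id,
            "ts": time.time(),
            # Digest, not raw input: the trail stays auditable without
            # becoming a second copy of sensitive data.
            "input_digest": hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest(),
        }
        # `model` is any callable returning {"output": ..., "confidence": float}.
        result = model(payload)
        record["confidence"] = result.get("confidence", 0.0)
        record["intervened"] = record["confidence"] < CONFIDENCE_FLOOR
        audit_log.append(record)  # append-only trail, owned outside the vendor stack
        if record["intervened"]:
            raise RuntimeError(f"Intervention trigger fired at node {node_id}")
        return result["output"]
    ```

    Because the wrapper, not the vendor, produces the record, the trail survives vendor changes: the independence requirement in miniature.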
    Chapters / Timestamps

    0:00 - The Illusion of Governance
    0:32 - Distributed AI Outruns Policy
    1:10 - The Architecture of Blame
    1:52 - The Trust Gap Framework
    2:18 - Permitted ≠ Admissible
    2:45 - Redesigning Accountability Architecture
    3:28 - 3 Diagnostic Questions
    4:10 - What Functional Governance Actually Requires

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT

    Full transcript available upon request at support@humansignal.io

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    TAGS

    AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI...
    5 mins
  • AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”
    Mar 29 2026
    EPISODE DESCRIPTION

    At a laid-back campus event, students are invited to put their AI governance questions to Taiye Lambo, founder of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd of Human Signal, an AI governance researcher and podcast host. The speakers emphasize that AI literacy is a civic and professional survival skill: employers now expect workers to critically evaluate AI outputs. They frame AI literacy as risk awareness and focus on asking the right questions rather than becoming data scientists. Discussion covers deepfakes and short-form media, overreliance on AI (including a lawyer citing fabricated ChatGPT case law), "never blindly trust, always verify," and the need for continuous auditing, accountability, and an "honest human in the loop," especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.

    ⏱️ Chapters

    00:00 Welcome and Setup
    00:52 Meet the Experts
    01:57 Taiye on Governance Focus
    02:53 Dr. Floyd Background and Podcast
    04:39 Open Forum Begins
    05:02 AI Literacy for Careers
    07:23 Threat or Opportunity Poll
    10:01 AI Literacy Beyond STEM
    10:49 Spotting Deepfakes in Shorts
    15:35 Using AI Without Replacing Learning
    16:14 Lawyer Case and Overtrusting AI
    18:08 Never Blindly Trust — Verify
    19:06 Wikipedia Analogy and Real Risks
    20:31 Business Ethics Reality Check
    21:06 Continuous Audits in Clinics
    21:28 Human in the Loop Matters
    22:04 Environmental AI Data Gaps
    23:13 Public Trust and Accountability
    23:33 Honest Human Oversight
    25:28 Tokens and Hallucinations
    26:51 Bias in Training Data
    27:56 Interviewing in the AI Era
    30:28 AI Disruption and Generational Shift
    33:21 High-Stakes AI Blind Spots
    36:02 Rapid Fire Career Advice
    41:03 Closing and Next Steps

    GUEST

    Taiye Lambo
    Founder & Chief Artificial Intelligence Officer
    Holistic Information Security Practitioner Institute (HISPI)
    🔗 https://www.hispi.org
    🔗 https://projectcerebellum.com
    LinkedIn: linkedin.com/in/taiyelambo

    TAIMScore™ Assessor Workshop
    🔗 https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT

    Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.

    Support Human Signal — help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT

    Full transcript available upon request at support@humansignal.io

    TAGS

    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS

    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    Takeaways:

    AI governance is paramount for ensuring ethical decision-making and risk mitigation.
    Employers now expect candidates to critically evaluate AI outputs rather than merely utilize them without scrutiny.
    AI literacy transcends technical skills, emerging as a vital competency for professional survival in today's market.
    The integration of human oversight in AI systems is essential to prevent unintended consequences and ensure accountability.
    Understanding the implications of AI training data is crucial, especially in high-stakes environments like healthcare.
    Being proactive in seeking knowledge through books and mentorship is vital for navigating the evolving landscape of AI.

    This podcast uses the following third-party services for analysis:
    OP3 - https://op3.dev/privacy
    43 mins
  • When Your Vendor Becomes Your Vulnerability
    Mar 26 2026

    EPISODE DESCRIPTION

    In this episode, Dr. Tuboise Floyd breaks down the Korean Air / KC&D supply chain breach — a forensic autopsy of what happens when data governance doesn’t travel with the data.

    In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn’t come through Korean Air’s systems. It came through KC&D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site.

    Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk.
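    That failure mode, data out of sight but not out of risk, can be made mechanical. Here is a minimal sketch of a data-custody register that flags records whose custodian has left your governance scope or whose audit evidence has gone stale. The field names and the one-year staleness window are illustrative assumptions for this sketch, not part of the TAIMScore™ framework.

    ```python
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class DataAsset:
        name: str
        custodian: str             # legal entity currently holding the data
        in_governance_scope: bool  # still covered by your audits and patch SLAs?
        last_audit: Optional[date] # None = never audited since custody changed

    def out_of_sight(assets, max_age_days=365):
        """Flag assets that moved out of sight without moving out of risk:
        custodian outside governance scope, or audit evidence gone stale."""
        today = date.today()
        return [
            a for a in assets
            if not a.in_governance_scope
            or a.last_audit is None
            or (today - a.last_audit).days > max_age_days
        ]

    # A divestiture that didn't include data governance shows up immediately:
    flagged = out_of_sight([
        DataAsset("employee_records", custodian="divested subsidiary",
                  in_governance_scope=False, last_audit=None),
    ])
    ```

    The point is the query, not the dataclass: if your asset inventory cannot answer "who holds this data now, and when did we last verify it?", the scenario above stays invisible until disclosure.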

    This is a Failure File. Not a warning. A forensic record.

    Key Topics:

    ∙ Supply chain governance and third-party vendor risk

    ∙ What happens when a divestiture doesn’t include data governance

    ∙ The Oracle EBS zero-day and its 100+ organizational victims

    ∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures

    ∙ The one question every institution needs to ask today

    GUESTS

    No guests. Solo episode.

    TAIMScore™ Assessor Workshop

    https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT

    Subscribe now to lock in the feed. This isn’t just content — it’s a continuing briefing for the Builder Class.

    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis

    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    TRANSCRIPT

    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS

    AI Governance, Supply Chain Risk, Third-Party Vendor Risk, Data Breach, Korean Air, KC&D, Cl0p Ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM Framework, Failure File, Institutional Risk, Dr. Tuboise Floyd, Human Signal

    HASHTAGS

    #AIGovernance #SupplyChainRisk #DataBreach #TAIMScore #FailureFile #ThirdPartyRisk #CyberSecurity #InstitutionalRisk #HumanSignal #AIGovernanceBriefing

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    6 mins
  • AI Governance: Balancing Innovation With Risk Management
    Mar 25 2026

    EPISODE DESCRIPTION

    In this episode, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA Ret.), CIO of Sherpawerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI Governance, and balancing innovation with risk management.

    We delve into the critical need for a holistic control layer in AI development. Without appropriate checks and balances, the rapid race to be first with AI could lead to dire consequences. The discussion touches on the role of CIOs, the moral compass of AI systems, and the potential risks of operating without proper oversight.

    This is a cautionary tale for executives and developers alike, emphasizing the importance of a balanced approach to AI innovation and governance.

    Key Topics:

    • Project Cerebellum and holistic AI control layers
    • The race to AI deployment vs. responsible governance
    • The evolving role of CIOs in AI oversight
    • Building moral compasses into AI systems
    • Risk management frameworks that actually work

    GUESTS

    Col. Kathy Swacina (USA Ret.)

    CIO, Sherpawerx

    🔗 https://sherpawerx.com/

    Taiye Lambo

    Founder & Chief Artificial Intelligence Officer

    Holistic Information Security Practitioner Institute (HISPI)

    🔗 https://www.hispi.org/

    🔗 https://projectcerebellum.com

    TAIMScore™ Assessor Workshop

    https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT

    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.

    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis

    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    • LinkedIn: linkedin.com/in/tuboise
    • Email: tuboise@humansignal.io
    • GoFundMe: https://gofund.me/117dd0d3d

    TRANSCRIPT

    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS

    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS

    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    51 mins
  • When Your GPS Happily Drives You Into The Sea
    Mar 6 2026

    EPISODE DESCRIPTION

    An Amazon delivery van reportedly got stranded on the Broomway — one of Britain's most dangerous tidal tracks in Essex — after blindly following GPS directions toward Foulness Island. No alert. No override. No human in the loop.

    This isn't a story about bad technology. It's a story about ungoverned automation making context-free decisions about human movement in the physical world. And it's exactly the kind of incident the HISPI Project Cerebellum AI Incidents database exists to document — so organizations can stop repeating the same failures.


    🔬 TAIMScore™ Failure Analysis

    Running this incident through a TAIMScore™ lens reveals failure across three critical dimensions:

    ❌ Safety — FAIL

    No guardrails for hazardous geographic areas. The routing system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. A system operating in the physical world with zero environmental context is an unacceptable safety liability.

    ❌ Trust — FAIL

    When workers discover that guidance systems can route them into danger, trust collapses — not just in that system, but in all automated guidance. The second-order effect is that workers either override systems entirely (defeating the purpose) or follow blindly (accepting the risk). Neither is acceptable.

    ❌ Responsibility — FAIL

    Who owns the risk when an algorithm routes a human into danger? The driver? The dispatcher? The software vendor? The organization deploying the tool? Without clear accountability architecture, no one owns it — until someone gets hurt.

    🎯 The Core Thesis

    The technology works exactly as designed. The governance around it does not exist.
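    For concreteness, here is what a minimal geographic guardrail might look like, assuming the deployer maintains polygons of known hazard zones (tidal tracks, flood-prone roads). The zone name and coordinates are illustrative placeholders, not survey data, and the point-in-polygon test is the standard ray-casting method.

    ```python
    # Hypothetical hazard-zone register: (lat, lon) polygon vertices.
    # Coordinates below are illustrative placeholders, not survey data.
    HAZARD_ZONES = {
        "tidal_track": [(51.57, 0.83), (51.57, 0.89), (51.62, 0.89), (51.62, 0.83)],
    }

    def point_in_polygon(lat, lon, polygon):
        """Standard ray-casting test: cast a ray east from the point and
        count edge crossings; an odd count means the point is inside."""
        inside = False
        n = len(polygon)
        for i in range(n):
            (lat1, lon1), (lat2, lon2) = polygon[i], polygon[(i + 1) % n]
            if (lat1 > lat) != (lat2 > lat):  # edge straddles the point's latitude
                lon_cross = (lat - lat1) * (lon2 - lon1) / (lat2 - lat1) + lon1
                if lon < lon_cross:
                    inside = not inside
        return inside

    def route_is_admissible(waypoints):
        """Reject any route whose waypoints enter a hazard zone; return the
        verdict plus the zones hit, so a human can see why it was blocked."""
        hits = [zone for (lat, lon) in waypoints
                for zone, poly in HAZARD_ZONES.items()
                if point_in_polygon(lat, lon, poly)]
        return (len(hits) == 0, hits)
    ```

    The specific check matters less than where it sits: routing output that is "permitted" by the vendor still has to pass an admissibility gate the deployer owns.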

    🔗 Resources & Links

    Referenced Tools & Projects

    - HISPI Project Cerebellum — AI Incidents Database tracking real-world AI failures across sectors

    - TAIMScore™ — Structured scoring framework for AI governance and risk assessment

    - TAIMScore™ Assessor Workshop — Live sessions where teams score real incidents and design controls

    Workshop Registration

    🔗 humansignal.io/taimscore_assessor_workshop

    The Broomway

    - One of the oldest roads in England, dating to the 1600s

    - Runs across tidal mudflats in the Thames Estuary

    - Floods rapidly and without visible warning

    - Has claimed numerous lives historically

    - Considered one of the most dangerous roads in the United Kingdom

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis

    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    GoFundMe: https://gofund.me/117dd0d3d

    TRANSCRIPT

    Full transcript available upon request at support@humansignal.io

    TAGS/KEYWORDS

    AI Governance, Risk Management, AI Policy, Tech Leadership, Institutional AI, Future of Work, AI Ethics, Governance Failure, Enterprise AI, Government AI, Spokane Transit, Amazon Hiring Bias, Workflow Design

    HASHTAGS

    #HumanSignalFailureFile #AIGovernance #TAIMScore #ProjectCerebellum #AIFailure #UngovernedAutomation #GPSFailure #LogisticsAI #AIRisk #ResponsibleAI #AIAccountability #HumanInTheLoop #AIIncidents #HISPI #AIEthics

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    2 mins
  • Making Digital Accessibility Work In The AI Era
    Mar 2 2026
    EPISODE DESCRIPTION

    Dr Tuboise Floyd hosts Dr Michele A Williams to explain why the digital accessibility failures present across 97 percent of the web create equity, resilience, and trust risks that AI can magnify at scale. Williams contrasts the medical and social models of disability, addresses ableism and language (person-first vs. identity-first), and argues that checklists cannot replace lived experience or disabled participation in UX research and leadership. They discuss how inaccessible code, tools, and AI trained on inaccessible data produce issues like missing labels, broken keyboard paths, and poor semantic structure, and warn against "disability dongles" that add tech instead of removing systemic barriers. Dr. Williams outlines a practical 90-day plan: establish a baseline with scans and process mapping (a minimal sketch of such a scan follows these notes).

    GUEST

    Dr Michele A Williams
    Making Accessibility Work, UX and Accessibility Consultant
    Author of Accessible UX Research, Smashing Media
    https://mawconsultingllc.com
    https://www.linkedin.com/in/micheleawilliams1

    Accessible UX Research
    Publisher: Smashing Magazine
    https://www.smashingmagazine.com/2025/06/accessible-ux-research-pre-release/

    YouTube
    https://youtu.be/pxXLNsbyJhc?si=Dt9mf2HK4AtyCx6_

    ⏱️ Chapters

    00:00 Accessibility Wake Up Call
    00:57 Meet Dr Michele Williams
    02:07 Equity Resilience Trust
    04:01 Disability Mindset Shift
    05:59 Why Lived Experience Matters
    07:14 Person First vs Identity First
    13:01 AI Promise and Harm
    15:23 Social Model In Practice
    19:58 Beyond Screen Readers
    25:02 Exclusion Inside Real Teams
    26:58 Semantic Code Chaos
    28:32 Standards Lag Tech
    29:12 Siri Zoom Panic
    31:23 Disability Dongles
    33:36 AI Hype Reality
    37:25 Beyond Checklists
    40:32 90 Day Baseline
    42:30 Change Defaults
    44:17 Normalize Inclusion
    46:47 Nothing About Us
    49:13 One Action This Week
    50:35 Closing Credits

    SUBSCRIBE & SUPPORT

    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.

    Support Human Signal: Help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    ABOUT THE HOST

    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES

    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis

    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT

    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io
    https://humansignal.io/

    TRANSCRIPT

    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS

    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS

    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    Companies mentioned in this episode: Smashing Magazine, Accessibe

    Takeaways:

    97% of the web remains rife with accessibility barriers that hinder disabled individuals.
    Accessibility is not merely a compliance issue but a vital consideration that impacts user experience and organizational culture.
    To create truly inclusive products, it is essential to incorporate the perspectives and experiences of disabled individuals throughout the design process.
    Artificial intelligence must not be viewed as the sole solution for accessibility; rather, it should be integrated thoughtfully with human expertise and oversight.

    This podcast uses the following third-party services for analysis:
    OP3 - https://op3.dev/privacy
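    As referenced in the episode description, the 90-day plan starts with a baseline scan. Here is a minimal sketch of one such scan, using only Python's standard library to flag form controls with no associated label, one of the issues named in the episode. It is illustrative only, not a substitute for full audit tooling or for disabled users' participation in research.

    ```python
    from html.parser import HTMLParser

    class LabelAudit(HTMLParser):
        """Collect labelable form controls and <label for="..."> targets
        so unlabeled fields can be flagged."""
        def __init__(self):
            super().__init__()
            self.controls = {}    # id -> tag name of labelable controls
            self.labeled = set()  # ids referenced by a <label for="...">
            self.anonymous = 0    # labelable controls with no id at all

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("input", "select", "textarea"):
                if tag == "input" and attrs.get("type") in ("hidden", "submit", "button"):
                    return  # not user-facing text entry
                if "aria-label" in attrs or "aria-labelledby" in attrs:
                    return  # labeled through ARIA instead
                if "id" in attrs:
                    self.controls[attrs["id"]] = tag
                else:
                    self.anonymous += 1
            elif tag == "label" and "for" in attrs:
                self.labeled.add(attrs["for"])

    def unlabeled_fields(html):
        """Return ids of controls with no matching label, plus a count of
        controls that cannot be matched to a label because they lack an id."""
        audit = LabelAudit()
        audit.feed(html)
        return sorted(set(audit.controls) - audit.labeled), audit.anonymous
    ```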
    52 mins
  • Digital Accessibility In An AI World
    Feb 23 2026

    Digital Accessibility In An AI World 2026

    As a podcast host exploring the intersection of humanity and technology, I keep asking: Are we really including everyone in our digital transformation?

    Dr Michele A Williams, Owner & Accessibility Consultant and author of the new book Accessible UX Research, challenges us to move beyond checklists and truly design with, not for, disabled users.

    Coming soon to the Human Signal podcast: Dr Michele A Williams PhD joins us to break down how to make digital accessibility work in an AI world.

    https://mawconsultingllc.com/

    Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.

    What Gen X leaders and professionals need to know:

    ✓ How to spot invisible exclusion in UX research and code

    ✓ Moving beyond compliance checklists to build truly inclusive systems

    ✓ Using AI for captions and alt text without creating new barriers

    ✓ 90 day accessibility practices your team can sustain

    Because real inclusion means ensuring everyone has access to the places and systems they need, whether digital or physical.

    Production notes:

    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    📧 Contact & Subscribe

    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    https://humansignal.io/

    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://humansignal.io/support

    Every contribution sustains the signal.

    Transcript

    Full transcript available upon request at hello@humansignal.io

    Tags

    #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #GenXLeaders #TechLeadership #Accessibility

    Legal

    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    2 mins
  • AI Activism For Insiders: This Is Not Ethics Work
    Feb 20 2026

    AI Activism For Insiders: This Is Not Ethics Work (Brief, 2026)


    🧠 About Human Signal


    Human Signal monitors governance patterns across frontier AI labs, tracking the gap between stated safety commitments and operational reality. Through the L.E.A.C. Protocol and tools like Noise Discipline and Workflow Thesis, we identify where governance erodes under capital pressure and where external oversight needs to be applied.


    Production notes:


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    📧 Contact & Subscribe


    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    https://humansignal.io


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://humansignal.io/support


    Every contribution sustains the signal.


    📜 Transcript


    Full transcript available upon request at hello@humansignal.io


    🏷️ Tags


    #AI #FutureOfWork #ResponsibleAI #AIGovernance #AIActivism



    This podcast uses the following third-party services for analysis:

    OP3 - https://op3.dev/privacy
    2 mins