What you'll learn

  • ISO 42001 is the start of an AI governance program, not the destination — Colorado already cites it as potential safe harbor against alleged violations.

  • The four classic risk treatments — mitigate, transfer, accept, avoid — are the tools every CISO needs to navigate the patchwork of state AI law.

  • The point-in-time security questionnaire is dead in the AI era; continuous monitoring of vendor terms, indemnification, and AI feature changes is the new bar. 

Description

Walter Haydock is the founder and CEO of StackAware, the AI governance, security, and compliance firm that prepares AI-powered companies in healthcare, financial services, and B2B SaaS for ISO 42001 certification. His résumé runs through PTC, US Capitol Hill staff work, the National Counterterrorism Center, and the Marine Corps as a recon and intelligence officer — and the reason every CISO running an AI governance program ends up being told "go talk to Walter" is that he's spent three years operationalizing what most leaders are still trying to define.

This conversation works through the actual landscape security and risk leaders need to navigate. California alone has at least four overlapping AI-relevant laws — the Transparency in Frontier AI Act, AB 2013 on training-data transparency, the Office of Administrative Law's automated decision systems regulations, and the new CCPA rules on automated decision-making technology. Federal preemption keeps failing. Operators have to apply the four classic risk treatments — mitigate, transfer, accept, avoid — without the cover of a single unified framework. Walter's frame is that ISO 42001 is the only standard that's beginning to get cited in legislation as a potential safe harbor, which makes it the right starting point for most programs even if it's not the end of the work.

The harder portion of the episode pushes into vendor risk, continuous safety testing, and shadow AI. Security questionnaires don't work — most go unread, the AI-feature change cadence outpaces annual reviews, and the right defense is continuous monitoring of vendor terms and conditions. The shadow AI playbook is three things: training that explains the why, an approval process simple enough that compliance is the easy path, and technical guardrails to block or redirect the unsafe choices. The closing Monday morning advice from Walter — go build an MCP server yourself if you want to govern MCP servers — is the consistent message from the show's recent guests. Security leaders who haven't gotten their hands dirty on the underlying primitives are going to be giving advice they don't fully understand. 

What we cover

  • "the patchwork of state AI law" — California's four overlapping regulations and why federal preemption keeps failing

  • "mitigate, transfer, accept, avoid" — the four risk treatments applied to actual AI governance scenarios

  • "ISO 42001 as the beginning, not the end" — and why Colorado now cites it as potential safe harbor

  • "the unified control framework" — Credo AI's paper, StackAware's approach, and what comes after the risk register

  • "there is no security enough in a vacuum" — healthcare AI as the highest-risk, highest-reward case study

  • "hold technology to an unfair standard" — Cruise, Waymo, and what the bar should actually be

  • "continuous monitoring of vendor terms" — the death of the annual security questionnaire in the AI era

  • "shadow AI in three moves" — training the why, simple approval, technical guardrails

Thank you to our Sponsors:

Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.

Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.

The conversation

The patchwork of state AI law and the four risk treatments

The US AI regulatory landscape is fragmenting state by state, and federal efforts to preempt have failed. California alone now has at least four overlapping AI-relevant laws: the Transparency in Frontier AI Act (TFAIA, the new whistleblower-protection regime), AB 2013 on generative AI training-data transparency, the Office of Administrative Law's regulations on automated decision systems (ADS), and the new CCPA rules on automated decision-making technology (ADMT) that are distinct from ADS. New York is rolling out AI rules for courts and government use. Local Law 144 in NYC regulates automated employment decision tools. The list is long and the cadence of change is fast.

Walter's working frame for navigating this is the four classic risk treatments — mitigate, transfer, accept, avoid — applied with discipline. He gave clean operating examples for each. Avoid: an HR vendor disabling its AI scoring tool inside NYC because of Local Law 144. Transfer: indemnification clauses from hyperscaler AI providers and traditional cyber insurance. Mitigate: AI governance programs, technical controls, training. Accept: pressing forward despite known residual risk, which is necessary because anyone who claims they accept zero risk should not have gotten out of bed. The wrong answer for a security leader is to default to "no." The right one is to put each risk into the appropriate bucket with the right controls behind it.
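
To make the bucketing concrete, here is a minimal sketch (ours, not Walter's) of a risk-register entry that forces an explicit treatment decision; the risks and rationales loosely echo the episode's examples but are otherwise illustrative.

```python
# A minimal, hypothetical risk-register entry with an explicit treatment
# decision. The risks and rationales loosely echo the episode's examples.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ACCEPT = "accept"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    risk: str              # what could go wrong
    treatment: Treatment   # which of the four buckets it lands in
    rationale: str         # why this treatment, and what sits behind it

register = [
    RiskEntry("AI resume scoring used inside NYC (Local Law 144)",
              Treatment.AVOID, "disable the scoring feature for NYC users"),
    RiskEntry("Copyright claims over model output",
              Treatment.TRANSFER, "hyperscaler indemnification plus cyber insurance"),
    RiskEntry("Provider trains on submitted data",
              Treatment.MITIGATE, "enterprise tier with a training opt-out"),
    RiskEntry("Residual hallucination risk in internal search",
              Treatment.ACCEPT, "low impact; human review before any action"),
]

for entry in register:
    print(f"[{entry.treatment.value:>8}] {entry.risk}: {entry.rationale}")
```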

ISO 42001 is the beginning, not the end

The single most useful framing in the episode is Walter's correction of a common misread. Most teams treat ISO 42001 as the AI governance answer. It isn't.

ISO 42001 is the beginning of the conversation when it comes to regulatory compliance, not the end of it.

— Walter Haydock

ISO 42001 gives an organization a way to ingest internal and external issues — including legal and regulatory obligations — and use them to drive an AI management system. What makes it strategically interesting right now is that it's the only AI standard cited in legislation as potentially providing safe harbor — Colorado's Artificial Intelligence Act references a risk management program built around ISO 42001 as evidence that can shield against alleged violations. The NIST AI RMF was the early mover but is less prescriptive, so ISO 42001 is winning enterprise adoption. The catch is that the standard sets up the system; the controls and the regulatory mapping still have to be built on top of it.

The unified control framework and what comes after the risk register 

Credo AI published a unified control framework paper in March that maps a risk taxonomy of about 15 risk types and 50 risk scenarios to a catalog of mitigating controls. Walter's view is that the conceptual approach is right and matches how StackAware structures its own programs. The first layer is generic, high-level AI risks like unintended training or undesired output. The second is system- or model-specific risks for known offerings — for example, free-tier ChatGPT's right to train on submitted data triggers an unintended-training risk. The third is customer-specific use-case risk, scoped to environment, jurisdiction, and intended use, which is where the actual control selection happens.

The implication for security leaders building programs is that GRC patterns from the last 15 years still translate — clear policy statements, mapped risk taxonomy, an inventory of controls with traceable mappings — but the control library has to be AI-specific. Importing the SOC 2 control catalog and calling the AI governance work done is not going to satisfy regulators or boards. 
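
As one hedged illustration of what a traceable mapping can look like, here is a hypothetical three-layer structure (generic risk, system-specific instantiation, customer use case) in Python; the risk IDs, systems, and control names are invented and are not StackAware's or Credo AI's actual catalog.

```python
# A hypothetical three-layer mapping: generic AI risk -> system-specific
# instantiation -> customer use case with its selected controls. IDs, system
# names, and control names are invented for illustration.
taxonomy = {
    "R-01": {
        "risk": "Unintended training on submitted data",          # layer 1
        "systems": {
            "chatgpt-free-tier": {                                 # layer 2
                "detail": "provider may train on prompts by default",
                "use_cases": {
                    "support-ticket-summarization": {              # layer 3
                        "jurisdiction": "US",
                        "data": "customer PII",
                        "controls": [
                            "CTL-07 enterprise tier with training opt-out",
                            "CTL-12 prompt redaction gateway",
                        ],
                    },
                },
            },
        },
    },
}

def controls_for(risk_id: str) -> list[str]:
    """Collect every control traceably mapped under a generic risk."""
    found: list[str] = []
    for system in taxonomy[risk_id]["systems"].values():
        for use_case in system["use_cases"].values():
            found.extend(use_case["controls"])
    return found

print(controls_for("R-01"))
```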

Healthcare AI and the false promise of "secure enough"

The healthcare segment of the conversation is where Walter's framing on risk got sharpest. There is no "secure enough" in a vacuum, just like there is no abstract vulnerability without context. Healthcare AI has serious data security, accuracy, and regulatory risk — but the US already spends more per capita on healthcare than any other country and produces some of the worst outcomes among developed economies. People are dying from the legacy system. The risk-reward calculus has to weigh both sides honestly. The Embold Health work StackAware did — an AI-powered chatbot that recommends physicians based on effectiveness ratings and insurance coverage, processing PHI — got the most rigorous AI risk assessment and pen testing because that's what the use case demanded. Less sensitive use cases get proportionally less.

Stuart's Cruise / Waymo aside is the pattern most security leaders need to internalize. Humans are objectively bad drivers. Cruise had one bad incident and got effectively dismantled. The standard the industry holds machines to is materially higher than the standard it holds humans to, and that asymmetry is structurally slowing AI adoption in domains where it would deliver real benefit. Walter's grounding of this in prospect theory is the punchline — humans value potential losses roughly twice as much as potential gains, even when that's irrational. The result is an AI governance debate that often runs hot with loss aversion and cold on opportunity. 
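
For readers who want the "roughly twice" figure grounded, here is a small illustrative sketch of Tversky and Kahneman's value function with their published 1992 parameters; it is a textbook example, not something worked through on the show.

```python
# Tversky and Kahneman's (1992) value function with their published
# parameters: gains are discounted (alpha < 1) and losses are scaled by
# lambda ~ 2.25, so an equal-sized loss weighs roughly twice as much.
ALPHA, LAMBDA = 0.88, 2.25

def subjective_value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

gain, loss = subjective_value(100), subjective_value(-100)
print(round(gain, 1), round(loss, 1), round(abs(loss) / gain, 2))  # 57.5 -129.5 2.25
```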

The death of the security questionnaire and the rise of continuous monitoring 

The vendor risk segment is where the episode got the most operational. Walter has been a long-time critic of security questionnaires, on the grounds that nobody reads the answers, and the AI era makes that worse. Vendors are turning on AI features, modifying terms of service, changing indemnification language, and adjusting training-data clauses on a cadence that makes annual review structurally inadequate.

We basically have people writing questionnaires, sending them to AI and then not reading them. So no one ever reads the answers really is what's happening in that case.

— Walter Haydock

The replacement is continuous monitoring — generative AI tools watching vendor terms in machine-readable form and surfacing material changes (new training rights, removed indemnifications, scope creep) in time to act on them. The more advanced version is AI-powered red teaming against vendor surfaces under explicit authorization, with responsible disclosure into bug bounty programs. Dropbox's practice of including vendors inside their bug bounty program is a working precedent that scales well with AI tooling.
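
Here is a minimal sketch of the first half of that idea, assuming you keep plain-text snapshots of each vendor's terms and diff them on a schedule; vendor names and URLs are placeholders, and deciding what counts as a material change is where a generative AI pass would slot in.

```python
# A minimal sketch of the monitoring loop: snapshot each vendor's terms and
# surface a diff when they change. Vendor names and URLs are placeholders;
# a generative AI pass would classify whether a change touches training
# rights, indemnification, or scope.
import difflib
import pathlib
import urllib.request

WATCHLIST = {
    "example-vendor": "https://example.com/legal/terms",
}
SNAPSHOT_DIR = pathlib.Path("tos_snapshots")
SNAPSHOT_DIR.mkdir(exist_ok=True)

def check(vendor: str, url: str) -> None:
    new_text = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "replace")
    snapshot = SNAPSHOT_DIR / f"{vendor}.txt"
    if snapshot.exists():
        old_text = snapshot.read_text()
        if old_text != new_text:
            diff = difflib.unified_diff(old_text.splitlines(), new_text.splitlines(),
                                        lineterm="", n=2)
            print(f"[CHANGE] {vendor}")
            print("\n".join(list(diff)[:40]))   # hand the full diff to a classifier
    snapshot.write_text(new_text)

for vendor, url in WATCHLIST.items():
    check(vendor, url)
```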

Three moves on shadow AI, and the technical-fluency mandate 

The shadow AI playbook from Walter is concise. First, employee training that explains the why — frame intellectual property risk in business language, not security jargon. Second, an approval process simple enough that compliance is the path of least resistance — pre-approve tools that meet specific risk criteria, reserve heavy review for genuinely sensitive use cases. Third, technical capability to detect, block, or redirect unauthorized tool use. Stuart's emphasis on the cultural side — that an employee told daily by the news that AI is the only path to job survival is going to use AI somehow — is the right reminder. Make the safe path the easy path or the easy path will be unsafe.
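
As one way to picture the third move, here is a toy detection pass over egress logs; the domains, approved list, and log format are invented, and a real deployment would sit in a secure web gateway or CASB policy rather than a script.

```python
# A toy detection pass over egress logs for the third move. Domains, the
# approved list, and the log format are invented; a real deployment would
# live in a secure web gateway or CASB policy, not a script.
APPROVED = {"chat.openai.com"}   # tools that cleared the approval process
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "approved enterprise tenant",
    "claude.ai": "redirect to the approved enterprise tenant",
    "gemini.google.com": "redirect to the approved enterprise tenant",
}

def triage(log_lines: list[str]) -> None:
    for line in log_lines:
        user, domain = line.split()   # e.g. "jdoe claude.ai"
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED:
            print(f"[SHADOW AI] {user} -> {domain}: {KNOWN_AI_DOMAINS[domain]}")

triage(["jdoe claude.ai", "asmith chat.openai.com"])
```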

The episode closed with a recurring theme from this season's recent guests. Walter's Monday morning advice for security and GRC leaders was to get technically into the weeds — when MCP started getting hyped, he built his own MCP server, integrated it into the StackAware platform, and only then felt he understood the security model. Conor noted that Jake Bernardes gave essentially the same advice the prior week. Two senior practitioners landing on the same point in consecutive episodes is signal: security leaders who haven't gotten hands-on with the AI primitives they're trying to govern are giving advice they don't fully understand.
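
For anyone taking that advice literally, the starting point looks roughly like the sketch below, assuming the official MCP Python SDK and its FastMCP helper; the tool and its data are invented, and the exercise is simply to see what a server exposes and over which transport.

```python
# Roughly the "hello world" of building your own MCP server, using the
# official Python SDK's FastMCP helper (pip install "mcp[cli]"). The tool
# and its data are invented; the point is seeing exactly what a server
# exposes and over which transport before you try to govern one.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("risk-register")

@mcp.tool()
def lookup_risk(risk_id: str) -> str:
    """Return the treatment decision recorded for a risk ID."""
    register = {"R-01": "mitigate: enterprise tier with training opt-out"}
    return register.get(risk_id, "unknown risk")

if __name__ == "__main__":
    # stdio transport: an MCP client spawns this process and speaks JSON-RPC
    # over stdin/stdout
    mcp.run(transport="stdio")
```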

 

Show notes

Guests — Walter Haydock, Founder and CEO of StackAware; previously at PTC (industrial automation, IoT cybersecurity), US Capitol Hill (Homeland Security Committee staff), National Counterterrorism Center (intelligence analyst), and US Marine Corps (reconnaissance and intelligence officer)

Books mentioned — none

Frameworks / models / tools named — ISO 42001 (AI management systems standard); NIST AI RMF; the Colorado Artificial Intelligence Act (cites ISO 42001 as potential safe harbor); California Transparency in Frontier AI Act (TFAIA); California AB 2013 (training-data transparency); California Office of Administrative Law ADS regulations; CCPA ADMT regulations; NYC Local Law 144 (automated employment decision tools); EU AI Act; the four risk treatments (mitigate, transfer, accept, avoid); Credo AI's Unified Control Framework paper (March 2025); the StackAware risk-stack model (high-level → system-specific → customer-specific); Anthropic Economic Index (referenced re: US-vs-rest-of-world AI adoption gap); MCP (Model Context Protocol)

Other people / shows / resources referenced — Embold Health (StackAware customer, AI-powered physician chatbot processing PHI); Disesdi Susanna Cox (prior Zero Signal guest, referenced re: the 25-dimensional attack surface of LLMs); Eric Schmidt (former Google CEO, CNBC quote on AI models being hackable); Sandy Dunn / SPLX (referenced as commercial AI red-teaming pioneer); Jake Bernardes (prior Zero Signal guest, prior week's "go build it yourself" advice); Daniel Kahneman (prospect theory referenced); Cruise / Waymo (autonomous-driving comparison); Chevrolet dealership chatbot incident (un-guardrailed GPT-4 deployment); Dropbox (bug bounty / vendor inclusion model); Apache Foundation / Log4Shell disclosure (Alibaba Cloud engineer reportedly punished after disclosure)

Hosted by Conor Sherman and Stuart Mitchell.
