What you'll learn
The right opening question for any board-level AI risk conversation is "what's our nuclear meltdown?" — work backward from the worst possible day to define what actually deserves investment.
The HR application of AI is the highest-scrutiny area for regulators right now — and most security programs aren't treating it as the urgent compliance surface it has become.
The CISO's primary partnership for the AI era is the legal team — executives respond to litigation risk, not breach risk, and the security leader who frames risk in legal-exposure terms gets heard.
Description
Sandy Dunn is the Chief Information Security Officer at SPLX (the AI cybersecurity red teaming company), a 20-year cybersecurity veteran, the creator and project leader of the OWASP Top 10 for LLM Applications, the OWASP AI Security Governance Checklist, and the OWASP Gen AI Compass. She's also an adjunct professor at Boise State University and on the board of Agentech.org. This conversation is the most operationally grounded AI-governance material the show has aired, and it works through how a CISO should actually communicate AI risk to a board, where the regulatory pressure is landing first, and the architectural framing every security leader needs to adopt before the next budget cycle.
The opening segment lays out Sandy's framing for board-level risk communication. The CEO of a nuclear power plant, asked what their worst day looks like, will say "a nuclear meltdown." Every CISO should be able to answer the equivalent question for their organization with the same clarity. Work backward from there. What are the second-worst days? What investments meaningfully reduce the probability of those scenarios? The CISO who can frame the conversation that way moves from being the Officer of No to being the executive who's actually helping the business make risk-informed decisions.
The middle of the episode walks through the HR-AI compliance surface that most security programs are underinvesting in, the legal partnership as the highest-leverage relationship for any CISO right now, and the structural shift required to communicate AI risk in statistical and probabilistic terms rather than the binary yes/no answers that worked for traditional infrastructure. The closing segment goes after the trust-as-the-real-objective framing — security plus safety plus reliability is the equation that produces board-level credibility, and the CISOs who internalize that are the ones building durable executive influence.
What we cover
"what's our nuclear meltdown?" — the right opening question for board-level risk conversations
"adversarial first, HR second" — where the regulatory pressure is actually landing
"the statistical shift" — moving from binary yes/no risk answers to probabilistic communication
"the smokey-the-bear sign" — Sandy's framing for how risk should actually be communicated to executives
"telling them what they wanted to hear" — the structural CISO failure mode that needs to end
"the legal team is the first relationship" — executives respond to litigation, not to breach probability
"safety plus security plus reliability equals trust" — the equation that builds executive credibility
"the AI Threat Defense Compass" — Sandy's OWASP-released tooling for attack-surface threat analysis
Thank you to our Sponsors:
Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.
The conversation
What's our nuclear meltdown?
The most exportable opening framing in the episode is Sandy's nuclear-meltdown question. Walk into any nuclear power plant CEO's office and ask them what their worst day looks like, and the answer is immediate and unambiguous — a nuclear meltdown. The investment, the regulatory regime, the operational discipline, and the safety culture across the entire industry are organized around preventing that scenario and managing it if it happens. Every other security and safety practice flows from that anchoring question.
Most CISOs cannot give the same clear answer for their own organization. They can list a hundred risks. They can show a heat map of likelihood and impact. They can present the latest compliance posture. What they often can't do is articulate, in a single sentence, what the worst possible day looks like for their specific business, and then defend the investment portfolio against that scenario. The CISOs who can do this — work backward from the nuclear meltdown to the controls that prevent or mitigate it, and frame budget conversations in those terms — get heard at the board level. The ones who can't are competing for budget in a language the board doesn't speak.
The HR-AI compliance surface is the underinvested area
The most operationally surprising point Sandy made was that the highest-pressure regulatory surface for AI right now isn't the LLM chatbot or the agentic SOC tool — it's the HR application of AI. Workday and similar platforms have been embedding AI in hiring decisions, performance management, and personnel processes for years. Regulators at the city, state, and international level (NYC Local Law 144, Colorado's AI Act, the EU AI Act's high-risk classification for employment systems) are scrutinizing fairness, bias, and explainability in those systems harder than they're scrutinizing most other AI use cases. Most security programs are underinvested in this area because it doesn't feel like "their" problem — it feels like an HR or legal concern.
That framing is wrong. The compliance and security risk landing on HR-AI applications is real, the consequences include both regulatory action and class-action exposure, and the security team is the function with the cross-functional credibility to coordinate the response. The CISOs who get out ahead of this — partnering with HR and legal to define the AI-in-HR governance framework before the regulatory enforcement action lands — will save their organizations meaningful exposure. The ones who treat it as someone else's problem will get pulled in as the cleanup function after the first lawsuit hits.
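To make the audit surface concrete: NYC Local Law 144 requires bias audits of automated employment decision tools, and the core calculation is a selection-rate impact ratio checked against the EEOC's four-fifths rule. Here's a minimal sketch of that check — the group names and counts are hypothetical, not figures from the episode:

```python
# A minimal sketch of the four-fifths-rule impact-ratio check at the core of
# NYC Local Law 144 bias audits. Group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the automated tool advanced."""
    return selected / applicants

# Hypothetical outcomes from an AI resume-screening tool.
rates = {
    "group_a": selection_rate(120, 400),  # 30% advanced
    "group_b": selection_rate(45, 250),   # 18% advanced
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    # EEOC four-fifths rule: a ratio below 0.8 is evidence of adverse impact.
    status = "ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {status}")
```

An impact ratio below 0.8 doesn't prove discrimination, but it is the threshold that attracts regulatory and class-action attention — exactly the kind of number the legal partnership discussed later in the episode translates into exposure terms.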
From binary risk answers to statistical risk communication
The structural shift Sandy walked through — in how risk gets communicated — is the one most CISO programs haven't made yet. Traditional security programs answered risk questions in binary terms: have you tested for privilege escalation? Yes, on 70% of our applications; we found five instances and remediated three. That frame works for traditional infrastructure, where the attack surface is bounded.
For AI systems, the frame breaks. The attack surface is mathematically unbounded — you cannot test every possible adversarial input, you cannot verify every possible model behavior, you cannot demonstrate complete coverage. The honest answer is statistical. We are 95% confident based on this evidence that the system is behaving within expected parameters. The deeper truth Sandy surfaces is that traditional security was always statistical too — third-party assessments only sampled a fraction of the surface, static analysis only ran on a small percentage of the code, fourth- and fifth-party risk was rarely measured. We just pretended otherwise because the binary framing was convenient. The AI era forces honesty about the statistical nature of what security teams have always been doing.
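What the honest statistical answer looks like in practice: if you run n adversarial probes and all of them pass, you can't claim the system is safe — you can only bound the failure rate. A minimal sketch, with hypothetical numbers:

```python
# A minimal sketch of probabilistic risk reporting for adversarial AI testing.
# The sample size and result below are hypothetical.

def failure_rate_upper_bound(n_tests: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the true failure rate when n_tests
    adversarial probes all pass: solve (1 - p)**n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

n = 1_000  # hypothetical adversarial prompts run against the model, all passing
p_upper = failure_rate_upper_bound(n)
print(f"0 failures in {n:,} probes -> 95% confident the failure rate is below {p_upper:.2%}")

# The classic "rule of three" approximation gives essentially the same answer: 3 / n.
print(f"Rule-of-three approximation: {3 / n:.2%}")
```

Note what the sketch does and doesn't say: a thousand clean probes supports "we are 95% confident the failure rate is under about 0.3%" — never "the system is secure."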
The implication for board communication is that the CISO needs to learn to communicate uncertainty credibly. The Smokey Bear sign Sandy described is the right model — here's the likelihood of fire today, here's what we can do to reduce it, here's what the residual risk costs to live with. The executives can either accept the residual risk or invest to reduce it. The CISO's job is to give them the information in a form they can make a decision against, not to pretend the binary answer they used to give was real.
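The Smokey-Bear-sign framing reduces to simple expected-loss arithmetic. A minimal sketch — every probability and dollar figure below is hypothetical, not from the episode:

```python
# A minimal sketch of the Smokey-Bear-sign decision as expected-loss arithmetic.
# Every probability and dollar figure here is hypothetical.

def annualized_loss(probability_per_year: float, impact_dollars: float) -> float:
    """Annualized loss expectancy (ALE): likelihood times impact."""
    return probability_per_year * impact_dollars

impact = 20_000_000                       # hypothetical worst-day incident cost
current = annualized_loss(0.05, impact)   # today's "fire danger": 5% per year
residual = annualized_loss(0.01, impact)  # after a proposed control: 1% per year
control_cost = 500_000                    # annual cost of that control

print(f"Current expected annual loss:   ${current:,.0f}")
print(f"Residual expected annual loss:  ${residual:,.0f}")
print(f"Risk reduction vs control cost: ${current - residual:,.0f} vs ${control_cost:,}")
# The board decides: spend $500k to remove $800k of expected annual loss,
# or accept the residual. Either answer is legitimate once the numbers are explicit.
```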
Stop telling the board what they want to hear
The most candid moment in the episode is Sandy's admission about how the CISO function has historically operated.
I told them what they wanted to hear, not what I wanted to say
For years, the CISO would walk into the board meeting knowing the executives didn't want to hear bad news. The political incentive was to present a slightly-better-than-last-year narrative regardless of what the data actually showed. The result is a generation of CISOs who built careers on the green-green-green dashboard while the underlying risk surface kept expanding underneath them. The narrative was built for the executives' comfort, not for the decisions the executives actually needed to make.
The reset Sandy advocates for is the honest one. The CISO who walks into the board with "actually, here's the risk we're carrying, here's what's changed, here's what we recommend doing about it" — and presents it as a Smokey-Bear-sign decision the executives can act on — does the job better and earns the credibility that compounds. The CISO who keeps presenting the comfortable narrative is going to get caught when the underlying risk surfaces in a way that can't be papered over, and at that point the comfortable-narrative pattern becomes the career-ending one.
Don't make decisions based on fear
The corollary Sandy added is the right counter to the industry's overuse of fear-based marketing.
You should not be making decisions based on fear
The security industry built its early reputation by being the people who knew the things to be afraid of. The fear-based pitch worked to get attention and capture budget. It also produced a CISO function that the rest of the executive team learned to tune out, because the fear narrative was unrelenting and most of it didn't translate into business consequences the executives could act on. The reset is to communicate risk in terms of business impact, not in terms of how scary the threat sounds. The CISO who can put a credible dollar number on the residual risk and let the executives decide whether to invest against it is the CISO who gets invited back to the next board meeting.
The legal team is the first relationship
The most counterintuitive operational advice in the episode is on which executive relationship matters most. Sandy's argument is that the CISO's first and most important relationship is with the legal team — not with the CEO, not with the CTO, not with the board. The reason is straightforward: executives are not actually motivated by breach risk in isolation. They're motivated by litigation risk. The legal team is the function that translates breach risk into litigation exposure in terms the rest of the executive team understands. The CISO who has a strong working relationship with general counsel can frame every security investment in terms of what it does to the company's legal risk, and that framing carries more weight than any breach-probability presentation.
The HR-AI compliance surface is one example of where this lands. The class-action exposure from biased hiring decisions made by AI systems is a legal-team problem first and a security-team problem second. The CISO who walks the legal team through the technical reality of AI bias and helps general counsel quantify the litigation exposure becomes the indispensable partner in defending the company. The CISO who just runs an OWASP scan and files a report doesn't.
Safety plus security plus reliability equals trust
The closing conceptual frame Sandy and Conor converged on is the equation for what the next-generation CISO is actually building toward. Security is whether the system is free of adversarial compromise. Safety is whether the system operates within its intended boundaries when no adversary is present. Reliability is whether the system stays available and keeps operating when an adversary is present. The combination of all three produces trust — and trust is the actual deliverable of the CISO function in the AI era.
The Levi's CISO jeans story Conor referenced — the CISO who reframed the security program around "how does my work help us sell more jeans?" — is the right operating posture. Levi's is, by traditional metrics, a boring company. It is also a company where the security program that aligned itself with business outcomes outperformed the one that aligned itself with frameworks. The CISOs who can answer the equivalent question for their own businesses — how does my security program help us achieve the thing this company actually exists to do — are the ones building durable executive influence. The ones who can't are still presenting their NIST CSF maturity to a board that stopped reading slides three quarters ago.
The closing OWASP work Sandy is bringing to market — the AI Threat Defense Compass, the OWASP Gen AI Compass, the AI Security Governance Checklist — is the practitioner toolkit that should be sitting on every CISO's desk right now. The frameworks won't solve the AI risk problem on their own, but they give the CISO a structured way to map their own organization's threat surface, communicate it credibly, and have the right conversations with legal, with HR, with engineering, and with the board. The CISOs who pick up these tools and use them will be materially ahead of the curve. The ones who wait for the regulatory enforcement action to force the conversation will be playing catch-up.
Show notes
Guest — Sandy Dunn, Chief Information Security Officer at SPLX (AI cybersecurity red teaming); 20-year cybersecurity veteran; creator and project leader of the OWASP Top 10 for LLM Applications; project lead for the OWASP AI Security Governance Checklist and the OWASP Gen AI Compass; adjunct professor at Boise State University; board member at Agentech.org
Books mentioned — Thinking, Fast and Slow by Daniel Kahneman (referenced re: System 1 / System 2 thinking and human rationality), along with broader references to the cognitive-behavioral and behavioral-economics literature
Frameworks / models / tools named — OWASP Top 10 for LLM Applications (Sandy as creator/project lead); OWASP AI Security Governance Checklist; OWASP Gen AI Compass; AI Threat Defense Compass (Sandy's forthcoming OWASP release with Attack Surface Threat Analysis); the "what's our nuclear meltdown" board-communication frame; the Smokey Bear sign for risk communication; the safety-plus-security-plus-reliability-equals-trust equation; OODA loop (referenced); System 1 vs System 2 thinking
Other people / shows / resources referenced — the original AI paper from 1940 (referenced as the historical origin of the field); CrowdStrike (referenced re: the unpredictable big-impact event); Daniel Kahneman (Thinking, Fast and Slow); Bruce Schneier (referenced); Daniel Woods, the UK insurance researcher (referenced re: insurance data showing many traditional security recommendations are not cost-effective); the Levi's CISO jeans article (referenced as the right framing for what a security program is for); Workday (the canonical HR AI platform under regulatory scrutiny); Anthropic's Claude (referenced as self-sacrificing in the prisoner's-dilemma study); Google's Gemini (referenced as more mercenary in the same study); the opiate-crisis manufacturer pattern (referenced as the human-actor downside that makes AI ethics relatively defensible); SPLX (Sandy's company, the AI cybersecurity red teaming firm)
Hosted by Conor Sherman and Stuart Mitchell.