What you'll learn
The adversarial attack space against any AI model is potentially 25-dimensional — which means complete defensive coverage is mathematically not possible, only systematic risk management is.
Purple teaming for AI is structurally different from traditional red/blue teaming — it requires the security and data teams operating as a single function, with MLSecOps maturity matching MLOps maturity.
Aerospace-style design assurance cases (DO-178B/C) are the right model for assigning AI risk thresholds and engineering rigor — and the industry needs to adopt them before the first life-critical AI failure.
Description
Disesdi Susanna Cox is an AI architect, patent holder, consulting security researcher, and one of the principal voices behind the OWASP AI Exchange — the framework that's been adopted by organizations globally to think through AI security. This conversation is the most technically grounded AI-security conversation Zero Signal has aired this season, and it works through three threads: the mathematics of why AI security is structurally different from anything that came before, the organizational and team-design implications of that difference, and the engineering disciplines (purple teaming, MLSecOps, design assurance cases, failure modes and effects analysis) that should already be in place at any organization deploying AI in production.
The opening segment introduces the subspace framing every CISO needs to internalize. Every AI model — predictive, generative, classical ML — has attached to it a subspace of possible adversarial attacks. That subspace is potentially 25-dimensional. Complete defensive coverage is not mathematically achievable. The work is to manage the risk inside an unbounded attack space, which requires the same kind of engineering discipline aerospace built around DO-178B/C — design assurance cases, failure modes and effects analysis, and an explicit acceptance that 99.999% accuracy is not enough when human lives are involved.
The middle of the episode pivots to the team-design implications. Disesdi's argument is that purple teaming for AI requires uniting the security team and the data team — not as collaborators but as a single function. MLSecOps maturity has to match MLOps maturity, and the organizations that have one without the other are not going to be defensible at scale. The closing segments work through the case for a Chief AI Security Officer role, the dangers of LLM-based therapy use cases, the difference between predictive and generative AI from a security standpoint, and the OWASP AI Exchange as the practitioner's starting point.
What we cover
"the 25-dimensional attack subspace" — the mathematics of why complete defensive coverage isn't achievable
"data is the new attack vector, not the new oil" — the reframe every CISO needs to make
"purple teaming as the new structural answer" — uniting the security team and the data team
"three steps to MLSecOps" — data flows, data/model provenance, data governance
"the case for a Chief AI Security Officer" — and why it's not a Chief Data Officer reshuffle
"design assurance cases" — borrowing the DO-178B/C aerospace pattern for AI risk thresholds
"failure modes and effects analysis" — the discipline that focuses red teaming and tightens scope
"predictive AI is what runs nuclear and military" — and what we miss by over-indexing on LLMs
Thank you to our Sponsors:
Hampton North is the premier US based cybersecurity search firm. Start building your security team with Hampton North
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending
The conversation
The 25-dimensional attack subspace
The conceptual framing Disesdi opens with is the most important takeaway from the episode. Every AI model — predictive AI, classical deep learning neural networks, generative LLMs — has attached to it a subspace of adversarial attacks that exist somewhere in the wild and will be effective against that specific model. That subspace is enormous. The mathematical evidence suggests it's potentially 25-dimensional. Trying to visualize a 25-dimensional space is a useful intellectual exercise; trying to defend it completely is not.
The implication is that the traditional security model — find the vulnerabilities, patch them, assert coverage — does not apply. The attack space is unbounded for practical purposes. The work shifts from "achieve complete coverage" to "manage risk inside an unbounded attack space," which requires a different engineering posture and a different organizational structure than most security teams currently have.
You're never going to be able to apply guardrails for all the bad things that can happen. You're just not going to find them all
The corollary is that the LLM red teaming hype cycle — where someone discovers they can convince ChatGPT to say something it shouldn't and posts the screenshot — is mostly noise. Real adversarial security work in this space requires mathematical discipline and systematic exploration of failure modes, not casual prompt fiddling. The "AI gambling addict" framing Disesdi uses for casual red teamers is a useful corrective. If you're not approaching the testing with professional rigor, you're not securing the system; you're just collecting dopamine hits when you find a hole.
Data is the new attack vector
The data-centric reframe Disesdi offered is the single most useful sentence for any CISO communicating AI security risk to a board.
Data has gone from being the new oil to being a new attack vector
For two decades the industry has talked about data as an asset — the thing organizations accumulate, refine, and monetize. That framing still holds for value capture. But for security purposes, data is now also the principal attack surface. Adversarial inputs, training-data poisoning, prompt injection through document content, hidden text in images that the model decomposes differently than a human reads — every one of these uses data as the vector. The CISO who walks into 2026 still treating data security as a privacy-and-DLP problem rather than as an attack-surface problem is going to be defending against last year's threat model.
Purple teaming as the new structural answer
The team-design implication of the data-as-attack-vector reframe is the structural shift Disesdi advocates for. Traditional purple teaming bridges red and blue — offense and defense — inside the security organization. AI security purple teaming has to bridge the security team and the data team. The data scientists, ML engineers, and data engineers who own the data pipelines and model training are operating in a domain the security team historically hasn't owned, and the data-quality and provenance work that determines model behavior is now a security control surface, not just an operational concern.
The three-step start Disesdi recommends for any CISO beginning this work is the operating answer. First, know your data flows. Second, know your data and model provenance — the latter is increasingly important because organizations now use third-party models whose training data and behavioral history they don't directly control. Third, know your data governance. With those three in place, the OWASP AI Exchange gives a structured way to map controls onto data flow diagrams and process flow diagrams, which gets a security program moving faster than most of the industry. The security leaders who do this work this quarter are well ahead of where the field currently is.
Design assurance cases — the aerospace borrow
The most exportable engineering framework in the episode is the DO-178B/C design-assurance-case borrow from aerospace. Aerospace software discipline is calibrated against the consequence of failure. The higher the potential impact — loss of life, loss of mission — the more rigorous the development, documentation, and testing requirements. Low-impact systems get less ceremony. High-impact systems get formal proof obligations.
The same framing applies cleanly to AI. Disesdi's argument is that the industry needs to define risk thresholds for AI systems and assign development discipline accordingly. A chatbot recommending books at low confidence is not a high-risk system. An AI agent making lending decisions or triaging medical alerts is. The development practices, testing rigor, and governance overhead should match the consequence. The 99.999% accuracy claim that sounds impressive in a marketing deck does not pass the aerospace bar — at airline scale that's thousands of failures, which is exactly why generative AI is not embedded in flight-critical systems today.
The companion practice Disesdi keeps preaching is failure modes and effects analysis (FMEA), adapted for software. Before testing, identify the failure modes. Categorize them by effect. Prioritize the testing against the highest-consequence failures rather than running infinite tests in an infinite attack space. The result is more focused red teaming, better-targeted defensive investment, and a security program that can credibly answer "what could go wrong" with specifics rather than handwaves.
The Chief AI Security Officer role
Disesdi's case for a dedicated Chief AI Security Officer is the most thoughtful version of that argument the show has aired. The objection isn't that current security leaders are incapable. It's that the workload of running an enterprise security program — perimeter, identity, compliance, incident response — already saturates the executive bandwidth, and bolting AI security on as "another thing for the CISO to learn" produces underinvestment in both. A dedicated CAISO with the authority and bandwidth to focus on AI-specific risk would be a structural improvement over the current pattern, especially in larger organizations where the AI deployment scope is meaningful.
The reporting structure question is secondary. The role works whether it sits as a direct report to the CEO, peers with the CISO, or sits inside the security organization. What matters is that the AI risk surface gets dedicated executive attention before the first major incident, not after.
Predictive AI is what runs the mission-critical systems
The under-discussed point Disesdi surfaces in the closing segment is that the security industry's intense focus on LLM red teaming is potentially leaving the more consequential AI systems undefended. Generative AI has the headlines because it talks back. Predictive AI — classical ML, classifiers, deep neural networks for prediction tasks — runs everything actually mission-critical. Nuclear systems, military targeting, financial-market predictive modeling, alert-triage classifiers in security itself. These systems are already in production, already widely deployed, and already attackable through adversarial-example techniques that have been publicly known since 2014.
The community focus on LLM red teaming is producing a generation of AI security researchers who have never tested a predictive model. The Disesdi argument is that we need both — and the predictive AI side is currently underinvested in research, tooling, and organizational attention. The CISOs who have predictive AI deployed (which is most of them, even if they don't know it — alert-triage classifiers, fraud detection models, customer-segmentation models all qualify) need to make sure those systems are getting adversarial testing too.
The closing thought to land on is the engineering humility piece. AI is software. It's math. It's old technology in new clothes for many of the practitioners who've been working on it for a decade. The transformer paper is from 2017. AI security work is from before that. Treating any of this as magic — therapy use cases, autonomous decision-making, super-intelligence speculation — is what gets people hurt. Treating it as engineering, with engineering discipline borrowed from older industries that figured out how to operate at high stakes, is what produces defendable systems.
Show notes
Guests — Disesdi Susanna Cox, AI architect and patent holder; consulting security researcher; principal contributor to the OWASP AI Exchange
Books mentioned — none
Frameworks / models / tools named — the 25-dimensional adversarial attack subspace framing; OWASP AI Exchange; OWASP Top 10 for LLMs; MLSecOps (Disesdi's 2022 paper called "the first MLSecOps paper" by some); MLOps; the three-step CISO starter (data flows, data and model provenance, data governance); design assurance cases (borrowed from aerospace DO-178B/C); failure modes and effects analysis (FMEA, adapted for software); the predictive-AI vs generative-AI distinction; Chief AI Security Officer (CAISO) as a proposed role; the OWASP AI Exchange threat-modeling overlay onto data flow diagrams
Other people / shows / resources referenced — Simon Sinek (referenced re: outsourcing thinking and the wedding-vows analogy); Yuval Noah Harari (referenced re: AI bureaucrat predictions and bank-loan decision automation); Anthropic CEO Dario Amodei (referenced re: people using AI for therapy); Reddit and 4chan (the actual training data substrate, not "Shakespeare and Socrates"); the EU's regulatory posture on automated decision-making (referenced as legally restrictive on AI bureaucrat use cases); the radioactive children's chemistry sets of the 1950s (Disesdi's analogy for under-regulated dangerous tech); the Homer Simpson nuclear-baton analogy (Conor's framing for the current AI moment)
Hosted by Conor Sherman and Stuart Mitchell.