What you'll learn
The F5 source-code theft is materially worse than previous breaches because cyber-reasoning agents can compress the time from disclosure to working exploit from weeks to hours.
The AWS US-East-1 outage exposed how badly fourth-party risk gets ignored — most "third-party" vendor reviews stop one layer too shallow to see real concentration risk.
The Darktrace CEO deepfake voicemail is the public proof point that voice-print authentication is dead, and it's the work of a CISO to make sure their org isn't defaulting to it.
Description
This episode is a tight news loop through three signals from one bad week — the F5 source-code theft, the AWS US-East-1 outage, and a deepfake voicemail dropped on the Darktrace CEO during a board meeting — plus a long segment on the FTC's Operation AI Comply and the OpenAI Atlas browser launch. Together they form an unusually clean argument that the right unit of measurement for a 2026 security program is no longer "did we prevent the breach" but "did we recover with integrity and at what cost."
The F5 incident matters more than the headline reads. Losing source code to a nation-state actor was always bad. The new variable is that cyber-reasoning agents — the same architecture demonstrated in the DARPA AI cyber challenge and made operational in tools like CVE-Genie — can now consume that source code, locate exploitable code paths, and produce working exploits at meaningful scale. Roughly 266,000 BIG-IP instances are reachable on the public internet, more than half in the US. The defender's window has compressed accordingly.
The AWS outage was the visible test. A DNS issue in US-East-1 took down 113 services and rippled into thousands of dependent vendors. Most companies discovered their fourth-party risk live, in production, on a Monday morning. The takeaway both hosts converge on is the same: SOC 2 is not enough, contractual clawbacks are not enough, and resilience needs to be measured the way uptime and cost already are. The closing segments on the FTC pursuing AI-washing claims and on the Darktrace deepfake voicemail extend the pattern — proof that AI governance and authentication need to be rethought from the boundary inward.
What we cover
"OpenAI's Atlas browser" — why the new agentic browser launch is a security story, not just a product story
"F5 source code in the wild" — what changes when adversaries have the blueprints and cyber-reasoning agents
"the AWS Monday" — DNS, US-East-1, 113 services, and the fourth-party risk every CISO just discovered
"resilience as a measured KPI" — the LTV-of-a-customer math that turns an outage into a real dollar figure
"FanDuel went down on Monday Night Football" — what a single outage costs when customer convenience is the moat
"Operation AI Comply" — the FTC enforcing truth-in-advertising on AI claims, and what AI-washing is going to cost
"Sam Altman is right about voice prints" — the Darktrace CEO deepfake call and what authentication has to look like next
"this is rubbish until proven real" — the cultural shift listeners have to demand from themselves and their teams
Thank you to our Sponsors:
Hampton North is the premier US based cybersecurity search firm. Start building your security team with Hampton North
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending
The conversation
F5: source code in adversary hands meets cyber-reasoning agents
The F5 source-code theft is a useful test of how the AI era changes the meaning of an old breach category. Source code falling into nation-state hands has happened before — the playbook used to be that defenders had weeks or months while attackers reverse-engineered, located vulnerabilities, and wrote exploits. That window has collapsed. Cyber-reasoning agents, trained on code at scale, can now ingest a leaked source tree and surface exploitable paths in hours. The CVE-Genie data point Conor keeps returning to is the right benchmark — 51% success rate at producing working exploits from a CVE description, at $2.71 of compute per exploit. Apply the same architecture to a leaked source tree and the math gets worse for defenders.
The exposed surface is the second order of the problem. Roughly 266,000 BIG-IP instances are reachable on the public internet, more than 50% in the US, sitting at the boundary between enterprise networks and the open internet. CrowdStrike's most recent telemetry has 81% of intrusions running malware-free and cloud intrusions up 136%. Mandiant's M-Trends is showing about 30% of breaches now starting from software exploits rather than phishing. The triangulation is consistent — defenders need to assume the breach window collapsed, prioritize containment and recovery over prevention, and stop budgeting against threat models built for a 2022 adversary timeline.
The AWS Monday and the fourth-party risk that nobody had measured
AWS US-East-1 had a DNS issue around 3 AM on October 20th. 113 services degraded. Six thousand AWS-direct outage reports. Thousands more for dependent vendors. A snow day for corporate adults. The story isn't the outage itself — DNS at hyperscaler scale fails occasionally. The story is how many companies discovered, mid-Monday, that their "third-party" vendor list was actually a list of pointers into a single concentrated infrastructure provider, and that no part of their procurement, vendor risk, or contractual review process had ever measured the depth of that concentration.
SOC 2 doesn't fix this. SOC 2 requires a vendor to pay lip service to resilience without exposing fourth-party dependencies. Contractual clawbacks don't fix this either — the canonical example in the episode is paying a $100K/month vendor for 10% downtime and getting back a $10K credit when the actual cost to the business of three days of outage is materially larger. Stuart's FanDuel-versus-DraftKings example is the one to internalize. Monday Night Football is the highest-bet game of the week. FanDuel was down. DraftKings wasn't. Customers don't return after they've found a working alternative. Customer acquisition cost is one of the largest line items in any consumer-facing P&L, and every customer lost in an outage carries a multiplier far higher than what shows up in the SLA credit math.
The actionable framing both hosts converge on: pick the LTV of a typical customer, measure conversion-rate suppression at 1, 3, 7, and 30 days post-outage, and back into a per-hour-of-degradation cost figure. Once the cost-of-fragility number is real, the resilience program funds itself. Until then it's an abstraction, and abstractions lose budget fights.
The Atlas browser, FTC enforcement, and the regulatory edge of AI
The OpenAI Atlas browser launch is the kind of product that splits security from the rest of the org. 800 million existing OpenAI users, an agentic browser that hoovers data into the model layer, and the same prompt injection / image manipulation / URL hijacking attack surface that Brave's research already documented in Perplexity's Comet. Security walks into the room with all the reasons not to default to it. Engineering walks in with all the reasons to lean in on innovation. The CISO's job is the unglamorous middle — clear-eyed risk management when the rest of the room wants the easy answer.
The FTC's Operation AI Comply is the regulatory backstop for the same dynamic. There is no "AI exemption" from truth-in-advertising. The Commission has filed multiple cases this year — most recently against air AI in August — targeting companies that market AI-powered capabilities they can't substantiate. For governance leaders, the implication is that AI governance now has to extend into marketing, sales, and investor relations, not just into model selection and access controls. AI-washing is now actionable conduct. The fines aren't yet large enough to deter the worst offenders, but the case law that's about to develop will define what "AI-powered" can legally mean in a contract, and that's where the lever lands.
The Darktrace deepfake call and the death of voice-print authentication
The deepfake voicemail dropped on the Darktrace CEO during her board meeting — closing-the-deal week, high pressure, lots of moving information — is the public proof point of a problem the security community has been warning about for two years. North American deepfake-enabled fraud is up roughly 1,740% from 2022 to 2024. Q1 2025 losses crossed $200M. Per Ironscales, 55% of organizations have experienced an AI voice-fraud attempt in the last year, with average losses of $280,000 per incident.
Sam Altman's frame is the right one to put in front of an executive team — voice prints as authentication are fully defeated. Any institution still using one is choosing a process that the model market has retired. The harder cultural problem is that the phone call has been the default "out of band" verification mechanism for fifteen years, and most organizations will revert to it under pressure unless they proactively replace it. The replacement isn't conceptually hard — pre-authenticated, biometric-bound, key-managed approval applications for the small set of decisions that genuinely need human-in-the-loop verification (wire transfers, sensitive data movement, executive impersonation-targeted operations). The hard part is doing the inventory, getting the process built, and rehearsing it before the crisis arrives. The deepfake era runs on crisis-management discipline, and that discipline doesn't materialize on demand.
The cultural shift: "this is rubbish until proven real"
The episode's most useful framing for non-technical audiences came from Stuart's closing observation. The historical default for media literacy has been "this is real until proven rubbish." The deepfake era inverts that — the default has to become "this is rubbish until proven real." That posture is exhausting at scale, but cryptographic signing of high-stakes content, biometric identity verification on sensitive transactions, and explicit protocols for verifying what you see and hear are the direction of travel. The companies and individuals who internalize this earliest will have the lowest incident rates. The ones who don't will keep paying tuition to the threat actors who have already moved.
Show notes
Guests — solo episode (Conor Sherman and Stuart Mitchell, hosts; no in-studio guest)
Books mentioned — none
Frameworks / models / tools named — F5 BIG-IP (target of the source-code breach); CVE-Genie (referenced as the proof point for cheap exploit generation); DARPA AI Cyber Challenge (referenced); cyber-reasoning engines; CrowdStrike Annual Report (81% malware-free intrusions, cloud intrusions +136%); Mandiant M-Trends (~30% of breaches from software exploits); OpenAI Atlas browser; Brave research on Perplexity's Comet browser (prompt injection / image manipulation / URL hijacking); FTC Operation AI Comply; Section 5 of the FTC Act; AWS US-East-1 (October 20 DNS outage); deepfake-enabled fraud statistics from Ironscales (55% organizations experienced an AI voice fraud attempt, $280K average loss); SOC 2 (called out as insufficient for fourth-party risk); fourth-party risk; "this is rubbish until proven real"
Other people / shows / resources referenced — Disesdi Susanna Cox (prior Zero Signal guest, quoted on the Atlas browser launch); Iman Ghanizada (prior Zero Signal guest, teased on incident management as the highest-leverage CISO discipline); Jason Rebholz (prior Zero Signal guest, referenced on deepfake risks); Sam Altman (quoted via AP — "AI has fully defeated" voice-print authentication); FanDuel and DraftKings (Monday Night Football outage example); Apple, Facebook (referenced on customer trust and convenience moats); air AI (FTC enforcement target — August action)
Hosted by Conor Sherman and Stuart Mitchell.