What you'll learn
The CVE-Genie paper put a real price on the new exploit-development curve — 51% hit rate at $2.77 per CVE — and that price flips the calculus on every defensive program built for a slower attacker.
OpenAI's hallucinations explainer reframes the problem as a training-incentive issue, not a data issue — and the right vendor question is no longer "how accurate" but "how calibrated."
The talent funnel into security is breaking at the entry level, and the AI-driven hiring cooldown is going to compound that into a real pipeline crisis if leaders don't redesign roles.
Episode
This solo Zero Signal pulls four interlocking threads from the week and uses them to set the agenda every CISO needs to be working in 2026. Gen Z is rethinking the four-year college path with confidence in the ROI slipping, while interest in skilled trades is climbing. Anthropic is throwing public weight behind California's SB53, the documentation-and-disclosure-focused successor to the more controversial SB1047. The Axios jobs revision wiped roughly a million jobs off last year's totals — for the first time since 2021, there are more unemployed workers than open roles. And on the technical side, OpenAI just published an explainer on why hallucinations happen, while a new arXiv paper on CVE-Genie demonstrated working exploit generation at $2.77 per CVE.
The throughline is that the unit economics of attack are collapsing while the unit economics of hiring defenders are tightening — at exactly the moment the talent pipeline is starting to question whether the four-year-degree path even makes sense. Stuart and Conor walk through the implications for how a security leader should be thinking about budget, headcount, role redesign, and vendor diligence in this environment.
What we cover
"the four-year-degree question" — Gen Z, skilled trades, and what to tell a 17-year-old who asks where to go
"Anthropic backs SB53" — what the bill actually requires and why the lab is leaning in
"the Axios jobs revision" — a million jobs off the books, more unemployed than open roles
"the CVE goes for $2.77" — what the CVE-Genie paper actually proves and what it doesn't
"hallucinations are a training-incentive problem" — OpenAI's explainer and the new vendor question to ask
"calibration over accuracy" — the metric that should replace "how accurate is your model"
"job hugging and the great stay" — what a tightening market does to security-team retention dynamics
"agentic vulnerability remediation" — the only credible defender response to a $2.77 exploit market
Thank you to our Sponsors:
Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.
The conversation
The four-year-degree question and the failing entry-level funnel
The opening segment goes after a hard question every parent of a high-schooler is now wrestling with: is the four-year college path still the right answer when the white-collar jobs that used to anchor it are getting compressed by AI? The honest answer from both hosts is that there is no clean one. The talent pipeline into security specifically is breaking at the entry level. Stuart's read from the recruiter's chair is that the cohort straight out of computer science programs is now competing for $80K roles that used to be a near-guaranteed six-figure landing — and that's before AI has hit the headcount math at scale. The boot-camp industrial complex muddied the waters further. The next-generation security workforce is currently being filtered out at the top of the funnel by a combination of degree cost, vanishing entry-level roles, and the loud message that the L1 SOC tier is going away.
The harder argument is what this does over five to ten years. If the industry continues to eliminate L1 work because agentic systems can do it cheaper, the L2 and L3 talent that was supposed to grow up out of L1 doesn't exist. The next generation of CISOs is currently being bottlenecked at the front door. Conor's reframe — get really outcomes-oriented about skills you build rather than pedigrees you accumulate — is the right north star for any 17-year-old asking. The modular-skills model, where a learner stacks specific certifications and project-based work into a credible portfolio over 12-24 months, is increasingly the better economic bet than a $160K four-year degree, especially in a market where the half-life of the relevant skill set keeps shrinking.
Anthropic backs SB53 — and why that signal matters
California's SB53 is the cleaner successor to last year's SB1047. Where SB1047 tried to force a kill-switch and capability cap on frontier models, SB53 focuses on documentation, red-team reporting, evaluation results, and security protocols. It's the paper-trail bill, not the shut-it-down bill. Anthropic's public endorsement is the interesting development. Frontier labs typically fight regulation. Anthropic leaning in suggests the lab views verifiable safety as a competitive advantage rather than a tax — and that the bar for what's expected from frontier providers is about to formalize in California, and likely cascade.
For CISOs, the second-order implication is what cascades next. If California sets the bar, large enterprises tend to align to the highest watermark across jurisdictions for operational simplicity. That means vendor diligence on AI providers is about to expand to include red-team dossiers, calibration data, and refusal-rate evidence. The AI procurement questionnaire just got longer. The CISOs who get out ahead of that and standardize what they're asking for now will save themselves a year of catch-up later.
The Axios jobs revision and what tightening does to security teams
The Axios reporting on the BLS revision — roughly a million jobs wiped off last year's totals, more unemployed workers than open roles for the first time since 2021 — is the macro context every CISO needs to internalize. CFOs are about to get tighter on every requisition, and security headcount is going to be on the same scrutiny list as every other function. The historical "engineers per security headcount" ratio is no longer a winning argument. The replacement framing is the one Conor walked through — tie every requested hire to a mission-critical business outcome, document the AI-leveraged process redesign that justifies the work the new headcount will do, and bring the throughput-per-analyst gains from co-pilots and agentic SOC tooling into the budget conversation as evidence.
The other dynamic worth watching is what Stuart flagged as "job hugging" — the great-stay pattern where employees hold onto current roles in a tight market rather than chase new ones. Security teams that retain their senior talent through this period are going to be operationally much stronger than those that don't. Compensation banding, role-design clarity, and visible investment in skill development matter more in this market than they did 18 months ago.
Hallucinations are a training-incentive problem, not a data problem
OpenAI's new explainer on why models hallucinate is the conceptual reset every governance program needs. The summary: the issue isn't bad data, it's that models are rewarded during training for confident answers — including confident wrong ones — and not for saying "I don't know." The fix is better training objectives, specifically incentive structures that reward calibration. A model that abstains when uncertain is more useful in an enterprise setting than a model that fabricates with confidence.
The vendor-diligence implication is the operating change. The right question to ask a frontier lab is no longer "how accurate is your model" — accuracy without calibration is structurally dangerous in agentic chains, where a confident wrong answer at step three propagates undetected through steps four through ten. The right question is "how calibrated is your model" — what's the abstention rate, what does the confidence distribution look like, what feedback loops surface uncertainty to the human-in-the-loop. CISOs who make this part of their AI vendor governance now are going to have materially better risk visibility than those who don't.
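The calibration-over-accuracy distinction can be made concrete. A minimal sketch of expected calibration error (ECE), a standard way to measure the gap between a model's stated confidence and its actual correctness — all numbers below are made up for illustration, not vendor data:

```python
# Illustrative sketch: why "how calibrated" beats "how accurate" as a vendor
# question. Confidences and outcomes below are invented for the example.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and actual accuracy,
    weighted by how many predictions land in each confidence bucket."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if bucket:
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Model A: 60% accurate, but claims 99% confidence on everything.
ece_a = expected_calibration_error([0.99] * 10, [1] * 6 + [0] * 4)
# Model B: also 60% accurate, but its confidence tracks its correctness.
ece_b = expected_calibration_error([0.75] * 4 + [0.5] * 6,
                                   [1, 1, 1, 0] + [1, 1, 1, 0, 0, 0])

print(f"Model A (overconfident): ECE = {ece_a:.2f}")
print(f"Model B (calibrated):    ECE = {ece_b:.2f}")
```

Both toy models score the same on a raw-accuracy question; only the calibration metric separates the one that fabricates with confidence from the one whose uncertainty is usable by a human-in-the-loop.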
The CVE-Genie paper — exploit development at $2.77 per CVE
The arXiv paper on CVE-Genie is the technical news of the week worth committing to memory. The framework is a multi-agent system that ingests a CVE entry, rebuilds a vulnerable environment, and reproduces a working exploit. The authors tested it on 841 CVEs from 2024-2025. It succeeded on 428 of them — a 51% hit rate. The cost per successful exploit reproduction was $2.77. The paper's important caveats — open-source code only, often requires existing proof-of-concept material — narrow the immediate applicability, but they don't blunt the directional signal.
The directional signal is that the unit economics of exploit development are now collapsing in exactly the way the unit economics of any AI-augmented workflow do. What took a skilled exploit developer a week of focused effort six months ago now costs less than a cup of coffee for a 50-50 shot at a working artifact. Layer that against last week's research showing AI-driven coding is producing a 4x increase in code velocity and a corresponding 10x increase in vulnerable pull requests, and the defender's blast radius is expanding faster than any traditional security program is structured to defend.
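The headline figures are easy to sanity-check. A quick back-of-envelope using only the numbers cited above (the aggregate spend is derived here, not stated in the paper summary):

```python
# Back-of-envelope on the CVE-Genie figures cited in the episode.
attempted = 841          # CVEs from 2024-2025 fed to the framework
succeeded = 428          # working exploit reproductions
cost_per_success = 2.77  # reported dollars per successful reproduction

hit_rate = succeeded / attempted
total_spend = succeeded * cost_per_success

print(f"hit rate: {hit_rate:.1%}")  # ~50.9%, the "51%" in the headline
print(f"spend for {succeeded} working exploits: ${total_spend:,.2f}")
```

The striking part is the second line: a four-figure budget buys hundreds of working exploit artifacts, which is the collapse in attacker unit economics the episode is describing.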
The defender's response can't be "patch faster" inside the existing operating model. The patching cadence the industry built around 30/60/90-day SLAs was designed for an attacker timeline measured in weeks. The new attacker timeline is measured in hours. The only credible response is agentic vulnerability remediation across the full lifecycle — IDE-time guardrails for engineers, CI-time scanning that's actually wired into the agentic coding pipeline, runtime protections in production that assume some vulnerable code is going to ship no matter what gating you put in place, and an automated patch pipeline that can keep pace with the cadence threat actors now operate on. None of that is a single-vendor purchase. All of it is the architectural work the next two years are going to demand.
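The cadence mismatch can be put in rough numbers. A toy exposure-window calculation — only the weeks-to-hours direction comes from the episode; the specific timelines below are assumptions for illustration:

```python
# Toy model of the exposure window: the time between a working exploit
# existing and the patch landing. All timelines are illustrative assumptions.
HOURS_PER_DAY = 24

def exposure_hours(weaponization_hours, patch_sla_days):
    """Hours of exposure if patching waits out the full SLA."""
    return max(0, patch_sla_days * HOURS_PER_DAY - weaponization_hours)

# Old world: exploit development takes ~2 weeks; 30-day patch SLA.
old = exposure_hours(weaponization_hours=14 * 24, patch_sla_days=30)
# New world: automated exploit generation in ~4 hours; same 30-day SLA.
new = exposure_hours(weaponization_hours=4, patch_sla_days=30)

print(f"old exposure: {old} h (~{old // 24} days)")
print(f"new exposure: {new} h (~{new // 24} days)")
```

Under these assumed numbers, the same 30-day SLA goes from roughly half the window exposed to nearly all of it, which is why the fix has to be cadence, not effort.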
The throughline of the episode is that the math is moving against defenders on three axes simultaneously — talent pipeline narrowing, hiring budget tightening, and attacker cost collapsing — at the same time as the AI providers themselves are starting to acknowledge structural problems (hallucinations, manipulation, calibration) that previously got hand-waved. The CISOs who reframe their teams around this reality, get aggressive about AI-leveraged process redesign, and demand calibration data from every vendor in their stack are going to be the ones who keep their organizations defendable. The ones who don't are going to find out the hard way what $2.77 exploits look like at scale.
Show notes
Guests — solo episode (Conor Sherman and Stuart Mitchell, hosts; no in-studio guest)
Books mentioned — none
Frameworks / models / tools named — California SB53 (the documentation-and-disclosure AI bill, successor to SB1047); Anthropic's endorsement of SB53; OpenAI hallucinations explainer (training-incentive framing); CVE-Genie (multi-agent exploit-reproduction framework, 51% hit rate at $2.77 per CVE, arXiv September 2025); the calibration-vs-accuracy framing for AI vendor diligence; agentic vulnerability remediation; "job hugging" / "the great stay"
Other people / shows / resources referenced — Axios reporting on the BLS jobs revision (~1M jobs wiped, more unemployed than open roles for the first time since 2021); PBS reporting on the surge in skilled-trades interest; Gadi Evron (credited with surfacing the CVE-Genie paper on LinkedIn); Liquid Death (Conor's running plug); Ruby Murphy (Stuart's Hampton North teammate, referenced re: "job hugging" terminology); Apraio (referenced as the source of the prior week's 4x velocity / 10x vulnerable PR data)
Hosted by Conor Sherman and Stuart Mitchell.