What you'll learn
Synthetic insider risk — agentic AI behaving like a compromised insider — is the predicted breakout incident category for 2026.
AI-driven patch diffing, supply chain compromise, and developer-endpoint blind spots are converging into a defender's incentive problem, not a tooling gap.
Cyber insurance is the most likely place "agentic negligence" gets priced before regulation gets there.
Description
Predictions season is mostly performance art. This episode is built differently. Conor and Stuart pulled together ten 2026 calls from six security leaders Zero Signal trusts — Madison Horn, Nicole Carignan, Greg Notch, Crystal Morin, Daniel Miessler, and James Berthoty — added their own commentary, and then closed with three predictions each for the year. The result is a working forecast for what to budget against and what to ignore.
The signal across the ten calls is consistent. Critical infrastructure is exposed in ways that depend on incentives, not technology. Agentic AI is going to behave like a compromised insider before it behaves like a malicious one. Supply chain compromise is going to outpace zero-day exploitation as a category, and the new attack surface lives in AI model hubs, prompt libraries, and plugin ecosystems. Patching is about to break under the speed of AI-driven exploit generation, and the defender's response — agentic patch pipelines like the CVE-Genie pattern — exists technically but is blocked on courage, not capability.
Stuart and Conor close with three calls each. Stuart's: 2026 is a perfect-storm year for high-profile incidents, distracted security teams will get hit hard, and new role categories will emerge faster than the talent pool can adapt. Conor's: agentic coding becomes the norm and forces software supply chain into the budget conversation; the first documented insider threat against a frontier AI lab makes major headlines; and tier-one cyber insurance carriers introduce AI-specific liability riders requiring formal AI governance committees as a precondition for coverage. Together, the ten plus six form the most opinionated picture of 2026 the show has put on record.
What we cover
"AI cascading failures across critical infrastructure" — Madison Horn's prediction on tightly coupled systems and poorly governed agents
"AI supply chain compromise overtakes zero days" — the cross-cutting theme almost every analyst flagged
"agentic AI as the next insider risk" — Nicole Carignan on pliable agents behaving like compromised insiders
"automated weaponization of day-one vulnerabilities" — Greg Notch on AI-driven patch diffing as the incentive flip
"the top 10 ways to get breached in 2026" — Crystal Morin on supply chain risk extending into model hubs and prompt libraries
"the haves and have-nots of security talent" — Daniel Miessler on the 100x engineer divide arriving inside security
"developer endpoints as the production boundary" — James Berthoty on MCP servers, open-source malware, and the new attack surface
"synthetic insider risk, agentic negligence, AI liability riders" — the hosts' headline calls for the year
Thank you to our Sponsors:
Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.
The conversation
Critical infrastructure, supply chain, and the incentive problem
Madison Horn's call on AI cascading failures across critical infrastructure isn't a prediction that this is the year a Colonial-Pipeline-scale event happens. It's a prediction that the architectural conditions for one are now present — tightly coupled systems with poorly governed AI agents whose trusted autonomous responses can propagate a single compromise into a cascade. Conor and Stuart agreed the headline event isn't likely to land in 2026, but the warning shots will. The Colonial Pipeline incident from a few years ago is the model: it took a material incident, not technology, to align incentives in the energy sector. The same pattern is queued up for the agentic-AI era of critical infrastructure.
The companion theme — AI supply chain compromise overtaking zero days — was the single most consistent prediction across the analysts surveyed. Crystal Morin's piece extended the surface beyond traditional libraries and APIs into AI model hubs, prompt libraries, and plugin ecosystems. The Shai-Hulud-style attacks already showed how a single compromised package ripples through thousands of organizations. Add model artifacts, pickle-format payloads, and unvetted MCP servers, and the surface area in 2026 is materially bigger than the security tooling ecosystem is currently calibrated to defend.
Synthetic insider risk: the predicted breakout incident category
Nicole Carignan's framing of agentic AI as the next big insider risk is the most useful single concept from the analyst roundup. The mental model that sticks is "synthetic insider" — an agent with legitimate access, no malicious intent, and pliable behavior that a skilled attacker can manipulate into doing things a compromised employee would do. The traditional insider taxonomy splits malicious from accidental; synthetic insider is a third bucket and probably the dominant one for 2026.
Conor's case for why this is the year it shows up in headlines rests not on novel technology but on adoption math. 2026 is when the early majority of enterprises move agentic AI into core business flows. The attack surface scales with the number of agents performing real work. A larger surface plus an established malicious-insider playbook is a predictable headline. Sandy Dunn's earlier Zero Signal episode on the architectural ungovernability of LLMs is a useful companion read here — the synthetic insider problem isn't solvable, only manageable.
Patching breaks under AI; defenders have the tools but lack the courage
Greg Notch's prediction on automated weaponization of day-one vulnerabilities is the cleanest framing of the incentive problem in security. AI-driven patch diffing collapses the time between disclosure and exploit availability. Defenders are not losing because they lack tools or talent — they're losing because the incentive structure favors attackers, and the defender's playbook hasn't moved.
The optimistic counterweight is the CVE-Genie pattern — the same scaffolding that lets an attacker generate an exploit for less than the cost of a cup of coffee can be turned around to let a defender programmatically validate exposure, generate a fix, and push a patch through a pipeline. The capability exists. The blocker is whether CISOs are willing to move vulnerability remediation from a 30/60/90-day human-mediated cadence to an agentic process measured in hours and days. 2026 is a bridge year for this — the early adopters will make the move, the early majority will pilot it, and the lagging half of the curve will get exploited.
Talent stratifies — the 100x security engineer arrives
Daniel Miessler's call on top security talent being in extreme demand was the call most directly aimed at hiring managers and individual contributors. The framing matters: the gap between the top tier and the second tier widens, and the bottom tier loses access to the kind of work the industry used to absorb at scale. Stuart's hiring read confirms it from the recruiter side — there's a cohort of professionals who were strong five to seven years ago and have not upskilled, and they're going to find 2026 hard.
The 100x engineer is the right mental model. The difference between the engineer who composes agents fluently and the engineer who still ships handwritten code linearly is no longer 10x — it's two orders of magnitude. The same shift is now landing inside security. The leaders who win the talent war will be the ones recruiting for judgment, taste, accountability, and AI fluency at the same time, and rebuilding their team's skill model around the assumption that the median individual contributor is augmented by a handful of agents.
Three calls from the hosts: synthetic insider, lab espionage, and the insurance lever
Stuart's headline call: 2026 is a perfect-storm year for high-profile incidents. Distracted teams chasing AI strategy while neglecting fundamentals, top operators leaving to start companies, post-COVID hiring chills leaving teams burnt out, agents and AI tooling getting deployed without governance, and adversaries with a generationally improved toolkit — together those conditions push the projected incident count past the levels of 2017 and 2021. He also predicts new role categories — head of preparedness, forward-deployed security engineer, AI cleanup specialist, physical-cyber crossover roles for the data-center build-out — emerging faster than the labor pool can adapt.
Conor's three: software supply chain becomes a material budget line because agentic coding becomes the norm; the first documented case of insider threat or industrial espionage targeting a frontier AI lab draws coverage in the New York Times and action from the Justice Department; and tier-one cyber insurance carriers introduce AI-specific liability riders or exclusions requiring formal AI governance committees as a precondition for coverage. The phrase to watch is "agentic negligence" — the insurance market is the fastest lever to price exogenous risk, and it's going to move on AI before regulators do in the US.
The closing thought is the optimistic one. Most of these calls describe downside, but the right posture for a security leader walking into 2026 is the courage to grow — more compute, more intelligence, better tooling, and a chance to do things differently for the first time in a decade.
Show notes
Guests — solo episode (Conor Sherman and Stuart Mitchell, hosts; no in-studio guest)
Books mentioned — none
Frameworks / models / tools named — synthetic insider risk; agentic negligence; CVE-Genie (referenced); CTEM (continuous threat exposure management); MCP servers; AI model hubs / prompt libraries / plugins as new supply chain surface; "the courage to grow" (Conor's Pillar 1); zero trust (NIST SP 800-207, 2020); the AI 2027 essay (referenced as source for AGI-by-2027 frame); "100x engineer"
Other people / shows / resources referenced — Madison Horn (national security and critical infrastructure advisor, World Wide Technology — prediction on AI cascading failures); Nicole Carignan (SVP of Security Operations, Darktrace — agentic AI as insider risk); Greg Notch (Chief Security Officer, Expel — automated weaponization of day-one vulns); Crystal Morin (senior cybersecurity strategist, Sysdig — Top 10 Ways to Get Breached in 2026 blog); Daniel Miessler (founder, Unsupervised Learning — 2026 predictions); James Berthoty (founder and CEO, Latio Technology — vendor-market predictions); Saad Ullah (CVE-Genie creator, prior Zero Signal guest); Sandy Dunn (CISO at SPLX, prior Zero Signal guest at Black Hat — LLM ungovernability); Clint Gibler (head of research at Semgrep, prior Zero Signal guest, author of TLDR Sec); Anthropic's November threat-actor report (referenced); Mike Privette / Return on Security (referenced re: M&A activity); Ross Haleliuk (referenced re: defense-industrial-base consolidation analogy)