This website uses cookies

Read our Privacy policy and Terms of use for more information.

What you'll learn

  • Apiiro's data on AI-augmented coding shows a 4x increase in commit velocity and a 10x increase in pull-request rejections — net productivity may actually be down once quality is held constant.

  • Brave's Comet research is the canonical demonstration of why agentic browsers shouldn't be deployed across the enterprise yet — prompt injection on a single visited page can take real action on the user's behalf.

  • Sysdig's 2025 Cloud Defense Report puts a number on the upside of AI done well — a 76% reduction in detection-to-response time when the integration is built thoughtfully on the right data.

Episode 

This solo Zero Signal goes after the September news cycle through a single lens — AI-augmented coding, agentic browsers, and the Cisco Shodan research showing exposed Ollama models all describe the same problem. The pace of AI deployment is outrunning the security fundamentals that should have been in place before the deployment started. The episode opens on the Apiiro research that gave the segment its title — agentic coding is producing 4x more commits while pull-request rejections climb 10x — and works through to the optimistic counterweight, the Sysdig 2025 Cloud Defense Report data showing that AI integrated well into the SOC compresses detection-to-response by 76%.

The throughline is the operating advice both hosts converged on for the year. "Putting fun back in fundamentals" is the operating principle for any CISO walking into 2026. The shiny-object cycle is a trap. The teams that build defendable organizations are the ones who get the boring stuff right at scale — secret detection in pull requests, paved-path automation, internet-exposed-asset discovery on the cadence the threat environment now demands. The episode closes with a generative discussion of the NIST SP 800-53 adaptation project that's pulling 3,000+ practitioners together to figure out which existing controls translate cleanly to agentic AI environments and which need new ones built from scratch. 

What we cover

  • "the September surge" — what hiring patterns at the start of fall tell you about where security spend is heading

  • "4x velocity, 10x rejected PRs" — the Apiiro data on what AI-augmented coding is actually shipping

  • "the four-letter scapegoat" — who actually owns security accountability when an agent commits the code

  • "context is king" — why architectural and privilege flaws are increasing while syntax bugs decline

  • "a 76% reduction in detection-to-response" — what the Sysdig 2025 Cloud Defense Report actually showed

  • "the agentic browser threat" — Brave's research on Comet and why the enterprise rollout has to wait

  • "Ollama models on the public internet" — Cisco Shodan research and why it shouldn't be a 2025 headline

  • "the NIST 800-53 adaptation project" — 3,000 practitioners adapting the control library for AI

Thank you to our Sponsors:

Hampton North is the premier US based cybersecurity search firm. Start building your security team with Hampton North

Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending here

The conversation

Apiiro's data — agentic coding is shipping more, but quality is dropping faster

The Apiiro research is the most useful single proof point on what AI-augmented coding is actually producing in real engineering organizations. Developers using AI assistants are generating 3-4x more commits than their peers. Pull request rejection rates against those commits are up roughly 10x. Net throughput in productive code is, by any reasonable measure, lower than the per-engineer velocity number suggests. The vulnerability classes that are increasing aren't the easy syntax bugs — those are actually trending down because the model handles them well. The increases are concentrated in privilege issues, architectural flaws, and shared-library logic. The harder failure modes are the ones the model can't reason about without context the engineer hasn't provided.

The CISO action that follows is not to ban AI coding. The teams that try are losing the politics fight before it starts. The action is to mandate the AI AppSec that has to ride alongside the AI coding. Every commit through generation-time guardrails. Pull-request-time scanning that's actually wired into the agentic flow, not a stale gate. Architectural review at the design stage rather than at the production-incident stage. The research closes the question of whether the basics still matter. They matter more, not less, when the volume of code in motion is multiplied by 4x and the rejection rate is multiplied by 10x.

The accountability question is the one most organizations haven't answered cleanly. When an AI assistant commits code that introduces a vulnerability, who is responsible? Conor's working answer: the engineer who submitted the pull request owns the code, the architect owns the systems-level decision, the CTO owns the function. If the chain isn't explicit, accountability lands at the CISO's feet by default — and that's the four-letter scapegoat outcome no security leader should accept. Get this written down in the engineering handbook before the first incident, not after.

Sysdig's 76% — what AI in the SOC actually delivers

The optimistic counterweight to the Apiiro data was the Sysdig 2025 Cloud Defense Report. The team measured a 76% reduction in detection-to-response time when AI was deployed in well-integrated, well-trained, use-case-specific fashion inside customer SOCs. That number is meaningful in a way the headline AI productivity statistics often aren't — it's a measured operational improvement against a specific defender workflow, not a survey of executive sentiment. The takeaway is that AI in the SOC works when the integration is thoughtful. It doesn't when the integration is "we bought a chatbot and bolted it onto the alert queue."

The pattern that produces the 76% is the pattern security teams should be replicating. Right data. Right use case. Right integration depth. Genuine reasoning over the telemetry, not pattern matching against keywords. Feedback loops that improve the system over time. The CISOs who get this combination right are seeing genuine productivity gains. The ones who buy AI features bolted onto legacy products are mostly disappointed and quietly walking the licenses back. 

Brave's Comet research — the agentic browser problem

The Brave team's published research on Perplexity's Comet agentic browser is the technical demonstration every CISO should send to their CTO before approving an enterprise rollout. The vulnerability is structural, not implementation-specific. The agentic browser reads a website's content, including comments, hidden text, and any payload an attacker decides to leave in a place the model will encounter. The browser cannot reliably distinguish between content the user wants summarized and instructions the page is trying to inject. So the malicious instructions get executed against the user's context — which now includes their email, their session cookies, and any tools the agent is wired to.

In the demonstration Brave published, the agentic browser obediently extracted the user's information from prior tabs and emailed it to the attacker — based on instructions hidden in a blog comment on a site the user visited. There is no current architectural answer that scales. The defender's response right now is to hold the line on enterprise rollout of agentic browsers until the prompt-injection problem is genuinely solved at the architecture level. This is not a "we'll figure it out as we go" technology in an enterprise context. The blast radius is too large and the controls are not yet credible.

Cisco's Shodan research — exposed Ollama models in 2025

The Cisco research that surfaced this week shouldn't be a 2025 headline, but it is. They used Shodan to identify Ollama model instances hosted in public cloud environments, exposed to the internet, with no authentication in front of them. This is the same shape of mistake the cloud security community has been documenting for a decade — engineers stand up infrastructure to do the experiment they were asked to do, get the functionality working, and forget the perimeter. The new variation is that the exposed asset is now a model serving endpoint, which means the exposure is both the classic cloud foothold and a fresh attack surface for LLM-jacking, prompt injection, and model exfiltration.

The CNAPP capability to detect this is in every major cloud security platform. The CISO action this week is to filter for "internet-exposed model serving endpoints" and have an answer for every hit by the end of the day. If the answer is "we didn't know," that's the program gap. If the answer is "we know and we accept the risk," that should be documented memo-to-file with the executive who accepted it. The unsolved version of this in 2025 is just unforced error.

The NIST 800-53 adaptation project — adapting the control library for AI

The closing segment focused on a project that's quietly important. NIST SP 800-53 is the foundational control library for federal and federal-adjacent security programs. The project Sandy Dunn surfaced is pulling roughly 3,000 practitioners together to adapt the existing 800-53 controls for the new AI infrastructure and workload classes — instead of reinventing the wheel, the work is to identify which controls translate cleanly, which need adaptation, and where genuine new controls have to be built. The project won't immediately become regulation. The NIST AI Risk Management Framework is the more likely near-term regulatory anchor. But the 800-53 adaptation will give security practitioners the operating-control vocabulary they need to actually implement governance programs against the new infrastructure, in the way 800-53 today is the foundation for FedRAMP, SOC 2, and most enterprise security programs.

The take-home for any security leader following along is to plug into the project, contribute domain experience, and start aligning internal control libraries to the emerging AI-specific control set. The first organizations to operationalize this work will be materially ahead of the regulatory curve. The ones that wait for the rules to fully form will be playing catch-up.

The throughline of the episode is one any CISO can take into their own program planning this week. The fundamentals matter more in the AI era, not less. The teams that win are the ones who get the boring work right at scale — secret detection, exposed-asset hygiene, paved-path engineering, and AI integrations that are designed deliberately rather than bolted on. The teams that lose are the ones chasing the shiny object and skipping the fundamentals because they assumed AI would compensate. It won't.

Show notes

Guests — solo episode (Conor Sherman and Stuart Mitchell, hosts; no in-studio guest)

Books mentioned — none

Frameworks / models / tools named — Apiiro research on AI-augmented coding (4x velocity, 10x PR rejections); Sysdig 2025 Cloud Defense Report (76% reduction in detection-to-response when AI is integrated thoughtfully); Brave research on Perplexity's Comet agentic browser (prompt injection demonstration); Cisco / Shodan research on exposed Ollama model endpoints; the NIST SP 800-53 adaptation project for AI controls; NIST AI Risk Management Framework (referenced as the more likely near-term regulatory anchor); CNAPP (cloud-native application protection platforms)

Other people / shows / resources referenced — Sandy Dunn, CISO at SPLX (referenced as the surfacing for the 800-53 adaptation project); Daniel Miessler (referenced re: context is king); Simon Sinek (referenced re: leadership and middle-of-the-road thinking); Coinbase CEO Brian Armstrong (referenced re: "every engineer needs an AI coding assistant" mandate); Lemonade CEO (referenced re: AI directives); MIT Sloan's 95% AI adoption failure stat (referenced as a frequently misread headline); Liquid Death (Conor's running plug); the Spirit Halloween / CVS seasonal-merchandising aside

Hosted by Conor Sherman and Stuart Mitchell.

Keep Reading