
What you'll learn

  • Agentic coding has already happened — the security opportunity isn't to slow it down, it's to embed at the moment the code is generated.

  • Generation isn't the bottleneck — the pipeline after the code gets written is, and that's where security debt is now compounding fastest.

  • Security awareness training for engineers may finally land via Cursor- and Claude-style rule files, not via slide decks.

Description

Most security leaders are still relating to agentic coding the way they related to cloud in 2014 — as something to negotiate, slow down, or carve exceptions around. That window has closed. Adam Arellano — former VP of Cybersecurity at PayPal, now Field CTO at Harness — came on Zero Signal to argue that the security teams who'll matter over the next five years are the ones who pick up the agentic-coding shift and turn it into a way to make engineering both faster and safer at the same time. The teams that try to gate it are losing already.

The conversation goes after the practical version of that thesis. In a recent industry sample, 60% of code is now AI-generated, only 18% of those organizations have policies on AI use, and a large majority are shipping code without knowing whether it's vulnerable. The instinctive security response — slow down, mandate review, add gates — is exactly the wrong move because it pretends the problem is generation. The actual problem is everything that happens after the code gets written. The pipeline that was already slow, overworked, and back-logged the day before Claude Code launched is now going to drown in volume.

Adam's frame is the Toyota-vs-GM analogy: American factories used to put fixers at the end of the assembly line. Toyota fixed the process at the point of interaction. Most security teams are still the fixers — finding problems weeks after the engineer who wrote the code has moved on and forgotten the context. The teams that win move security to the moment of generation. With agentic coding tools that read rule files at every prompt, that's now actually possible — the security awareness training that never worked in slide form might finally work as a markdown file the model reads on every turn. 

What we cover

  • "the train has left the station" — why fighting agentic coding is the wrong battle for security to pick

  • "23% of a developer's time is actually writing code" — what changes when you fix the other 77% of the pipeline

  • "the opportunity for security to make engineering faster" — the only credible posture left for app sec

  • "the Toyota assembly line" — fix the process at the point of interaction, not at the end

  • "chaos monkey for code" — resilience engineering as the modern alternative to deterministic testing

  • "the markdown rule file as security training" — Daniel Miessler's prediction that this is the year it works

  • "the gates that were guarded by technical skill are now wider" — what generative AI does to who can build

  • "resilience isn't never failing" — redundancy and recovery as the real definition of secure systems 

Thank you to our Sponsors:

Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.

Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.

The conversation

The window to slow agentic coding has already closed

Adam was direct on the framing question. The opportunity to slow this down or stop it is long past. Anybody who could see the promise of what generative coding could do knew it was inevitable, and the founders, VCs, and engineers driving it were never going to wait for security's blessing. The only available move for a security team is to figure out how to help engineering go faster and safer simultaneously.

To win at security, you've got to do it through engineering. There's no other way to go.

— Adam Arellano

The corollary is the math. Adam's previous environment had roughly 12,000 engineers and 400 security people. There's no version of that ratio where the 400 catch up to the 12,000 by adding gates. The only way the 400 stay relevant is by making the 12,000 better at security at the moment they're making decisions. That requires security teams to actually understand how engineering works — not in the abstract, but with their actual people sitting next to engineering's actual people.

Generation isn't the bottleneck — the pipeline is

The deeper insight, and the one Harness was built around, is that everything that happens after the code is written is broken. Pipelines to production were slow, overworked, and back-logged before Claude Code shipped. With agentic coding generating volume an order of magnitude faster, the pipeline becomes the constraint that destroys the value of all that generation.

Adam quoted a number worth keeping: roughly 23% of a developer's time is actually spent writing code. The other 77% is everything around it — review, integration, testing, deployment, debugging, communication. AI accelerated only the first 23%. If security and engineering don't fix the rest of the pipeline, the new bottleneck just moves downstream and a flood of code that nobody can review fast enough becomes the security problem.
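The 23%/77% split has a standard formalization in Amdahl's law. A minimal sketch of the arithmetic (our illustration, not a calculation from the episode) shows why accelerating only generation caps the overall gain at roughly 1.3x, no matter how fast the models get:

```python
def overall_speedup(accelerated_fraction: float, factor: float) -> float:
    """Amdahl's law: total speedup when only `accelerated_fraction`
    of the work is made `factor` times faster."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)


if __name__ == "__main__":
    p = 0.23  # share of developer time actually spent writing code
    for factor in (2.0, 10.0, float("inf")):
        # Even infinitely fast generation yields only 1/0.77 ≈ 1.30x overall.
        print(f"{factor}x faster generation -> "
              f"{overall_speedup(p, factor):.2f}x overall")
```

The takeaway matches Adam's point: the remaining 77% — review, integration, testing, deployment — is where the real leverage sits.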

Toyota at the assembly line, not GM at the end of it

The most useful operational frame in the conversation is the assembly line analogy. American auto plants in the 1980s staffed fixers at the end of the line — people whose job was to repair whatever the line had missed, as long as the line never stopped. Toyota upended the model by fixing the process at the point of interaction, where the worker actually was, the moment something went wrong. The American producers were always two weeks behind the defect. Toyota was inside the moment the defect was created.

Most security teams are still the fixers. Code commits, ships, runs in production, and then security catches something and brings it back to the engineer who wrote it. By that point, the engineer has moved on, doesn't remember why they made the choice, and treats the ticket as overhead. The teams that get this right move feedback loops to within minutes of the commit. Engineering leaders love that posture because it makes everything more efficient. Security leaders should love it because it's the only thing that scales.

The markdown rule file is the new security awareness training

The most concrete near-term win Adam and Conor landed on is the rule-file pattern. Cursor, Claude Code, and the rest of the agentic coding tools read markdown rule files on every prompt. The model is told what cross-site scripting looks like, why secrets shouldn't be in code, what authentication patterns are acceptable, and which library categories are common hallucinations. Every code generation runs through that context.
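Concretely, such a rule file might look like the sketch below. The headings and rules are illustrative, not taken from the episode, and the exact file location varies by tool (Cursor reads project rules from `.cursor/rules/`, Claude Code reads `CLAUDE.md`):

```markdown
# Security rules (illustrative sketch — adapt to your stack)

## Secrets
- Never hardcode API keys, tokens, or passwords; read them from the
  environment or a secrets manager.

## Output encoding (XSS)
- Treat all user input as untrusted; encode output for its context
  (HTML body, attribute, JavaScript) instead of concatenating strings
  into markup.

## Authentication
- Use the project's existing auth middleware; do not hand-roll session
  or token handling.

## Dependencies
- Only import libraries already present in the project manifest; flag
  any new dependency for human review (hallucinated package names are
  a known supply-chain risk).
```

Because the tool injects this file into context on every prompt, the rules apply at generation time rather than in a review weeks later.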

Daniel Miessler's 2026 prediction was that this is the year security awareness training for engineers finally works. The reason isn't pedagogy. It's that the security knowledge isn't being trained into a human anymore — it's being injected into the model at the moment of generation. There's a fast-growing set of public security rule repositories that any team can adopt. For a CISO whose AppSec program has been measured by training completion rates that nobody believed in, this is a genuinely better mechanism. Generation-time guardrails get applied 100% of the time. Slide decks did not. 

Resilience, not zero failure

The closing frame extends the engineering parallel into aviation. The interesting thing about aviation safety isn't that planes never fail. It's that the engineering discipline accepts failure as inevitable and designs redundancy three or four layers deep so that when something breaks, the system continues. The default failure mode of an airplane is falling out of the sky, so the engineering bar is correspondingly higher than for cars — but even there, the goal isn't zero failure. It's recoverable failure.

Resilience isn't never failing. Resilience is being able to absorb failure and continue on.

— Adam Arellano

The implication for AI systems and the agentic coding pipeline is direct. Generative systems are probabilistic, not deterministic. You will not test your way to zero defects. The right architecture is layered resilience — chaos-monkey-style adversarial testing inserted early, redundant controls inserted along the pipeline, and a recovery posture that absorbs failures and continues without crashing the whole system. That's the bar to be designing toward. "Never fails" was never the right goal, and it's certainly not the right goal for systems built on probabilistic models. 

Show notes

Guests — Adam Arellano, Field CTO at Harness; previously VP of Cybersecurity at PayPal

Books mentioned — none

Frameworks / models / tools named — Claude Code; Harness "resilience engineering" / "chaos monkey"; Cursor (referenced via rule-file pattern); markdown rule files for agentic coding; Toyota vs. GM assembly-line frame; "task creation, not task replacement" (referenced); the World Economic Forum Davos Anthropic interview with Dario Amodei

Other people / shows / resources referenced — Dario Amodei (Anthropic CEO, WEF Davos quote on engineers no longer writing code); Daniel Miessler (2026 prediction on engineer security training via rule files); Mike Lyons (CISO at Cribl, mentioned by Stuart as a future guest); Central Piedmont Community College (Charlotte, NC — Adam serves on AI/coding/gaming-creation advisory board); Toyota / GM / Chevy 1980s manufacturing partnership (referenced); Cruise (autonomous-driving comparison); Anthropic (referenced as Claude Code maker); Hampton North RSA golf morning at TPC Harding Park; Damien from Sysdig (RSA happy hour reference)

Hosted by Conor Sherman and Stuart Mitchell.
