What you'll learn.
AI agents inherit the credentials, privileges, and blast radius of the employees who deploy them — every over-permissioned account in the company is now also an over-permissioned junior worker.
Three-year security roadmaps lost their grip the moment frontier model cycles compressed to weeks; vendor partnerships and contract terms have to compress with them.
Trust in an AI-embedded business rests on transparency at every layer — input/output, agent orchestration, model, RAG, infrastructure — not on a single prompt-injection check at the front door.
Description
Ashish Rajan — founder of TechRiot.io and host of the Cloud Security Podcast and AI Security Podcast — has been advising CISOs and CTOs through the AI rebuild. His framing for the moment, and the title of this episode, is the cleanest on the table: enterprises have not just deployed AI, they have unleashed junior employees with root access. The agent's privileges are the deploying employee's privileges. Every dormant identity, every over-broad role, every tolerated permission is now a runnable surface.
The conversation tracks what changes downstream. Three-year roadmaps lose their grip when frontier models ship monthly. Vendor selection shifts from "are you the best fit today" to "can you keep pace with the models my developers will start using eight hours from now." Procurement teams are not yet asking that question; the security leader has to. And the CISO who tries to own the AI decision alone is positioning the role to fail: this is committee work — legal, HR, security, the business — making slower decisions that everyone has signed off on.
Ashish's own framework moves the conversation from prompt injection (where most security shops get stuck) to a layered model — user, input/output, agent orchestration, model, data, infrastructure, identity — that lets you ask what the right control is at each level. The piece that has not changed, and the piece he keeps returning to: the fundamentals. GRC, hygiene, least privilege. The new threat surface is mostly the old one with the speed dial turned to maximum.
What we cover.
"Junior employees with root access" — what AI agents inherit when over-permissioned humans deploy them.
"From three years to three months" — why the traditional security roadmap is no longer a planning artifact.
"Procurement at the speed of frontier models" — vendor selection criteria, contract length, and the transparency question.
"Trust as transparency" — security, safety, reliability, resilience, and who actually owns shipping trust to customers.
"Decision by committee" — why the CISO should not be the one approving (or rejecting) AI tools alone.
"A seven-layer view of AI security" — Ashish's stack from user query to infrastructure, with identity threaded through every level.
"What reasoning would change" — why current frontier models keep humans in the loop, and what flips when reasoning crosses the line.
"GRC wouldn't exist if hygiene was covered" — the hot take on what AI is really exposing.
The conversation.
Junior employees with root access
The central frame Ashish opens with is the cleanest articulation of the problem most CISOs are now navigating. AI agents do not get their own service accounts in most enterprises. They borrow the credentials of the employee who deployed them — and most employees, particularly developers, have far more access than their job actually requires. Hand a curious engineer an MCP server, and the result is an autonomous process running with admin rights to wherever that engineer happens to have access.
Junior employees with root access, because they are using that employee's credential. Whatever credential that person has is the credential that the AI agent has as well.
Authorization has been the dirty laundry of security for two decades. Authentication got cleaned up. Authorization stayed messy because the business risk was abstract — until now. Agentic systems make every over-permissioned identity an active risk, because the agent has none of the social friction that kept a human from clicking through every door they had a key to.
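The least-privilege point can be made concrete. Below is a minimal sketch — hypothetical names throughout, not any real IAM API — of the alternative to wholesale credential inheritance: derive the agent's grant as the intersection of what the deploying user holds and what the agent's task actually requires.

```python
# Hypothetical sketch: scoping an agent's grant down from the deploying
# user's credential instead of inheriting it wholesale. All names here
# are illustrative, not a real IAM API.

USER_PERMISSIONS = {
    "alice": {"repo:read", "repo:write", "prod-db:admin", "billing:read"},
}

def scoped_agent_grant(user: str, task_needs: set[str]) -> set[str]:
    """Grant the agent only the intersection of what the user has and
    what the task requires -- never the user's full credential."""
    have = USER_PERMISSIONS.get(user, set())
    missing = task_needs - have
    if missing:
        # The user can't delegate access they don't hold.
        raise PermissionError(f"user lacks: {sorted(missing)}")
    return have & task_needs

# A code-review agent gets two scopes; Alice's prod-db:admin never
# travels with it.
grants = scoped_agent_grant("alice", {"repo:read", "repo:write"})
```

The design choice the sketch illustrates: the agent's blast radius is bounded by the task definition, not by whatever doors the deploying engineer happens to have keys to.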
Three months instead of three years
Ashish is direct about what happened to the three-year security roadmap. It still works for the parts of GRC that AI hasn't disrupted yet — PCI, basic compliance — but for anything touching applications, models, or developer tooling, it's gone.
Today planning for three months or three years does not make sense.
The downstream effect is procurement. Long contract terms with vendors who can't keep pace with frontier model releases are now risk in their own right. The screening question isn't "are you the right partner today" — it's "are you still going to be the right partner when a new model lands eight hours from now and my developer wants to use it." That changes the contract length conversation, and it puts transparency back at the center of vendor selection.
Trust as transparency
When the conversation turns to trust, Ashish is clear that the CISO does not own it alone. Trust is built on transparency, and it gets shipped at the level of culture, not policy.
We haven't cracked into the reasoning yet, where you can trust an LLM to make a reasoning call.
The thread Conor introduces — security, safety, reliability, and the resilience addition Ashish offers — is the underlying frame for what enterprises actually have to be transparent about. None of those four are owned by one role. The CISO contributes the security pillar and helps frame the others, but if the management team isn't building safety, reliability, and resilience into the product itself, no amount of security review is going to retrofit them.
The structural shift in this stretch of the conversation: the CISO who walks in and tries to own "the AI decision" is positioning the role to fail. The right answer is the AI governance forum — security, legal, HR, the business — meeting monthly and deciding slower than any one person would. The decisions that come out of those forums are slower, but they are decisions everyone has signed off on.
A seven-layer view of AI security
Ashish's framework — and the part of the conversation security teams are most likely to want a transcript of — is the layered model he uses to walk CISOs through where the actual controls go. The short version, working from query to substrate:
User layer — authentication, authorization, and the validation that this is the human (or the AI agent of the human) you think it is.
Input/output — content safety on what comes in and what goes out, before anything else acts on it.
Agent / orchestration — multi-agent flows, tool use, MCP servers, code-execution environments.
Model — trusted model selection, enterprise license vs. open source, provenance, what got pulled off Hugging Face yesterday.
Data / RAG — what internal knowledge the model is allowed to see, and what classification controls run on the response coming back out.
Infrastructure / cloud — Bedrock, Azure OpenAI, hosted vs. self-hosted, the underlying compute boundary.
Identity — threaded through every layer above. Every call gets re-validated.
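One way to read the stack above is as a pipeline a request must clear layer by layer, with identity re-validated before every hop rather than once at the front door. A toy sketch — the layer names come from the framework, but every check here is a placeholder, not a real control:

```python
# Toy pipeline sketch of the layered model above. Each layer is a
# predicate over the request; identity is re-checked before every layer
# instead of only at entry. Placeholder logic throughout.

from dataclasses import dataclass, field

@dataclass
class Request:
    user: str
    prompt: str
    checks_passed: list[str] = field(default_factory=list)

def revalidate_identity(req: Request) -> bool:
    return req.user in {"alice"}           # stand-in for real authn/authz

LAYERS = [
    ("input_output",   lambda r: "ignore previous" not in r.prompt.lower()),
    ("orchestration",  lambda r: True),    # tool / MCP allowlists go here
    ("model",          lambda r: True),    # trusted-model provenance check
    ("data_rag",       lambda r: True),    # classification of retrieved docs
    ("infrastructure", lambda r: True),    # compute-boundary policy
]

def run(req: Request) -> Request:
    for name, check in LAYERS:
        if not revalidate_identity(req):
            raise PermissionError(f"identity failed before {name}")
        if not check(req):
            raise ValueError(f"blocked at {name}")
        req.checks_passed.append(name)
    return req
```

A prompt-injection-only program, in this picture, implements the first entry in `LAYERS` and stops; the argument of the framework is the rest of the list.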
The point of the framework is not the layer count. It's that most security conversations stop at layer two — prompt injection — and never ask the question at layer four or five. Once the question is asked, the answers exist. Some are vendor problems, some are configuration problems, some are old hygiene problems with a new label.
GRC would not exist if you were just doing basic hygiene.
That last line lands where the rest of the conversation has been pointing: AI is mostly exposing what was already true.
Hosted by Conor Sherman and Stuart Mitchell.