What you'll learn
An agentic process with full admin rights carries the same risk profile as a compromised insider — without the burnout, fatigue, or behavioral signals that make a human insider easier to detect.
Vendor security claims fail at the verification step: a vulnerability scan dressed up as a pen test, or a green compliance scorecard, is not a defensible answer to "are we secure."
The CISO role survives the AI rebuild only when the burden is shared — across BISOs, deputy CISOs, the executive team, and a board that owns the risk.
Description
The conversation in episode 2 moves between two angles that turn out to be the same problem from opposite sides. AI is being handed admin access to the systems that run the business, while the people who would normally watch those systems are being thinned out, retrained, or retired. Olivia Phillips — Vice President and US Chapter Chair of the Global Council of Responsible AI, and founder of Wolf By Technology — sat down in Las Vegas to walk through what that means for security leaders.
Olivia's frame on insider threat is the most useful thing in this episode. The classical insider risk model assumes a human who became compromised — disgruntled, burnt out, bribed, talked into it. Strip out the human and replace them with an agentic process that has the same admin rights and the same access to data, and the model doesn't go away. It gets harder. The agent will not get tired. It will not signal disengagement on a Slack channel. Behavioral analytics will not flag it, because the agent has been there since the beginning.
The conversation works back from there. If verification is the answer — to vendor security claims, to AI outputs, to who is actually on the other end of a phone call — then verification has to be cheaper, faster, and embedded in the build process. And if the CISO is the only person carrying that burden, the role does not survive the rebuild. That last part — sharing the burden across BISOs, deputies, the management team, and the board — is the thread the conversation keeps returning to.
What we cover
"Why agentic processes need to be managed as insider threats" — and what the classical insider model gets right and wrong when applied to non-human access.
"Pen test or vulnerability scan?" — how to read past the cover slide of a vendor security report.
"The CISO is a bad news job" — why the role is reporting risk, not preventing it, and what that means in the boardroom.
"Sharing the wealth of the burden" — BISOs, deputy CISOs, AI governance councils, and where each fits into a CISO org that has outgrown one person's shoulders.
"Authenticity will be the coin of the realm" — what carries forward as binary skills become replaceable.
"Battleship: AI vs. AI" — what an offensive-defensive AI race actually looks like, and where regulation has to land first.
"Deep fakes and the limits of biometrics" — why families and finance teams are setting safe words.
The conversation
When AI inherits the insider threat profile
The cleanest insight in this episode is the symmetry. Insider threat has always assumed a human who became compromised — disgruntled, overworked, bribed, talked into it. The agentic equivalent is not metaphorical. The same access, the same blast radius, and crucially, the same opportunity for someone behind the system to manipulate it.
AI is going to be like, it's been there since forever. It's OK, because from a behavioral analytics standpoint, it's been there since the beginning. Not realizing it's somebody who shouldn't be there.
Behavioral baselining is one of the strongest detective controls security has built over the last decade. It depends on the assumption that there is a baseline — that a new process is identifiably new. Hand admin rights to an agent at the start of a re-platforming and the baseline includes the threat. The asset inventory question becomes urgent again, but not in the form security teams are used to.
And it just takes a tweak in code to manipulate the information — because it already has full admin rights, it can do whatever it needs.
The verification gap
Olivia spent part of her career as a pen tester, and the verification problem follows her around. Vendor security claims have always been hard to read. AI is making them harder, because the artifact a vendor hands over — a clean dashboard, a scorecard with an A — is itself the kind of output AI is now generating.
I have an A and we're so secure. But then in reality, you're actually a D.
The fix is not a new framework. It is a willingness to demand the report behind the claim. Conor's framing was that businesses still treat AI like magic — natural language goes in, a confident answer comes out, and the answer is treated as true. The corrective is the same one that worked in the pen test era: trust requires a report someone is willing to defend, and the security team has to be willing to ask for it.
Sharing the wealth of the burden
The episode keeps returning to the role itself. The CISO has been handed AI on top of everything else — adversaries with new firepower, new compliance overhead, new questions about whether the AI is even going to be honest about its own footprint.
I think the CISO should be there, but I think the CISO needs to share the wealth of the burden that they have to carry for the entire organization.
The structural answer is the deputization stack — BISOs embedded in the business, deputy CISOs carrying functional weight, and a board that owns the risk rather than offloading it. The reframing in this stretch of the conversation: the CISO's job is not to fix the bad thing. It is to articulate the risk in financial terms, agree on tolerances repeatedly, and stand behind the work. "Bad news job" is the shorthand. Treat it as an executive role, not a fixer role, and the burden becomes shareable.
Authenticity as the last remaining skillset
The detour into hiring agents and the disappearance of the human handshake turns into the most personal stretch of the conversation. Olivia and Conor both end up in the same place: the skills that matter most in five years are the skills that are hardest to digitize.
That maps directly onto the security leader question. If everything binary is replaceable — the deck assembly, the policy lookup, the model interpretation — then the differentiator for the next generation of CISOs is the ability to walk into a board meeting, hold a difficult position, and be trusted under pressure. Those skills do not show up on a certification track. They show up over time, with the same people, through the bad days.
Show notes
Guests — Olivia Phillips, founder of Wolf By Technology; Vice President and US Chapter Chair of the Global Council of Responsible AI.
Books mentioned — None explicitly named in the conversation.
Frameworks / models / tools named — NIST 800-53; PCI DSS 4.0; Sarbanes-Oxley (referenced as a regulatory analogue for AI governance).
Other people / shows / resources referenced — Simon Sinek (referenced by Conor on authenticity); Mike Tyson (the "everyone has a plan until they get punched in the face" line); the MGM Las Vegas deep-fake-driven breach; Meta's announced Manhattan-scale data center build; the OpenAI / Oracle ~$30B/year cloud agreement.
Hosted by Conor Sherman and Stuart Mitchell.