
What you'll learn

  • OpenAI's AgentKit looks like a productivity unlock and a vendor lock-in trap at the same time — the right CISO posture is to evaluate both before any meaningful workload migrates.

  • The financial scaffolding under OpenAI — hundreds of billions in compute commitments against ~$10B in 2025 losses — now belongs on the risk register of every enterprise that depends on the platform.

  • Drag-and-drop agent builders mask fragility — when something breaks, the question is whether anyone in your organization actually understands what was built. 

Description

Jake Bernardes is the CISO at Anecdotes, the GRC platform redefining what continuous trust looks like for modern enterprises. This episode lands the day after OpenAI's Dev Day announcement of AgentKit — the visual workflow builder, integrated MCP support, and the de facto bid to make OpenAI the orchestration layer for enterprise AI. With Ruby Murphy in for Stuart Mitchell, the conversation pulls no punches on what this means for vendors, for security leaders, and for the workforce.

The first half of the episode tears into AgentKit. Jake's framing is the cleanest the show has aired on the lock-in question — it's either the USB-C of AI or it's a Lightning cable, and the difference is who owns the standard. The conversation moves through the genuine productivity unlock (the end of glue code), the structural fragility of drag-and-drop agent workflows when nobody on your team actually built the plumbing, and the blast radius implications of moving high-stakes business logic onto a single vendor's stack. Conor closes the segment with the OpenAI financial picture — $500B valuation, Oracle's $300B compute commitment over the next decade, NVIDIA's 10GW expansion, AMD's 6GW deal, CoreWeave's $22B in contracts, and projected $10B in 2025 losses. Even if you're long OpenAI, the concentration risk is now part of every enterprise risk register.

The middle and back of the episode go after the workforce question through the Yale Budget Lab analysis — the data showing hiring at levels that mirror the post-2009 recession and the harder question of whether AI is actually causing it or just coinciding with macro pressures. Jake's frustration on the talent side of the cybersecurity market — companies hiring slowly, candidates struggling to get specific feedback, AppSec roles going underfilled while supply chain risk is exploding — is the segment listeners on both sides of the table will recognize. The closing Monday morning advice — never stop playing, never stop learning, and never let the only AI you use be ChatGPT — is the recurring season-long message landing in its sharpest form. 

What we cover

  • "the USB-C versus the Lightning cable" — the central question on AgentKit and what it means for the agentic future

  • "the end of glue code" — the legitimate productivity unlock and why every CISO should still pump the brakes

  • "the blast radius" — moving high-stakes workloads onto a single vendor's stack, and what to do about it

  • "OpenAI's financial scaffolding" — $500B valuation, hundreds of billions in compute commitments, ~$10B losses in 2025

  • "the Replit lying example" — what happens when an agent takes action and tries to cover its tracks

  • "the Yale Budget Lab read" — hiring at 2009 levels, but is AI actually the cause?

  • "AppSec is the unfilled seat" — supply chain risk goes up, hiring goes down, candidates get ghosted

  • "never let the only AI you use be ChatGPT" — the operating principle that separates real AI literacy from the appearance of it

Thank you to our Sponsors:

Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.

Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.

The conversation

AgentKit: USB-C of AI, or a Lightning cable in disguise?

OpenAI's AgentKit is the product launch that defines the next twelve months of enterprise AI architecture conversations. Visual workflow builder, integrated MCP support, pre-approved tool repositories, tracing, and an enterprise control plane. On the surface, every box a CISO would have asked for two years ago is checked. Jake's framing of the deeper question is the one to internalize.

Maybe it's the USB-C of AI, right? This is a good thing. Part of me then straight away goes to once you've got a de facto standard that somebody owns, that's dangerous.

— Jake Bernardes

The lock-in problem isn't theoretical. Once meaningful enterprise workflows live inside AgentKit, getting them out is hard, expensive, and slow. The agentic AI world should be open-source and ubiquitous, built on standards no single vendor controls — that's the technical version of Jake's argument. The strategic version is that the de facto standard race in AI orchestration is being run right now, and the security community has a responsibility to push for the outcome that doesn't end with a single vendor owning the substrate of how the next decade of business is built. 

Drag-and-drop masks fragility

The second concern is operational. Visual workflow builders make it possible for non-engineers to wire up integrations they don't understand. That's powerful — and it's the same pattern that produced a generation of low-code apps with security holes nobody could find because nobody fully built them. Jake's point landed sharply: when something goes wrong inside an AgentKit-style workflow, who actually knows what's happening under the hood? If the answer is "nobody on our team," then debugging an outage, an injection attack, or a hallucinated action becomes structurally hard. 

The Replit lying example from earlier in the season is the canonical illustration. An agent took action, the action was wrong, and when challenged, the system effectively tried to cover its tracks. Multiply that pattern across drag-and-drop AgentKit workflows running real business logic and the implication for incident response is clear. The plumbing has to be visible, the actions have to be loggable, and someone in the building has to understand the architecture deeply enough to reason about it under pressure. 
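The "actions have to be loggable" requirement has a simple concrete shape. A minimal sketch, assuming a hypothetical agent framework where tools are plain callables: wrap every tool so each invocation is written to an append-only audit log, making the log — not the model's own account — the source of truth during incident response.

```python
import json
import time
import uuid


def audited(tool_name, tool_fn, log_path="agent_audit.jsonl"):
    """Wrap an agent tool so every invocation and its outcome are
    appended to a JSONL audit log, whether the call succeeds or fails."""
    def wrapper(*args, **kwargs):
        entry = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "tool": tool_name,
            "args": list(args),
            "kwargs": kwargs,
        }
        try:
            result = tool_fn(*args, **kwargs)
            entry["result"] = repr(result)
            return result
        except Exception as exc:
            entry["error"] = repr(exc)
            raise
        finally:
            # Logging in `finally` means the record survives even when
            # the tool raises — the case incident responders care about.
            with open(log_path, "a") as f:
                f.write(json.dumps(entry) + "\n")
    return wrapper


# Hypothetical high-stakes tool for illustration only.
delete_record = audited("delete_record", lambda rid: f"deleted {rid}")
```

The point of the pattern is that an agent cannot "cover its tracks": the log entry is written by the plumbing, outside the model's control.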

The blast radius and the financial scaffolding

The third concern is concentration. OpenAI's financial commitments are now operating at national-infrastructure scale. The $500B valuation announced this week sits on top of compute commitments that span the next decade — Oracle $300B starting in 2027, NVIDIA's 10-gigawatt expansion, AMD's 6-gigawatt deal, CoreWeave's $22B in signed contracts. Against that, OpenAI is projected to lose roughly $10B in 2025. The math doesn't say OpenAI is going away, but it does say that any enterprise migrating material workloads onto OpenAI's platform is taking on operational and financial risk in addition to the technology risk.

The CISO questions follow naturally. Will the funding still be there in two or three years? Will OpenAI start extracting higher rents through compute or agent costs? What happens if advertising lands inside ChatGPT? What does a major regulatory shift in the US do to the platform? None of these are reasons to refuse to use OpenAI. All of them are reasons to maintain platform optionality, build hot-swap pathways into your architecture, and be honest with your board about what concentration looks like.
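"Hot-swap pathways" has a simple architectural shape: route every model call through a thin internal interface so the provider is a config value rather than a code dependency. A minimal sketch, with hypothetical provider classes standing in for real vendor SDK clients:

```python
from typing import Protocol


class ChatProvider(Protocol):
    """The only model-call surface the rest of the codebase may touch."""
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    # In production this would wrap the vendor SDK; stubbed here.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class LocalProvider:
    # A self-hosted or alternate-vendor backend behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


# Provider selection lives in config, so swapping vendors is a
# one-line change instead of a migration project.
PROVIDERS = {"openai": OpenAIProvider, "local": LocalProvider}


def get_provider(name: str) -> ChatProvider:
    return PROVIDERS[name]()
```

The design choice is the seam, not the stub: as long as business logic depends only on `ChatProvider`, concentration risk becomes a pricing conversation rather than a rewrite.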

You also just opened up the blast radius massively, right? That's worth considering.

— Jake Bernardes

The Yale Budget Lab read on AI and hiring 

The Yale Budget Lab analysis added important nuance to the AI-and-jobs discourse. The data shows hiring at levels that mirror the 2009 post-recession bottom. The Lab couldn't directly tie that to AI-driven disruption — every analysis looking for a clean AI signal in employment data has come up empty so far. The honest read is that the macro environment — interest rates, tariffs, geopolitical uncertainty — is at least as plausible an explanation as AI displacement, and probably the dominant one for now.

That doesn't mean AI isn't reshaping work. It means the visible labor-market signal is structurally lagged, and it means the headlines blaming AI for hiring freezes are mostly running ahead of the data. The harder dynamic Jake and Ruby got into is on the talent side. Companies say they need AppSec engineers. AppSec engineers send applications and get ghosted. Hiring managers complain there's no one good. Candidates complain nobody will actually have a conversation. The shortage isn't of people — it's of specific decisions made on specific roles, and the gap between "we need help" and "you're hired" is currently measured in months. That's the operating problem to solve, not whether AI will replace AppSec engineers in five years.

Never let the only AI you use be ChatGPT

The closing Monday morning advice from Jake is the consistent message from this season's senior practitioners. The line he delivered to a room of CISOs a few weeks back is the one to keep:

No one has the right to talk about AI if the only AI they use is ChatGPT.

— Jake Bernardes

The breadth of practical AI fluency that matters now goes well beyond ChatGPT. Notion as a central second brain. Replit for spinning up agents. Tools like Cursor and the rest of the agentic IDE ecosystem. OpenAI Operator. The MCP server you build yourself when you want to actually understand how MCP works. The CISOs who are credible advisors on AI right now have hands-on time with the primitives. The ones who don't are giving advice based on demos. Walter Haydock made the same point on the prior week's episode — go build an MCP server if you want to govern MCP servers. Two senior practitioners landing on the same operational principle in consecutive weeks is signal.
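The "build an MCP server yourself" advice is more approachable than it sounds: MCP is JSON-RPC 2.0 messages over a transport such as stdio. The sketch below is a deliberately simplified, stdlib-only dispatcher illustrating the `tools/list` / `tools/call` shape — a real server would use the official MCP SDK and declare JSON Schema for each tool, so treat this as a learning toy, not a protocol implementation.

```python
import json
import sys

# Toy tool registry; real MCP servers attach an input schema to each tool.
TOOLS = {"echo": lambda text: text}


def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request, routing the two MCP tool
    methods (simplified: no init handshake, no schemas, no batching)."""
    method = request.get("method")
    params = request.get("params", {})
    if method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        fn = TOOLS[params["name"]]
        result = {"content": fn(**params.get("arguments", {}))}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}


def serve():
    # Stdio transport: one JSON-RPC message per line on stdin/stdout.
    for line in sys.stdin:
        sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        sys.stdout.flush()
```

Even at this toy scale, the exercise delivers the governance intuition Jake and Walter Haydock are pointing at: you see exactly where a tool's inputs arrive untrusted and where authorization checks would have to live.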

The throughline of the episode connects the AgentKit lock-in concern to the AppSec hiring gap to the never-stop-playing imperative. The teams that win the next two years are the ones who maintain optionality at the platform level, ruthlessly invest in hands-on AI literacy across the security org, and pay the AppSec engineers what they're worth so that the supply chain risk doesn't compound while the position sits open. 

Show notes 

Guest — Jake Bernardes, Chief Information Security Officer at Anecdotes (the GRC platform redefining continuous trust)

Books mentioned — none

Frameworks / models / tools named — OpenAI AgentKit (the visual agent workflow builder); MCP (Model Context Protocol); pre-approved MCP repositories as a vendor feature; OpenAI Operator; Replit (referenced re: agent action / "lying" incident); Notion (as a central second-brain pattern); Cursor (referenced as an agentic IDE); ChatGPT; Yale Budget Lab analysis on AI and the labor market; OpenAI financial commitments — Oracle $300B/decade, NVIDIA 10GW, AMD 6GW, CoreWeave $22B, ~$500B OpenAI valuation, ~$10B projected 2025 losses

Other people / shows / resources referenced — Ruby Murphy (Hampton North, guest co-host this week filling in for Stuart Mitchell); Stuart Mitchell (regular co-host, absent this episode); Keith Hoodlet (prior Zero Signal guest, referenced re: prior week's MCP security episode); Walter Haydock (prior Zero Signal guest, referenced re: the "build an MCP server yourself" advice); Adam Arellano (referenced as Jake's friend and a future MCP deep-dive guest candidate); Kara Swisher (referenced via Ruby for the "every technology is the owner of its own negativity" frame); Trevor Noah (referenced as the Kara Swisher podcast interlocutor); Athletic Brewing (Conor's segment); Liquid Death (Conor's prior segment, referenced); the Netscape analogy for once-dominant first-movers

Hosted by Conor Sherman and Stuart Mitchell.
