What you'll learn

  • Vulnerability management has plateaued — about 5.5% of production workloads run critical or high vulns year over year — and attackers exploiting fresh CVEs in under twenty hours have moved past anything a manual SOC can match.

  • Machine identities outnumber humans 35 to 1 in the cloud, but the 2.8% human slice carries a 67% risk profile against 34–48% for machines — meaning the small human population is doing most of the damage.

  • EMEA leads cloud AI/ML adoption at the package level (52% versus 33% in the Americas and 19% in APAC), suggesting the EU AI Act and data sovereignty rules are functioning as a permission structure to build, not a brake.

Description

The fourth Sysdig Cloud-Native Security and Usage Report makes one argument from three different angles: the human ceiling on security operations is real, attackers have already cleared it, and the data shows it everywhere you look. About 5.5% of production workloads run critical or high vulnerabilities — flat year over year, despite better tooling and more headcount. Threat actors exploited a LangChain CVE in twenty hours and a Python notebook CVE in under ten, windows no manual SOC reaches. Human identities, the 2.8% slice of the cloud population, carry the risk profile that drives most breaches. And the chart that surprised the report's author most: EMEA leads cloud AI/ML adoption at 52%, versus 33% in the Americas and 19% in APAC — suggesting the EU AI Act and data sovereignty regimes function as a permission structure to build, not a brake on innovation.

This conversation is for security leaders trying to figure out where their program sits relative to that ceiling, and what an honest move past it looks like. Crystal Morin walks through the specific guardrails that make autonomous remediation safe — what the system can patch, what it can kill, what it can revert. Where the McKinsey makers-takers-shapers frame helps you decide what AI risk you actually own. Why the kill-9 number is up 140% and which rungs of the response ladder come next. And what LLM jacking — Chipotle's chatbot, $7K overnight bills, a credential market on the dark web — does to the economics of waiting.

What we cover

  • "The hustle hard era is over." — Why ~5.5% of production workloads carrying critical or high vulns hasn't moved year over year, despite better tooling.

  • "Image bloat went down significantly." — How a sub-1% statistic doubles as cost takeout for the CTO and risk reduction for the CISO.

  • "Exploited within 20 hours." — How a LangChain CVE and a Python notebook CVE compressed the patch window past anything human SOCs can reach.

  • "35 to 1." — Why the cloud's 2.8% human identity slice does 67% of the damage, and where machine identity governance lands organizationally.

  • "More than 50% in EMEA." — Why the region with the heaviest AI compliance regime leads cloud AI/ML adoption at the package level.

  • "Makers, takers, and shapers." — The McKinsey frame that explains why B2C is shaping models while B2B is consuming them through SaaS.

  • "140% increase in killing processes." — The on-ramp to agentic defense and where the next rungs of the response ladder break down.

  • "LLM jacking." — Chipotle's chatbot, $7K bills overnight, and the dark-web economy in stolen LLM credentials. 

Thank you to our Sponsors: 

Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.

Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.

The conversation 

The vulnerability ceiling and the case for guardrails

The headline in the fourth Sysdig usage report is the one Crystal didn't have to write — she pulled the data, sat with the team, and the story declared itself. About 5.5% of workloads in production run critical or high vulnerabilities. That number is essentially flat year over year. The encouraging counter-finding: vulnerabilities with a known exploit in production are now under 0.2%, a roughly 75% drop year over year. Prioritization works. Runtime context works. The absolute count of vulnerabilities, though, is still going up — Crystal cited public reporting that the volume has climbed two to three hundred percent in recent years, and she expects newer initiatives to keep the inflow heavy.

The conclusion isn't "try harder." Crystal's frame is that agentic AI has to come into remediation, and the real operational question is what the system is allowed to do.

If you're deciding to implement agentic AI into your threat detection response or just remediation platform, taking the role of being able to define what that system is able to do. ... Is it that it can only apply patches? Can it kill a container to be able to apply patches to something? Or is it a list of golden images that you have and you know that these are known good and your agentic system can then revert back to those?

— Crystal Morin
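
The rungs Crystal describes can be sketched as a minimal allowlist check, where the operator, not the agent, sets the ceiling on what the system may do. Action names, environment labels, and the `authorize` helper are all hypothetical, not any vendor's API:

```python
from enum import Enum

class Action(Enum):
    APPLY_PATCH = "apply_patch"
    KILL_CONTAINER = "kill_container"
    REVERT_TO_GOLDEN_IMAGE = "revert_to_golden_image"

# Per-environment allowlist: a human defines the ceiling once,
# and every agent-proposed action is checked against it.
ALLOWED_ACTIONS = {
    "production": {Action.APPLY_PATCH},
    "staging": {Action.APPLY_PATCH, Action.KILL_CONTAINER,
                Action.REVERT_TO_GOLDEN_IMAGE},
}

def authorize(env: str, action: Action) -> bool:
    """Return True only if the action is explicitly allowed in this env."""
    return action in ALLOWED_ACTIONS.get(env, set())

# The agent proposes; the guardrail disposes.
assert authorize("staging", Action.KILL_CONTAINER)
assert not authorize("production", Action.KILL_CONTAINER)
```

The design choice is deny-by-default: an unknown environment or an unlisted action authorizes nothing, which is the property that makes autonomous remediation safe to turn on incrementally.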

The image bloat finding sits inside the same argument. Less than 1% of packages inside images are unused this year, down sharply from 2025, when the AI integration scramble pushed bloat up. That converts a CISO conversation directly into a CTO conversation: smaller images cost less to run and reduce the surface vulnerability management has to chase. The agent that walks an SBOM every release isn't the long-pole capability — it's exactly the kind of work the human ceiling is preventing today.
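
The SBOM walk that paragraph gestures at can be sketched as a set difference between what an image declares and what is actually loaded at runtime. The CycloneDX-style JSON shape is an assumption for illustration, and the `loaded` set stands in for data a runtime agent would supply:

```python
import json

def unused_packages(sbom_json: str, loaded: set[str]) -> set[str]:
    """Packages declared in a CycloneDX-style SBOM but never loaded at runtime.

    `loaded` would come from runtime instrumentation; here it's illustrative.
    """
    sbom = json.loads(sbom_json)
    declared = {c["name"] for c in sbom.get("components", [])}
    return declared - loaded

# A toy image manifest: three declared packages, two actually used.
sbom = json.dumps({"components": [{"name": "openssl"},
                                  {"name": "imagemagick"},
                                  {"name": "curl"}]})
print(unused_packages(sbom, loaded={"openssl", "curl"}))  # {'imagemagick'}
```

Every package in that returned set is cost the CTO doesn't need and attack surface the CISO doesn't want, which is why the sub-1% figure doubles as both stories.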

Identity is a now problem

Across roughly 800,000 cloud identities, humans account for 2.8% of the population. Machines outnumber humans 35 to 1. That ratio is the easy part. The harder part: 67% of those human accounts carry a risk profile — administrator privileges with no MFA enabled, non-rotated access keys, inactivity for 90+ days. Machine identity risk by comparison sits between 34% and 48%, which Crystal calls genuinely encouraging at the per-identity level even as the absolute count of risky machine accounts is enormous.

So 3% are accounting for more than 50% of the breaches, 97% are all of the identities, but we're actually doing a good job managing them.

— Crystal Morin

The org-design question is where this lives. Stuart asked whether non-human identity belongs in the CISO org, the CTO org, or a new function entirely. The position from the show: put the responsibility with whoever creates the risk. If 97% of identities are machine, and machine identities are being spun up by engineering teams to ship features, the team shipping features owns the on-call when those identities get abused. Identity is not a 2027 problem — every agent built, every machine-to-machine connection wired up, every API token issued is adding to the count today.
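
The three risk conditions the report names for human identities (admin without MFA, non-rotated keys, 90+ days of inactivity) reduce to a short audit check. The record shape and field names below are assumptions for illustration, not any cloud provider's schema:

```python
from datetime import datetime, timedelta, timezone

# Fixed reference time so the example is deterministic.
NOW = datetime(2026, 1, 1, tzinfo=timezone.utc)

def risk_flags(identity: dict) -> list[str]:
    """Flag the risk conditions called out in the report (illustrative)."""
    flags = []
    if identity.get("is_admin") and not identity.get("mfa_enabled"):
        flags.append("admin_without_mfa")
    if NOW - identity["key_last_rotated"] > timedelta(days=90):
        flags.append("stale_access_key")
    if NOW - identity["last_activity"] > timedelta(days=90):
        flags.append("inactive_90d")
    return flags

# A hypothetical human identity: active, but privileged and unprotected.
alice = {
    "is_admin": True, "mfa_enabled": False,
    "key_last_rotated": NOW - timedelta(days=200),
    "last_activity": NOW - timedelta(days=5),
}
print(risk_flags(alice))  # ['admin_without_mfa', 'stale_access_key']
```

Run across the whole identity inventory, the share of records returning any flag is the 67%-versus-34–48% comparison the report draws between humans and machines.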

Why EMEA is leading on AI/ML adoption

The most surprising chart in the report: EMEA accounts for 52% of cloud AI/ML packages in production, against 33% in the Americas and 19% in APAC. The conventional narrative — that the EU AI Act and data sovereignty regimes would slow European innovation — does not survive contact with the data, at least not at the package level inside cloud-native environments.

Crystal's read is that the regulations are functioning as a permission structure. A clear rubric for what "secure AI usage" looks like gives organizations confidence to build at the package level rather than route everything through SaaS. Stuart compared it to brakes and seatbelts in a car — the assumed safety lets people drive faster, even though it doesn't actually eliminate the underlying risk. The same dynamic appears here: clearer compliance scaffolding may be giving European enterprises confidence to ship more AI internally rather than push it through third-party APIs.

The McKinsey makers-takers-shapers frame helps locate enterprises along this axis. Takers consume AI through SaaS — ChatGPT, Claude, integrations on top of Snowflake. Shapers take an open base model from OpenAI or Hugging Face and customize it with their own data pipelines. Makers build models from scratch, almost certainly the smallest segment. The data Crystal pulled tracks with the frame: B2C-heavy sectors like media, internet, transportation, and retail show more package-level AI adoption (the shaper pattern), while B2B software and manufacturing trail (the taker pattern). Different business models route AI risk through different surfaces, and the security strategy follows the surface.

The on-ramp to agentic defense

The 140% year-over-year increase in customers using kill-9 — programmatic process termination — is the clearest data point that organizations are starting to trust autonomous response. Killing a process is the lowest-blast-radius action on the response ladder. The next rungs: pause or stop a container, kill a container, block drift or malware, take a forensic snapshot, revert to a golden image. Each rung carries more containment value and more potential to disrupt production.

Crystal's argument for moving up the ladder: the industry has had Chaos Monkey-style resilience patterns for over a decade, the architectural ground for autonomous response is already laid, and the remaining hesitation is mostly cultural. Some response actions don't even require an agent — for crypto miners, behavioral and IOC-based detections are reliable enough that scripted termination is the right answer. The economics push the same way. CPU and now GPU compute is what attackers steal. Stop the crypto miner, stop the LLM jack, and the savings show up on the cloud bill before the breach report does. 
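
The scripted-termination case for crypto miners is concrete enough to sketch. A minimal Linux version walks `/proc`, matches process names against an IOC list, and sends SIGKILL, the literal "kill -9". The miner names are a hypothetical IOC list, and the injectable `kill` callable is a testing convenience, not part of any product:

```python
import os
import signal

# Hypothetical IOC list: process names behavioral detections tie to miners.
MINER_NAMES = {"xmrig", "kinsing", "kdevtmpfsi"}

def kill_miners(proc_root: str = "/proc",
                kill=lambda pid: os.kill(pid, signal.SIGKILL)) -> list[int]:
    """Scan a /proc-style tree (Linux) and SIGKILL any matching process."""
    killed = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():          # only PID directories
            continue
        try:
            with open(os.path.join(proc_root, entry, "comm")) as f:
                name = f.read().strip()
            if name in MINER_NAMES:
                kill(int(entry))         # the literal "kill -9"
                killed.append(int(entry))
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # process exited or we lack privileges; move on
    return killed
```

This is the bottom rung of the response ladder for a reason: the blast radius of killing a known-bad process is small, which is why it is the action whose adoption jumped 140%.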

LLM jacking and the new economics of stolen compute

LLM jacking — Sysdig's research team coined the term in 2024 — is the same pattern as crypto jacking with a far more expensive resource. An attacker breaches the environment, scans for available LLM access, and uses your model on your tokens. The Chipotle example Stuart raised was even cheaper than that for the attacker: no breach required, just prompt manipulation against the public chatbot to use the company's Claude credits.

Crystal's field example: an individual on AWS, hosting an LLM, woke up to a six-or-seven-thousand-dollar bill from a single overnight session. Stolen LLM credentials are now sold on the dark web alongside the rest of the credential market — the buyer gets your environment, your model, and your billing relationship. 

The provider-side question Stuart raised — whether Anthropic or OpenAI have any commercial incentive to fix this — is the right one. They are selling compute. Conor noted that Anthropic is now publishing a shared responsibility model for AI, which is the ground floor of the same conversation cloud providers had a decade ago. Until that model is widely adopted and operational, the answer for security teams is the same as for crypto mining: detect early, kill the process, and treat the cloud bill as a leading indicator.
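
Treating the bill as a leading indicator can be as simple as comparing today's spend to a trailing baseline. The 3x factor and the dollar figures below are illustrative thresholds, not a recommendation:

```python
def spend_alert(daily_spend: list[float], factor: float = 3.0) -> bool:
    """True if the most recent day exceeds `factor` times the trailing average.

    A crude leading indicator: a $7K overnight LLM-jacking session shows up
    here long before the breach report does. Thresholds are illustrative.
    """
    *history, today = daily_spend
    baseline = sum(history) / len(history)
    return today > factor * baseline

# A week of ~$40/day, then a jacked overnight session.
assert spend_alert([38, 41, 40, 39, 42, 40, 7000]) is True
assert spend_alert([38, 41, 40, 39, 42, 40, 43]) is False
```

A real deployment would pull per-service cost data and alert on the token-metered line items specifically, since stolen LLM credentials inflate exactly those.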

Show notes

Guests — Crystal Morin, Senior Cybersecurity Strategist at Sysdig; former Air Force intelligence analyst; author of four of the nine annual Sysdig Cloud-Native Security and Usage Reports. 

Books mentioned — None named in the conversation.

Frameworks / models / tools named — McKinsey "Makers, Takers, Shapers" AI-adoption framework; EU AI Act; Sysdig Cloud-Native Security and Usage Report (4th edition / 2026); LangChain; Chaos Monkey (AWS); Hugging Face; Snowflake; LLM jacking (term coined by Sysdig research, 2024); IBM Cost of a Data Breach Report; autonomous response actions (kill-9 / process termination, container kill, drift block, forensic snapshot, golden-image revert).

Other people / shows / resources referenced — Adam Aurelio (Harness AI; prior podcast guest); Mike Prevet (cyber economist; prior podcast guest); Sysdig Threat Research Team (LLM jacking research, 2024); Dark Reading, Security Magazine, SANS webinars (cited the Sysdig usage report); MITRE / CISA (referenced for the 200–300% multi-year vulnerability volume increase statistic); Chipotle chatbot incident (LLM credit abuse via prompt manipulation, public reporting); Project Glasswing (referenced in the vulnerability-volume context).

Hosted by Conor Sherman and Stuart Mitchell.

Keep Reading