What you'll learn
The economically useful definition of AGI is when an AI system can show up to a knowledge-worker job on Monday and do it credibly — and Daniel's working timeline puts that between 2025 and 2028, with 2027 as the central case.
The natural state of any economy is the founder doing all the work themselves; AI is finally giving founders the leverage to live that out, which means the ideal number of employees inside any company is zero.
The acute timeline for labor disruption isn't five or ten years away — CEOs are already emotionally committed to the trajectory, the snowball is rolling, and the impact lands in the next 18-36 months whether the technology fully ships or not.
Description
Daniel Miessler is a longtime security veteran, one of the more provocative voices in the AI-and-work conversation, and the founder of Unsupervised Learning, the cybersecurity industry's most-read independent newsletter. This is his first Zero Signal appearance, and it's the most uncompromising take on what's actually coming for white-collar work and what security leaders should be doing about it.
The opening segment lays out Daniel's working definition of AGI — economic, not technical. AGI is when an AI system can credibly replace an average knowledge worker showing up to a new job on Monday. Watch the videos, read the docs, take instruction from the manager, adapt when the project shifts in two weeks. The "general" in AGI is the ability to handle the breadth of inputs that any human knowledge worker handles. By that definition, we're not there yet — Daniel's own digital assistant can do most of it but requires hand-holding into a new environment — but the gap is smaller than most people think, and the timeline he's most confident in is 2027.
The middle of the episode goes after the harder argument that's the title of this post. The natural state of any economy where people create things is the founder doing the work themselves. The only reason employment exists at scale is the human limitation that one person has only one brain and two hands. AI is finally giving founders the leverage to escape that limitation. The ideal number of employees inside any company, Daniel argues, is zero. That's not a moral claim — it's a structural observation about what business is actually for. The implication for the labor market is severe, the timeline is acute, and the meaning-crisis question that follows is the one society hasn't begun to grapple with seriously.
What we cover
"the economic definition of AGI" — replacing an average knowledge worker, not passing a benchmark
"the canyon" — Daniel's framing for the labor disruption ahead and how long it lasts
"the ideal number of employees is zero" — the structural reframe of what businesses are actually for
"the red button problem" — why large companies will lag and small ones will win the AI replatforming race
"the acute time horizon" — why CEO emotion is what's driving the timeline, not the tech maturity curve
"the meaning crisis" — what happens when work, religion, and family all stop being viable sources of meaning
"why America will adopt UBI last" — and the social fallout of being late to the safety net
"the optimism case" — education, healthcare, and housing as the real upside of the transition
Thank you to our sponsors:
Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.
Sysdig is the leader in AI-powered real-time cloud defense. Stop watching and start defending.
The conversation
Defining AGI by what it does, not what it is
Daniel's working definition of AGI is the most useful starting point for any security or business leader to adopt. Forget the technical definitions that try to pin down exactly what a model can do at a benchmark level. The economically useful definition is whether an AI system can show up to work on Monday at a new company, watch the onboarding videos, read Confluence and Slack, take direction from a manager, adapt when the project pivots two weeks later, and produce credible work product across that arc. That's the test. The "general" in AGI is the breadth of inputs and the capacity to adjust to changing context, which is exactly what humans do at work and what current AI systems mostly can't.
By that test, we're not at AGI yet. Daniel's own digital assistant can do most of the underlying work but requires meaningful hand-holding into a new environment — show it the videos, point it at the right docs, scaffold the context. The gap between "almost there" and "actually there" is smaller than most people think, though, and Daniel's timeline puts AGI between 2025 and 2028, with 2027 as the most likely hit point. The leading-indicator companies are already advertising agentic-employee replacement products with taglines like "stop hiring humans." Daniel's bar is that we'll see multiple companies publicly using these systems and reporting actual deployment-scale results before the call gets made. That's the trigger to watch.
The ideal number of employees is zero
The most provocative reframe in the episode is the one every executive should sit with for a few minutes. Imagine a founder with 10,000 brains and 20,000 arms, all deployable wherever the business needs work. As problems come in — 500, 1,000, 2,000 — they handle them all themselves. They make tons of money. Someone shows up picketing outside their house demanding they hire people, and the founder asks the obvious question: why would I hire you when I can do all the work myself?
The natural state of any economy where people are creating things is for the founder to do the work themselves
The reason this seems strange is that we live inside the human limitation. One brain, two hands. The entire economy as we know it is structured around the workaround for that limitation — division of labor, employment, organizational hierarchy. None of that exists because it's economically optimal in some abstract sense. It exists because humans physically can't do it all themselves.
All that AI is doing is giving the founder that original thing they wish they had
The implication is direct. As AI tooling scales toward the general capability bar, the structural pressure on every founder is to take advantage of the leverage and reduce the headcount they were only forced to hire because of the human limitation in the first place. The ideal number of employees in a company isn't 100 or 50 or 10. It's zero. That's not a moral claim. It's an observation about what businesses are structurally optimizing toward when the limitation is removed.
The complement to the zero-employees argument is what Daniel calls the red button problem. Imagine a magic red button that, when pressed, would solve any problem the company has — but pressing it requires getting enough of the executive team aligned to push it together. Most companies couldn't manage that even if the button were sitting on the boardroom table. The internal politics, the executives who would lose their orgs if the problem got solved, the institutional inertia — all of it adds up to large established companies being structurally bad at making the AI replatforming bet at the speed it requires.
The implication is the innovator's-dilemma pattern playing out at AI scale. Smaller companies — five-person teams, nimble startups — will lean into AI-native operating models without the political friction. Their products will be cheaper, faster, and more responsive. They'll show up next to the 10,000-person legacy company and start eating market share. The CISOs and operating executives at the larger companies who can clear the political path for AI replatforming will save their companies. The ones who can't will watch their employer get eaten over the next five years by a five-person AI-native startup nobody had heard of in 2024.
The acute time horizon — CEO emotion is the driver
The most differentiated claim in the episode is on timing. Most senior voices in the industry put the labor disruption window at five, ten, even fifteen years out. Daniel puts it at 18-36 months. The reason isn't that the technology will be fully ready in that window — it's that CEOs are already emotionally committed to the trajectory, and the snowball is already rolling. The Wall Street reward for layoffs blamed on AI is real. The peer-to-peer FOMO at the executive level is real. The CEO who watches three competitors announce major AI-driven workforce reductions in the same quarter will follow within a quarter, regardless of whether the underlying AI capability has actually matured to the point of supporting the cuts.
The damage gets done in the lead time before the data catches up. By the time the AI capability either materializes or doesn't, the layoffs have happened. The careers have been disrupted. The political environment has shifted. The legislation conversation has started. The CEO who took the bet two years early either looks brilliant in 2028 (because the AI did materialize) or quietly hires people back at lower wages (because it didn't). Either way, the workforce went through the canyon.
The legislative pushback Daniel anticipates will be delayed by an awkward political alignment. The current US administration's populist base is also the cohort most exposed to AI labor displacement, and the administration can't position itself as the AI accelerator while its base loses jobs to AI. The likely outcome is some form of "it's now illegal to fire people because of AI" legislation in the 2027-2028 window, but by then the displacement will already have happened. The legislation will be reactive, not preventative.
The meaning crisis — and why work being yanked away matters
The harder problem behind the labor disruption is the meaning question. Religion provides meaning to a smaller portion of Western populations than it used to. Family provides meaning to fewer people, with birth rates falling across most developed economies. Work is one of the largest remaining sources of meaning for the average adult. When work gets pulled out from under tens or hundreds of millions of people in a compressed timeframe, the meaning crisis isn't an abstract concern — it's the social-stability question of the decade.
The historical analogy Daniel offered is the manufacturing collapse in middle America over the past 40 years. The political consequences of that collapse are visible across both major US parties' realignment over the past decade. The white-collar version of the same shock, at AI scale, lands harder and faster. The America that's structurally last to adopt UBI as a safety net is also the America that will feel the social fallout of being late to it. The path through the canyon eventually leads somewhere — Star Trek-style post-scarcity societies are one possible destination, with currencies reorganized around something other than the sale of labor — but the transition itself is going to be ugly, and the security and political infrastructure to manage it isn't built yet.
What it means for security — and the optimism case
The security-specific implication Daniel surfaces is that quality and security converge in the AI-built future. Roughly 40% of AI-generated code is currently vulnerable — the same rate humans produce, because the models were trained on human code. Over time, the quality bar improves. Building secure code becomes a default of well-engineered AI tooling, the same way construction crews don't have a separate "building doesn't fall down" department because they follow the engineering instructions properly. The CISOs whose programs survive the transition are the ones whose foundational disciplines — supply chain hygiene, identity governance, data classification, runtime defense — were already in place.
The optimism case Daniel ended on is the right place to land. Education is broken in most countries because individualized curricula and real mentorship are economically unaffordable at scale. AI tooling makes them affordable. The schools already piloting AI-driven custom curricula are seeing students hit top-1% performance bands within two years. Healthcare is broken at global scale because billions of people lack any access. AI-augmented basic care is technically possible right now at trivial cost. Housing is broken because manufacturing economics don't favor mass-produced quality homes. Combine AI tooling, manufacturing automation, and intelligent districting decisions and the cost of a livable home drops dramatically. The slack in the rope across these massive societal challenges is enormous, and AI is the lever that finally lets us pull it.
The career-advice question Daniel closed on is what to tell any 14-year-old now. Read widely. Learn how the world works — physics, computer science, history, politics, world dynamics. Don't optimize for a specific programming language; optimize for understanding how things are built. Curiosity, passion, and the ability to articulate an opinion (and the opposite of that opinion) are the durable superpowers. The Einstein quote Daniel referenced applies — "I have no special talents, I am only passionately curious." The young people who internalize this and learn in public — through writing, YouTube, building things — will be the ones who navigate the canyon best.
Show notes
Guest — Daniel Miessler, founder of the Unsupervised Learning newsletter; longtime cybersecurity veteran; recently writing on the Human 3.0 framework
Books mentioned — none specifically named, though Daniel referenced his curriculum-building work for the Human 3.0 framework as a forthcoming resource
Frameworks / models / tools named — the economic definition of AGI (replacing an average knowledge worker on Monday); the "ideal number of employees is zero" reframe; the red button problem (large company AI-replatforming friction); the acute time horizon framing (CEO emotion-driven, 18-36 months); the canyon (the labor disruption transition); Star Trek-style post-scarcity society; the meaning crisis (work, religion, family as sources of meaning); UBI as the eventual safety net (with America structurally last to adopt); the Animatrix robot-rights riot scenes (referenced as the political pushback model); Daniel's digital assistant (referenced as currently approaching but not yet meeting the AGI bar); the slack-in-the-rope framing for AI-enabled improvement in education, healthcare, and housing
Other people / shows / resources referenced — Albert Einstein ("I have no special talents, I am only passionately curious"); The Animatrix (referenced for the human-vs-robot political conflict scenes); Star Trek post-scarcity society (referenced as the long-run destination); Marcus (the recent debate Stuart referenced; full name unverified); Will Ferrell character (Stuart's reference for the ServiceNow CEO's announcement style); Wall Street reward function for AI-blamed layoffs (referenced as the structural pressure on CEO behavior)
Hosted by Conor Sherman and Stuart Mitchell.