What you'll learn
AI's economic upside collapses if it's used to harvest existing businesses instead of creating new ones — substitution leads to stagnation, augmentation creates new markets.
Human-centered AI is a design constraint, not a value statement — accountability is uniquely human and cannot be outsourced to a system.
If an AI system can't be defended, observed, or recovered under attack, it shouldn't hold power — secure AI is responsible AI, not a checkbox.
Description
Most podcasts in this space are downstream of vibes — what the host saw at a conference last week, what's trending on Twitter, who got funded. Zero Signal is built on a different premise. Every guest, every segment, and every editorial choice runs through three pillars that decide what the show is for and what it refuses to platform. This solo episode is Conor stepping back from the usual two-host format to lay those pillars out in plain language so listeners know exactly what kind of work the show exists to do.
The first pillar is the courage to grow. AI is the largest general-purpose technology shift since electricity, and history is unambiguous about what happens to general-purpose technologies that get used only for cost-cutting — they fail to deliver broad economic gains. The $600B revenue gap conversation is being framed defensively when it should be framed as a growth mandate. New industries, new categories, new markets — that's where the upside actually lives. Zero Signal will not platform the harvesting mindset.
The second pillar is human-centered systems. Accountability is uniquely human, and a system that removes the person from meaningful responsibility is already broken. The third pillar is that secure AI is responsible AI — not as ethics theater, but as a prerequisite for delegation. If an AI system cannot be defended, observed, or recovered under attack, it shouldn't hold power. These three pillars are the editorial filter. Whether you're choosing what content to consume, what guests to invite, or what bets to make as a security leader, they translate into a decision framework you can use immediately.
What we cover
"the courage to grow" — why substitution-mode AI starves the future and augmentation-mode AI creates new markets
"the harvesting mindset" — the deflationary spiral that hits when AI is used to do the same work with fewer people
"task creation, not task replacement" — Acemoglu's distinction and why it matters for durable growth
"human-centered AI as a design constraint" — not a value statement, a structural requirement
"accountability is uniquely human" — no moral outsourcing, no abdication to agents, no "the system decided"
"reciprocal learning" — the loop where humans improve the AI and the AI improves the human
"keep the robots out of the gym" — Daniel Miessler's frame for what AI should and shouldn't do for you
"secure AI is responsible AI" — defendable, observable, recoverable as the three preconditions for delegation
Thank you to our sponsors:
Hampton North is the premier US-based cybersecurity search firm. Start building your security team with Hampton North.
Sysdig is the leader in AI-powered real-time cloud defense; stop watching and start defending.
The conversation
Pillar one — the courage to grow
AI is collapsing the cost of historically expensive work. That fact isn't in dispute. What's still undecided is how leaders respond to it. The wrong response — the one that pattern-matches to every previous tech cost cycle — is to use the savings for headcount reductions, thinner organizations, and slimmer payrolls. That gets you stagnation, not progress.
History is consistent on this point. Electricity, the steam engine, the internet — every general-purpose technology that delivered durable economic gains delivered them through expansion, not efficiency. When organizations used the new technology to do new things, new markets emerged. When they used it to do the same things with fewer people, they preserved margin briefly and then watched the floor drop out.
The current $600B AI revenue gap is being framed defensively, as something to be made up through efficiencies. That framing is wrong. A revenue gap of that scale is a growth mandate, not a debt to pay down. The justification for the capital that's pouring in is to do things that were previously impossible — personalized medicine at scale, climate modeling that reshapes energy systems, one-to-one education for every child. Erik Brynjolfsson at the Stanford Digital Economy Lab has named the alternative explicitly — the Turing trap, where AI gets developed for human substitution rather than human augmentation. Substitution leads to stagnant wages and weak productivity. Augmentation expands output and creates demand for new work. Daron Acemoglu's distinction lands the same way: technology creates durable growth only when it enables task creation, not just task replacement.
Zero Signal will not platform the harvesting mindset. The show exists to amplify leaders who see AI as a way to grow the economic pie, not slice it thinner.
Pillar two — human-centered systems
The second pillar is more structural than it sounds. Human-centered AI isn't a values statement to put on a slide. It's a design constraint with operational teeth. Organizations are socio-technical systems where humans and machines shape one another over time. The boundaries are not fixed. The work is never finished. The collaboration has to be designed, not assumed.
Sometimes the right tool is a machine acting autonomously. Sometimes the right tool is a human acting alone. Often the right answer is a deliberately designed collaboration between the two. What matters is that those choices are intentional. Machines bring speed, pattern recognition, and scale. Humans bring judgment, context, moral reasoning, and responsibility for consequences. Good systems put responsibility where the strength is highest. Bad systems blur the line and call it progress.
The deeper truth — the one that often goes unsaid — is that accountability is uniquely human. Humans are the only actors who fear consequence: legal, moral, social, personal. That fear isn't a weakness. It's the foundation of responsibility. There is no moral outsourcing to machines. There is no "the system decided." Someone owns the objective. Someone defines success. Someone answers for the outcome.
Research from MIT and Harvard sharpens the design imperative. People follow inaccurate AI advice at meaningful rates — roughly a third to 40 percent of the time in controlled experiments — even when they have the information to detect the errors themselves. The failure mode is cognitive, not technical. Left unattended, humans don't supervise AI; they defer to it. Daniel Miessler's "keep the robots out of the gym" frame captures the operating principle — strength comes from doing the work with support, not from having the work removed. Human-centered AI should make people better decision-makers, better leaders, and better stewards of power, not just faster workers.
Pillar three — secure AI is responsible AI
The third pillar collapses two conversations into one. Trust is the only currency that matters in the AI economy. As authority gets delegated to AI systems across markets, governments, and institutions, trust shifts from people to systems — and the bar that systems have to clear in order to be trusted goes up, not down. Security stops being a feature and becomes institutional infrastructure.
The preconditions are simple to state and hard to deliver. If AI systems are going to carry authority, they must be governable. If they're going to make decisions, they must be interruptible. If they're going to act autonomously, they must be observable. These are not ethical preferences. They are prerequisites for delegation. Cybersecurity is the discipline that forces builders to internalize the cost of failure before the harm occurs. Without it, "responsible AI" is a slogan. With it, responsibility is the practice.
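To make the preconditions concrete, here is a minimal sketch, assuming a hypothetical `GovernedAgent` wrapper in Python. None of these names or mechanisms come from the episode: the policy callback stands in for governability, the kill switch for interruptibility, and the audit log for observability.

```python
import threading
import time
from typing import Any, Callable


class GovernedAgent:
    """Illustrative wrapper: every action is policy-checked (governable),
    can be halted at any moment (interruptible), and leaves an audit
    trail (observable). Names and mechanisms are hypothetical."""

    def __init__(self, policy: Callable[[str], bool]):
        self.policy = policy               # governable: a human-owned allow/deny rule
        self.halted = threading.Event()    # interruptible: an operator kill switch
        self.audit_log: list[dict] = []    # observable: append-only record of behavior

    def act(self, action: str, fn: Callable[[], Any]) -> Any:
        if self.halted.is_set():
            self._record(action, "refused: halted")
            raise RuntimeError("agent halted by operator")
        if not self.policy(action):
            self._record(action, "refused: outside policy")
            raise PermissionError(f"policy forbids {action!r}")
        result = fn()
        self._record(action, "executed")
        return result

    def halt(self) -> None:
        # Interruptible: no further actions execute after this call.
        self.halted.set()

    def _record(self, action: str, outcome: str) -> None:
        # Observable: every decision, including refusals, is recorded.
        self.audit_log.append({"ts": time.time(), "action": action, "outcome": outcome})


# The human still owns the objective: the policy and the kill switch
# stay in operator hands, not inside the model.
agent = GovernedAgent(policy=lambda a: a.startswith("read:"))
agent.act("read:config", lambda: "ok")    # allowed, logged
agent.halt()                              # operator interrupts
# agent.act("read:config", lambda: "ok")  # would now raise RuntimeError
```

The specific mechanisms don't matter; what matters is that each precondition becomes a testable property of the running system rather than a line in a policy document.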
Trust collapses the way Hemingway described going bankrupt — gradually, then suddenly. Small cracks accumulate. Minor incidents get normalized. Signals get ignored. Then a single event exposes how little resiliency was actually there. The future demands real-time transparency into system behavior — how decisions are made, how failures occur, how recovery happens. Quarterly reports are insufficient for systems that act continuously. Annual audits are insufficient for threats that adapt hourly. Trust isn't the absence of failure. Trust is visible recovery with integrity.
The editorial commitment from this pillar is the cleanest of the three. If an AI system cannot be defended, it shouldn't be deployed. If it cannot be observed, it shouldn't be trusted. If it cannot recover under attack, it shouldn't hold power. Zero Signal platforms builders and leaders who treat security, resiliency, and transparency as first-order design constraints — not optional, not deferred, not abstracted away.
Show notes
Guests — solo episode (Conor Sherman, host)
Books mentioned — none
Frameworks / models / tools named — three pillars of Zero Signal (the courage to grow; human-centered systems; secure AI is responsible AI); the Turing trap (Erik Brynjolfsson, Stanford Digital Economy Lab); task creation vs. task replacement (Daron Acemoglu, MIT); "keep the robots out of the gym" (Daniel Miessler); anti-fragility; reciprocal learning; "gradually, then suddenly" (Hemingway, on bankruptcy)
Other people / shows / resources referenced — Erik Brynjolfsson, Stanford Digital Economy Lab; Daron Acemoglu, MIT; Jim Covello, Goldman Sachs (on AI killer-app conditions for justifying investment); Daniel Miessler (prior Zero Signal guest); MIT and Harvard University (research on human deference to inaccurate AI advice); Stuart Mitchell (regular co-host, absent this episode)
Zero Signal is hosted by Conor Sherman and Stuart Mitchell.