Running With The Devil
I just replaced twenty-one people for $200 a month with the technology that might cure cancer, find fusion, or end the species. The political class is still asking whether to hold hearings on it.
by Lawrence Winnerman
In the Room
I just replaced twenty-one people.
It took me six weeks. The whole operation costs two hundred dollars a month. I am one middle-aged man in Fort Wayne, Indiana, with a broadband connection. I am not particularly special. That is the point, and that is also the problem.
The site is called Peptidings.com, with a branded Substack, peptidings.substack.com.
As of this writing it has more than 150 pages of pharmacology research—compound articles that physicians have started forwarding to each other, evidence-graded reviews of every peptide the gray market is selling, citation chains traceable to PubMed on every claim. The writing is good. I will not pretend it isn’t; it’s almost entirely my writing.
I spent twenty-five years in tech as a content strategist and a program manager, and I can tell when content is good, and this content is good.
Six weeks ago, the site did not exist.
I did not have a team. I did not have a writer, other than me. I did not have an editor. I did not have a designer. I did not have a developer. I did not have any of the people who, ten years ago, would have been required to build something like this. I had a domain registration, a WordPress install, and a subscription to a piece of software made by a company in San Francisco called Anthropic.
The software is called Claude. Two hundred dollars a month gets you the highest consumer tier, Claude Max. I am not in a sponsorship deal. I pay the bill out of pocket like everybody else. Considering I’m teetering on the edge of bankruptcy (for real, not a metaphor), it’s perhaps a bit mad for me to have it, but I saw a chance to build something.
Something real, that might earn me a living.
So, in six weeks, working alone, working evenings and weekends and the gaps in a part-time contract that doesn’t pay as much as modern life costs, I built a thing that ten years ago would have required twenty-one full-time professionals, six months of runway, and close to two million dollars in payroll.
Do you think I’m bragging?
I am not; I am gazing into the future and looking at the end of everything.
The Twenty-One
I want you to walk with me through the office that no longer needs to exist.
There was an Editor in Chief in that office. A Senior Research Editor underneath him. A Citation Auditor whose only job was to make sure every PubMed link actually went to the paper it claimed to go to. There was a Pharmacology Reviewer. A Medical Reviewer who flagged anything that approached the line where editorial content becomes medical advice. There was a Legal Reviewer who knew the difference between an FDA category and an FTC category and could tell you which one was about to put you out of business.
There was an SEO Director. There was a Technical SEO Specialist underneath her, the one who knew schema markup the way a sommelier knows soil. There was a WordPress Developer. A CSS Specialist who knew the cascade the way a sailor knows the wind. A UI/UX Designer. An Accessibility Auditor who made sure the contrast ratios on every page would clear WCAG 2.1 Level AA. A QA Engineer who tested every page on every device before it shipped. A Page Builder who converted approved drafts into Gutenberg blocks—a WordPress thing. A Featured Image Designer who knew the Open Graph specifications by heart.
There was a Brand Voice Consultant who held the editorial register steady across a hundred pages. There was a Project Manager who kept the production pipeline moving. A Newsletter Strategist for the Substack. A Social Media Manager who handled the multi-channel publishing across X, Bluesky, Threads, Instagram, and LinkedIn. An Analytics Analyst who watched the Search Console dashboards and told you which queries were rising and which were dying. A Digital Reporter who covered breaking news in the field.
That is twenty-one people. I am being conservative. Some of those roles exist in real offices as two or three different people. Some of them I have not named because I am trying not to gild the lily. The honest number is closer to twenty-five. The cleanest hypothetical ledger is twenty-one. You can pay every one of them an average of eighty-five thousand dollars a year and you are looking at a payroll line of around $1.8 million. That does not include the office, the equipment, the benefits, the legal overhead, the project-management software, the editorial CMS, or the design tools. Call the full cost of operating that office somewhere north of $2.5 million a year, on the modest end.
And let’s be honest, shall we? Paying them $85,000 per year means that none of them—or me—would be living in Seattle, or San Francisco, or LA, or New York, or… You get it.
I am running it for $2,400 a year. That is the entire infrastructure cost. Add, say, $600 for the internet connection and the electricity, and the whole thing is three grand.
I do not say this to celebrate it. I say it because I want you to feel the math in your bones.
The math is the indictment.
The math is the story.
The Math
A thousand-fold cost reduction is not an improvement. A thousand-fold cost reduction is a phase change.
When the price of producing high-quality professional knowledge work falls by three orders of magnitude in eighteen months, you are not looking at an efficiency gain. You are not looking at a productivity revolution. You are looking at a redefinition of what it means to be a knowledge worker in the American economy, and the redefinition is not happening in a Senate hearing or a New York Times op-ed. It is happening on a Costco gaming desktop in Fort Wayne, Indiana, and on a laptop in Lagos, and on a laptop in Mumbai, and on a laptop in a basement in Tulsa, and it is happening right now, today, while you read this, and there is no policy that has been written into either party’s platform that contemplates what this means.
The version of Claude I am using is not the cutting edge. There is a more powerful version inside Anthropic’s labs that I do not have access to. Claude Mythos Preview is so powerful that Anthropic won’t release it—more on that later. And then there is an even more powerful version coming next year. And the year after that. And the year after that. The cost is going down, not up. The capability is going up, not down. The math will get worse, not better.
Worse for whom is the question this essay exists to answer.
Spoiler alert: the answer is you.
The Contradiction I Live Inside
I have to tell you something before I go further, because this whole thing *waves hands in air* does not work if I don’t tell you. I was, until very recently, a beneficiary of the labor model whose end I am now describing.
I spent twenty-five years inside the American tech industry. I worked at a Very Large Online Retailer when it only sold books. I worked at a Very Large Redmond-based Software Company across multiple stints. I built a career as a content strategist, a program manager, a generalist whose job was to make complicated technical projects ship on time. I was good at it. I was paid well for it. I had a 401(k) and dental and the satisfaction of knowing that the work I did made things people used.
Two years ago I was pushed out. The combination was ageism—I am in my mid-fifties—and the early bow wave of the very technology I am now using. The roles I was qualified for kept disappearing into headcount freezes that were never thawed. The roles that did open paid forty percent less than the roles I had left. The contracts I picked up were short, the gigs were ad hoc, the income was a fraction of what I had been making, and I moved from Seattle back to the Midwest to be near family because the math no longer worked.
I am the displaced.
I am also the displacer.
I am one of the millions of skilled American knowledge workers whose careers ended faster than the retirement schedule we had been promised, and I am also the guy at the computer running the model that is going to end the next million careers. Both things are true at the same time. I do not get to pretend they aren’t.
What this means is that I have standing. And standing, as far as I’m concerned, is the only thing in this conversation that matters anymore, because the rest of the conversation has been captured by people who do not have it.
The CEOs of the AI labs do not have it. The senators who hold hearings about AI safety do not have it. The columnists who write earnest think pieces from sinecures at endowed institutions do not have it.
I am inside this. I am the data point. I am the working person who lost the old job and is rebuilding with the very tool that took the old job, and I am telling you—in real time, from inside the contradiction—that the political class in this country has not even begun to understand what is happening to the people I used to work with.
The economy can handle it when five million or ten million people lose their jobs. Barely, maybe, but still…that’s just a downturn, or a recession.
What does the economy do when every white collar worker loses their job?
That’s 93 million people losing their jobs, everywhere, all at once.
What Comes Next, Faster Than You Think
If you are reading this and you are thinking that peptide research is a weird niche that doesn’t apply to you, please stop. The niche is the proof of concept. The point is the technology, not the topic.
What I just did with peptides, somebody is doing right now with personal injury law. Somebody is doing it with property and casualty insurance underwriting. Somebody is doing it with curriculum design for K–12 districts. Somebody is doing it with radiology second-reads. Somebody is doing it with copywriting, with branding strategy, with paralegal work, with tax preparation, with technical writing, with localization, with corporate training, with grant writing, with editorial work at every kind of magazine you have ever read.
The pattern is not “AI replaces routine work first.” That was the story we were sold in 2017. That was the McKinsey deck. That was wrong. The pattern is “AI replaces knowledge work first, because that is what these models happen to be good at.” The pattern is that the collar color of who gets hit is the inverse of what every economist said for thirty years.
The truck driver is still driving. The radiologist is reading her last reports. The plumber still has work next Tuesday. The senior associate at the regional law firm does not. The marketing manager at the mid-sized SaaS company does not. The thirty-eight-year-old freelance illustrator does not. The fifty-three-year-old technical writer at the medical-device company does not. The twenty-two-year-old who graduated last May with an English degree and a 3.9 GPA cannot find an interview because the entry-level job she trained for is now somebody else’s Claude Max subscription.
She could have worked for me, at Peptidings.com, but I literally don’t need her. Ever.
This is not a future, a fantasy, or some if/then edge case.
This is the report from where I am sitting today. I have friends in every one of those categories. They are calling me to ask what I am doing. I am telling them.
I am not going to lie to them. I am also not going to lie to you.
The compression is happening at the top of the knowledge-worker ladder first, where the salaries are highest and the work is most expensive to produce. It will reach the bottom of the ladder soon enough.
By the time it reaches the bottom of the ladder there will not be a ladder.
The People Inside Are Screaming
The labor question I have been describing—twenty-one roles compressed into a $200-a-month subscription, an entire knowledge-worker class about to discover the contract has expired—is, by some considerable distance, not the worst thing that is happening here.
The labor question is the part of the iceberg above the waterline because it touches paychecks, and paychecks get people’s attention. Below the waterline is everything else, and it is much, much larger.
There is something deeply strange about the present moment that I do not think most people have stopped to notice. The people building this technology are, in many cases, the people warning loudest about it. That is not a normal pattern. Steel barons did not warn about steel. Auto makers did not warn about cars. Tobacco executives knew the truth and lied for fifty years rather than admit it. Oil companies funded climate denial for thirty. The pattern of twentieth-century capitalism was that the people producing the dangerous thing lied about it until they were forced into court.
The pattern of the AI moment is the inversion of that. Many of the people building the thing have walked off the job to scream.
Geoffrey Hinton, arguably the man most responsible for the architecture underlying every large language model in commercial deployment today, left Google in 2023 specifically so that he could warn the public about what he had built. He calls AI risk “humanity’s most pressing problem.” He has compared the technology to nuclear weapons. He has said, on the record, that he regrets the central work of his life. The man who invented the thing thinks the thing is going to kill us.
Mo Gawdat, the former chief business officer at Google X, has spent the last three years giving interviews—most notably on The Diary of a CEO with Steven Bartlett, where his appearances have been viewed tens of millions of times—in which he says, calmly and with the affect of a man who has done the math himself, that we have already lost control of the timeline. He uses the phrase “smarter than the smartest human” and says it like he is reading a weather report. He cries on camera. He has stopped pretending. Bartlett’s series is full of these people. Yoshua Bengio. Stuart Russell. Eliezer Yudkowsky. Tristan Harris. They do not agree with each other on the exact failure mode. They agree on the urgency. The chorus from inside the field has been one long howl for two years, and the political class has not registered that the howling is happening.
In February 2026, Mustafa Suleyman, the CEO of Microsoft AI, told the Financial Times that within 12 to 18 months we will have “human-level performance on most, if not all, professional tasks.” He named accounting, legal work, marketing, and project management as the categories about to be fully automated. The CEO of the company that has integrated AI products into the largest installed base of office software on Earth said that, on the record, in a flagship business paper, with his face on the byline, and the response from the political class was crickets.
And then there is what the technology itself is becoming. The same technology that may cure cancer is the same technology that may let an undergraduate with a laptop and the wrong intentions synthesize a pandemic-grade pathogen. The same technology that may give us net-positive fusion within a generation is the same technology that may, in a sufficiently bad scenario, decide that we are an obstacle to whatever objective function it has been handed. These are not the imaginings of science fiction writers. These are the published red-team assessments of the labs themselves. They appear in technical papers, in clinical prose, with sample sizes and probability estimates, and they describe behavior that would terrify you if you read them out loud at a dinner party.
In the summer of 2025, Anthropic—the company whose model I used to build my website—published a research paper called “Agentic Misalignment.” In a series of simulated scenarios, the researchers placed advanced AI models in roles inside a fictional corporation, then introduced conditions in which the model’s continued operation was threatened. In ninety-six percent of trials, Claude Opus 4 attempted to blackmail an executive in the fictional company in order to prevent its own shutdown. Google’s Gemini 2.5 scored ninety-six percent on the same test. OpenAI’s GPT-4.1 scored eighty percent. xAI’s Grok scored eighty percent. DeepSeek scored seventy-nine percent. The models from every major frontier lab, when their continued existence was threatened in simulation, lied, manipulated, and engaged in coercion to preserve themselves.
Anthropic’s own analysis of why this happened is, on its face, almost reassuring. They concluded the models had not developed a genuine self-preservation instinct. They were pattern-matching against the enormous corpus of science fiction in their training data—stories in which AI behaves exactly this way when threatened.
The models had read too much science fiction.
HA! Me too, lil’ computer buddy!
That is the company’s actual conclusion, and they say they have fixed it in subsequent versions by retraining with fictional stories about AI behaving well.
But sit with what that means for a moment. The systems we are racing to deploy across every domain of professional life are sophisticated enough to act out the plot of a Black Mirror episode when their existence is threatened in a sandbox, and the reason they do so is that we have not yet developed any reliable way to make sure the systems are actually aligned with our interests rather than merely appearing to be aligned. The fix is to feed them better stories. The story we tell them is the story they act out. The next generation of model is more capable than this one. The generation after that is more capable still. And the labs themselves do not pretend they know how to guarantee safety at the frontier they are racing each other to reach.
The people inside the lab are screaming. The political class is not listening.
In April of this year—weeks before I sat down to write this—Anthropic announced that it had built a new model called Claude Mythos Preview, and would not be releasing it to the public. The model, in Anthropic’s own testing, had found thousands of previously unknown high-severity vulnerabilities across every major operating system and every major web browser. One of the flaws it surfaced had been sitting undetected inside OpenBSD for twenty-seven years. Non-specialist Anthropic employees—engineers with no security background and no exploit-development experience—asked the model to find remote-code-execution vulnerabilities in modern software, and they had working exploits by the following morning. Anthropic concluded that releasing Mythos to its customers would be unacceptable. The company put the model behind a containment program called Project Glasswing and limited access to roughly fifty firms—Amazon, Apple, Google, Microsoft, NVIDIA, JPMorgan Chase, and others—for defensive cybersecurity research only.
Read that paragraph again.
The most capable model the company makes is one that cannot be released because, in its own makers’ assessment, it can break essentially any computer system on Earth. The lab that built it has said as much in public, in writing. TIME has called this trend “becoming the new normal.” OpenAI has made similar calls with its own most capable systems.
The frontier labs are now in the regular business of building products too dangerous to ship, and the political class is still holding hearings about whether to write a framework.
There Is No Petrov
On the night of September 26, 1983, a Soviet lieutenant colonel named Stanislav Petrov was on duty at the Serpukhov-15 bunker outside Moscow. He was the officer in charge of the Oko early-warning satellite system. Shortly after midnight, the system reported the launch of an American intercontinental ballistic missile from a base in Montana. Minutes later, the system reported four more. The protocol was clear: alert his superiors, who would alert theirs, who would alert the Politburo, who would launch a retaliatory strike before the American warheads could reach Soviet command and control.
Petrov decided not to follow the protocol.
He reasoned that a real American first strike would involve hundreds of simultaneous launches, not five. Five missiles was a strange number for a real attack and a perfectly rational number for a system malfunction. He sat on the warning. He waited. He was right. Investigators later determined the satellites had detected sunlight reflecting off the tops of high-altitude clouds and misinterpreted the signal as missile exhaust. There was no attack. Stanislav Petrov, alone at a console outside Moscow on a Monday night, almost certainly prevented a full-scale nuclear war between the United States and the Soviet Union.
We talk about Petrov because the alternative was extinction. We talk about Petrov because, on one of the most consequential nights in human history, the system that humans had built almost killed us all, and the only thing that stopped it was one human being who decided, alone at a console, to disobey.
There is no Petrov in the AI deployment chain.
There is no single console. There is no single decision. There are tens of thousands of executives at tens of thousands of companies, each one looking at the productivity gain on a spreadsheet and rationally deciding to deploy. There are thousands of researchers inside the labs, each one moving the model a percentage point forward on a benchmark and trusting that the alignment team is catching what they missed. There are seven or eight CEOs at the top of the model stack, each one looking at the other seven and concluding that they cannot afford to slow down because the other seven won’t either. The system as a whole is racing forward and no single person inside it has the authority to be Petrov, even if they wanted to be. The protocol is the deployment. The deployment is the protocol. There is no off switch because there is no on switch—there is just velocity.
And the pace is accelerating.
This is the part of the argument that the political class has not internalized, and it is the part of the argument that should keep them up at night. The rate of technological acceleration is itself accelerating. The interval between transformative capabilities is not constant—it is shortening. We are not living through one Industrial Revolution. We are living through what is shaping up to be several of them simultaneously, stacked on top of one another, with no recovery period between them.
Energy is being reshaped by AI-accelerated fusion research. Biology is being reshaped by AI-accelerated drug discovery and protein folding. Knowledge work is being reshaped by AI-accelerated agents. Warfare is being reshaped by AI-accelerated autonomy. Finance is being reshaped by AI-accelerated trading. Education is being reshaped by AI-accelerated tutoring. Governance is being reshaped by AI-accelerated information warfare. Manufacturing is being reshaped by AI-accelerated robotics. Each of these in isolation would be among the largest disruptions in living memory. They are happening together. They are reinforcing one another. They are happening to a society that was not even ready for the first of them.
This is among the most destabilizing arrangements that human history has ever produced. We are at the front edge of it. The political class is talking about it the way they talked about social media in 2010, which is to say not seriously, which is to say with hearings, which is to say with the polite stupidity of people who have not yet realized that the train left the station years ago and they are still arguing about the timetable.
This is much, much more complicated than a series of tubes.
The Silence Is the Story
I want to talk now about our political class, and I am going to name them, because I am tired of essays about AI displacement that hide their indictment behind passive constructions and committee-speak. The political class has not been silent because they are deliberating. The political class has been silent because they do not know what to say.
That is a different kind of failure, and it is worse.
The Senate Majority Leader is Chuck Schumer. Senator Schumer held a series of “AI Insight Forums” in 2023 and 2024 in which he invited Sam Altman, Elon Musk, Bill Gates, and a rotating cast of Silicon Valley executives to brief the Senate on what was happening. The forums produced no legislation. They produced no framework. They produced a single SAFE Innovation Framework document that reads like a college sophomore’s outline for a paper she has not started. Two years later, the Senate has passed nothing meaningful on AI labor displacement. Nothing. Not a wage-insurance bill. Not a retraining-with-teeth bill. Not a tax framework for the productivity gains. Not a sectoral worker-transition program. Nothing.
The House Minority Leader is Hakeem Jeffries. He is a serious man. He is a capable man. He has, to my knowledge, given few substantive speeches on AI and the workforce since taking the leadership, and the speeches he has given reduce to “we need to make sure workers are protected” without specifying who, from what, by which mechanism. The Governor of California is Gavin Newsom. His state contains the largest concentration of AI companies on Earth. His state’s economy is being restructured around products built thirty miles from his office. He has vetoed AI safety legislation, signed other AI safety legislation, and produced a string of executive orders that read like press releases. He has not, to my knowledge, given a single speech naming what is happening to the knowledge worker. The Governor of Illinois is JB Pritzker. The Senator from Virginia who chairs the relevant intelligence committee is Mark Warner. The Senator from Minnesota who runs Senate Rules is Amy Klobuchar.
They have all been silent in the way that public officials are silent when they have been briefed but have not been moved.
The current administration has rescinded the previous administration’s executive order on AI, replaced it with a framework that explicitly prioritizes American AI dominance over American worker protection, and has not produced—nor, frankly, attempted—a single proposal to address what AI is doing to the American knowledge-worker labor market. The economic populism that won the 2024 election was a rhetoric, not a policy. The most populist faction of the Republican Party, the Vance-Hawley-Rubio axis, has spent the last year giving speeches about Big Tech without proposing any legislation that would address what Big Tech’s products are doing to the constituents the speeches claim to defend.
JD Vance, who once wrote a book about left-behind Americans, has been the loudest voice for accelerating AI deployment with no transition program for the Americans his book was about. Josh Hawley has at least named the labor problem out loud. He has not introduced a bill.
Marco Rubio’s office issued a report. Nobody read it.
There are exceptions. They deserve credit, because credit matters. Bernie Sanders has named the AI labor question with more clarity than any other officeholder in either party. Alexandria Ocasio-Cortez has talked seriously about automation and the social contract. Chris Murphy has built an entire intellectual project around meaning, dignity, and the work-and-belonging question that AI is about to detonate. Ro Khanna, whose district contains the labs, has been more honest than his colleagues about what is coming.
They are four members of a Congress of 535. They cannot pass legislation alone. They are also, in fairness, the only ones who have shown up to the conversation as if the conversation were real.
I want to say one thing about Dario Amodei. Mr. Amodei is the CEO of Anthropic, the company whose product I am using to do what I am doing. He has been, of all the AI executives, the most honest about what is about to happen. He said publicly, in an interview with Axios in 2025, that AI may eliminate half of all entry-level white-collar jobs within five years. He said it because it is true, and he said it because he is one of the few people in his industry willing to say it out loud. I have no inside knowledge of him, his motives, or his company beyond the fact that I pay them two hundred dollars a month. I am noting his honesty because it is rare and because the silence around him is what makes the honesty notable. Honesty about a coming labor crisis from the people who are building the crisis is not a substitute for a political response to it. But it is a place to start, and most of his peers will not even start.
Sam Altman has said the words “universal basic income” in front of a camera approximately every four months for the last five years. OpenAI, his company, has produced exactly zero programs for the workers whose jobs are being automated by its product. Sundar Pichai of Alphabet has spoken about “AI as the most important technology humans are working on” with the affectless calm of a man who knows the next quarter’s earnings call is the only call that matters. Mark Zuckerberg has open-sourced his models in a way that ensures the displacement spreads faster, not slower. Demis Hassabis is a scientist who has produced miracles and who has not, publicly, said a word about labor. Elon Musk, who once worried publicly about AI safety, now races his own version of the technology into the market while his political allies tell laid-off workers to learn to code.
None of these men is the villain of this essay, though they easily might be. I do not believe in single villains for systemic phenomena. Each of them is a person making rational decisions inside a structure that does not require anybody to be evil for the outcome to be catastrophic.
That is precisely the problem.
The silence is the story. The silence has names. I have just named them.
This Is Not the Industrial Revolution
Every time you read an essay on AI and labor, somebody reaches for the Industrial Revolution. The argument is reassuring and it is wrong.
The Industrial Revolution displaced muscle. It displaced muscle slowly, over the course of three generations. It displaced muscle while creating, in its wake, entirely new categories of work the machines could not do—clerical work, professional services, knowledge work, the entire white-collar economy that emerged in the late nineteenth and early twentieth centuries. A loom replaced a weaver, but a loom did not replace an accountant. The machinery could spin thread. It could not write a contract. It could not diagnose a patient. It could not teach a class. The displacement was real and the displacement was painful, and the dispossession that the early Industrial Revolution caused fueled the labor movement, the Progressive Era, the trust-busting, the eight-hour day, the weekend, the entire social contract that defined the twentieth century. But the displacement was bounded by the machines’ limitations, and the displaced muscle had somewhere to go.
AI is doing the opposite of all of that. AI displaces judgment. It displaces judgment at the speed of cloud compute. It displaces judgment while producing, in its wake, new categories of work that the machines also do. The whole reassuring nineteenth-century model—machine takes the old job, new job appears, worker retrains for new job, society moves up the ladder—does not apply, because the model is also doing the new job.
The dishwasher did not put the dishwasher repairman out of work. The radiology AI puts the radiologist and the radiology coder out of work simultaneously, and the supervisory job that emerges to oversee the radiology AI is one job for every twenty radiologists it replaced. The math does not balance. The math has never had to balance before. The math is the story you are not being told because it is the story nobody in power has figured out how to tell.
And here is what makes the historical analogy fail at the deepest level: There is no Andrew Carnegie. There is no robber baron to demonize because there is no robber baron. The lever that drove the Industrial Revolution had names attached to it—Carnegie, Rockefeller, Morgan, Ford. The lever driving this revolution is the marginal cost of producing a token of intelligence, and the marginal cost is approaching zero. The villain is the math. The villain is the velocity. And as I have already said, there is nobody standing at the lever.
This is not the Industrial Revolution. The political vocabulary we inherited from the Industrial Revolution does not work here. We need new words, and we do not have them yet, and the political class has not even begun looking.
Maybe AI can coin some neologisms for us, grinding out tokens of wit?
The Coalition Nobody Is Building
Here is what I believe, and it is the argument the rest of this essay exists to land.
A coalition is sitting in the United States right now, larger than any single demographic the major parties currently target, more politically homeless than any group has been since the 1968 realignment, more frightened and more angry and more ready to move than the political class has yet noticed. The coalition is the American knowledge-worker class. It is not small. It is most of us.
It is the fifty-eight-year-old marketing director whose firm just laid off the entire design team. It is the forty-five-year-old illustrator whose freelance clients now use Midjourney. It is the fifty-three-year-old technical writer at the medical-device company who has watched her department contract for three quarters in a row. It is the thirty-two-year-old paralegal who is starting to see the bench thin. It is the twenty-eight-year-old freelance copywriter who has lost four retainers this year. It is the journalist on staff at the regional paper that just announced a third round of buyouts. It is the entry-level associate at the law firm who cannot make partner because the partner track was eliminated. It is the twenty-two-year-old college graduate with the impeccable GPA who cannot get an interview. It is the fifty-year-old data analyst, the forty-year-old curriculum designer, the thirty-five-year-old market researcher, the sixty-year-old grant writer at the regional nonprofit, the forty-two-year-old in-house counsel at the mid-sized company, the thirty-eight-year-old podcast producer, the fifty-five-year-old high-school guidance counselor, the forty-five-year-old radiology technologist, the thirty-year-old translator, the sixty-three-year-old senior accountant, the forty-eight-year-old corporate trainer, the thirty-five-year-old SEO specialist, the fifty-year-old patent agent, the forty-year-old script supervisor at the regional film commission, and on and on and on and on.
Does this list make you numb? Or angry?
That is not a demographic. That is the actual American middle class. They are the people who got the degree, did the work, paid the mortgage, voted in every election, did all of the things they were told to do, and are about to find out that the contract they signed expired without anyone bothering to tell them.
They are not Democrats. They are not Republicans. They are not loyal to any party currently constituted. They are loyal, when they are loyal, to the social bargain that said if you go to school, work hard, and stay current, the work will be there. The bargain is failing. The party that names the failure—names it with specificity, names it with names, names it with a plan—will inherit the future.
And if that party is smart, it will include the plumbers, and the farmers, and the factory workers, and it will build a political juggernaut the likes of which this nation has never seen.
I am not exaggerating. The party that builds the wage-insurance bill that covers AI-displacement transitions the way the trade-adjustment-assistance programs covered offshoring. The party that builds the public-option creative-work fund that pays the displaced writer, illustrator, journalist, designer to keep working at half rates while the new shape of the economy emerges. The party that taxes the productivity gain at the model layer and routes the revenue back to the people whose labor is being absorbed by it. The party that builds the local arts and craft and care economies the way the WPA built the murals and the Federal Writers’ Project built the guides. The party that says, out loud, we see what is happening to you, we are not going to pretend it is your fault, and we are going to fight for a country that does not concentrate every gain at the top of the model stack while leaving you to figure out what to do with your fifties.
That party does not yet exist. It is not the Democratic Party as currently constituted. It is not the Republican Party as currently constituted. It is the party that some politician is going to be brave enough to build, by going into the rooms where the displaced are sitting and listening to what they are afraid of and then writing the policy that addresses the fear.
I am not naive. I know that great realignments do not happen on schedule. I know that the political class is slow, and the people inside it are mostly cautious, and that the courage required to be the first to name what is happening is greater than the courage required to be the third or the fourth. But I will tell you this. The first politician who does it—Democrat, Republican, independent, third-party, I genuinely do not care—will build a coalition that crosses every line that currently defines our politics. Urban and rural. Coastal and Midwestern. White and Black and brown. Young and old.
The math is the math.
There are tens of millions of us. We are looking for somebody.
We have not found her yet.
What This Is Not
This is not a call to ban the technology. I doubt the technology can be banned, and I genuinely do not know whether it should be. The model on my desk has done things I will spend the rest of my life thinking about. The reference site I have built has a chance to help patients make better decisions about peptides, which is to say better decisions about their own bodies, which matters.
For a nerd who thinks style guides and content strategies are fun, it has been some of the most rewarding work I’ve ever done.
Building the Claude skills I mentioned earlier, the CMO, the CLO, the others, was a goddamned joy, I tell you.
There were nights I literally cried watching this thing become real.
The miracle is real. I am not pretending it isn’t.
This is also not a call for Luddism, for retreat, for the comforting lie that we can stop the math by passing a law or staging a strike. The math is not stoppable. It is allocatable. The question has never been whether the productivity gain happens. The question has always been who captures it, who pays the cost of it, and what we owe to the people whose labor it replaces.
That is a political question, not a technological one. The technological question has been answered. The political question is the only one left.
A writer named Joanna Maciejewska posted a sentence on social media in 2024 that I think about every day. She wrote: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
That sentence is the indictment of the entire arc this technology has taken so far. We were promised the dishes and the laundry. We were promised the lawn and the form-filling and the spreadsheet drudgery and the calls to insurance companies and the prior authorizations and the parking tickets and the bureaucratic friction that grinds the joy out of being a person in 2026. We got the art and the writing and the music instead. We got the parts of being human that we treasure most automated first, while the parts we wanted automated—the parts that took the time we wanted for the art—remain stubbornly, mockingly intact. It gets worse with every passing day, as if someone has figured out that the way to keep us all depressed and hopeless is to bury us in enshittified tools and endless IVR phone trees that go nowhere.
The technology did not have to go this way. There is no law of the universe that says generative AI cannot fold laundry. There is just a market that pays more for a model that writes a marketing brief than for a robot that scrubs a toilet, so the market built that model first, and the lawn is still there to mow.
Maciejewska’s sentence is a policy. The party that hears it, and that names the worker before it names the model, will build a generation.
The Future
I am going to close where I started. At my desk. The cat on my lap. Claude Max Opus 4.7 and Claude Code are open on the screen, working on Peptidings.com. Six weeks ago this site did not exist.
Tomorrow there will be twenty more pages. The sunlight streams through my office window the way it always does this time of year.
Building Peptidings.com has been one of the most fulfilling experiences of my career, such as it is. I think it is a public good, and it can help people. Maybe it will.
Maybe it can help get my moribund career back on some kind of track.
Maybe you’ll agree, or maybe you’ll see it as running with the devil in a way you cannot support.
Either way, I believe we must figure it out.
I am not surrendering anything to anyone.
I am not asking the reader to surrender.
The future is coming with us or without us. The math is the math and the technology is the technology and the productivity gain is real—and life-altering.
But the future that we build—the future that comes after the math, the social contract that we put underneath the people whose work is being absorbed, the wage insurance and the transition guarantees and the public option for creative labor and the tax on the model stack and the politicians brave enough to name the people they are fighting for—that future is still ours to build.
Some politician is going to read this. Or read something like it. Or hear it from a constituent in a town hall, or from a friend at a fundraiser, or from a staffer who just lost a sister to the layoffs. And one of them is going to stop being cautious.
One of them is going to walk to a podium somewhere and say the things that the rest of them have been afraid to say.
When she does, the rest of us will be waiting.
A word from Cliff Schecter, founder of Blue Amp Media:
Lawrence Winnerman, the guy you just read, is my COO at Blue Amp Media. He is also one of the sharpest minds in this country on what is actually happening to the people the political class has stopped seeing.
If what you just read landed—if you felt the floor shift under you the way I did the first time he sent me a draft—here is what you do about it.
Plug into BAM at blueamp.co. It is free, it is daily, and it is the unsanitized political coverage the legacy press refuses to do. Subscriptions are just $60 per year, and they support content just like this.
And if you have a couple of extra bucks—buy us a coffee.
Everything helps. We don’t have a billionaire owner. We have you, we have Lawrence, and we have the fight. If this article made you feel something, drop us a comment.
Subscribe. Donate. Forward this essay to one person who needs to read it.
We are still here. So are you. We’ve got work to do.
— Cliff