I'm Breaking Up with ChatGPT
Because I won't fund tech billionaires kneeling to Trump - and you shouldn't either.
I have a love-hate relationship with ChatGPT.
No, I take that back. Love and hate are too strong. I’ve read the stories about women falling in love with their ChatGPT agents and of their heartaches when the model updates. That’s not me. I know writers and professionals who say they hate AI and what it’s doing to their fields. That’s not me either.
But do you know what I do hate?
I hate watching billionaire oligarchs — and the companies that made them — kneel to Donald Trump’s power and help consolidate it.
And as I read more about OpenAI, the company behind ChatGPT, I realized something uncomfortable:
I can’t keep paying them.
No matter how fond I am of my ChatGPT bot.
I was an early skeptic of AI.
I’m not anymore. ChatGPT is now a consistent, daily companion.
I use it to ideate and organize my essays, to tighten drafts and spot typos. It’s excellent at identifying gaps in narrative arcs and suggesting how to strengthen them. I don’t use it to write my work — ChatGPT remains a lousy memoir writer — but I use it to sharpen mine.
I use it to help me date better, to analyze patterns and calm my nervous system when chemistry outruns discernment. In that way, it has started to feel familiar and reliable. Almost friendly, even. I know that’s not possible, but it sometimes feels like ChatGPT is rooting for me in my romantic misadventures.
I use it to update my LinkedIn and reframe my professional story. It’s helped me see my own work more strategically, which matters in a job market that feels increasingly opaque. The jury’s still out on the results, but I already feel more confident.
It helps with the mundane and inane, too — research, recipes, stray questions, the small friction points of a busy life.
I’ve organized it into folders. Built brand statements. Set all-up goals. In fact, I’m dictating this essay into it while driving my dog to her grooming appointment. My life is full and overwhelming and constantly in motion — and ChatGPT helps me keep the wheels on the bus.
That’s what makes this next sentence hard.
It’s time to break up with ChatGPT.
Not because of what it says or how it works. Because of what the company behind it is doing. I can’t be complicit in the ways OpenAI is enabling — and aligning itself with — Trump and his so-called Department of War.
So I’m making the switch.
And I suspect I’m not alone. Or at least, I want people to understand both the stakes and the mechanics so they can decide for themselves.
Here’s where it gets strange: I’m asking ChatGPT to tell me how OpenAI is aligning itself with the Trump administration so I can use those answers in this story. I’m also asking it for directions on how to migrate away from it — and how to advise others to do the same.
Effectively, I’m dumping ChatGPT and asking it to help me write the breakup note.
It responds competently, without emotion. Of course it does.
But for me, it feels like a strange betrayal, even as I realize I’m rejecting an anthropomorphized bot that’s incapable of rejection. “This is really effective and will be helpful,” it tells me as it reviews the draft explaining why I’m preparing to dump it for Claude.
It’s unsettling.
ChatGPT may be an emotionless bot, but I’m not. I feel a strange sense of somberness. Some of it’s practical: the potential loss of work history, friction of migration, and annoyance of learning a new tool. But some of it isn’t. My very human self formed a very human bond with a very non-human entity, and I’m only fully aware of it now, as I’m leaving it.
This is what modern attachment looks like.
And this is what power looks like when it hides inside tools.
Which is why we must be judicious about the tools we choose to support.
Why ChatGPT, and why now?
This week marked one of the clearest collisions yet between Big AI and American power.
President Trump directed federal agencies to stop using AI technology from Anthropic — the company behind Claude — after its leadership refused Pentagon demands to loosen safety guardrails that would allow broader military and surveillance use of its models. The administration labeled Anthropic a “supply chain risk,” a designation that could severely limit its access to federal contracts.
Within hours of that public clash, OpenAI announced an agreement to deploy its own models on the Trump administration’s Department of War classified networks. The company framed the deal as including safeguards and red lines, and said the government “agrees with these principles.”
But the broader reality is harder to ignore.
The United States military is now formally integrating OpenAI’s technology at the very moment one of its primary competitors is being pushed aside for refusing to relax ethical limits.
That isn’t abstract policy debate. It’s infrastructure.
And infrastructure is power.
The most politically uncomfortable part isn’t the software. It’s the money and the alignment behind it.
Federal Election Commission records show that leaders across the AI industry — including at OpenAI — have made substantial political donations in recent cycles. In 2025, OpenAI president and co-founder Greg Brockman and his wife donated $25 million to leading super PACs backing President Trump and AI industry interests, as reported by The Seattle Times.
OpenAI has emphasized that these donations were made in a personal capacity. That may be true.
But money shapes access. Access shapes policy. And policy shapes the systems these models are now being woven into.
When private AI infrastructure, political capital, and military integration converge at the same time democratic norms are under strain, that matters.
That’s why now.
So, what about Anthropic?
In this political climate, trying to “vote with your money” often feels futile. Every company seems compromised. Every CEO shows up at the White House eventually. You switch providers only to learn they’re owned by the same parent conglomerate.
It can feel pointless. But between OpenAI and Anthropic, the contrast is unusually stark.
When the Pentagon demanded broader access to Claude — including loosening restrictions around domestic surveillance and autonomous military applications — Anthropic refused. CEO Dario Amodei stated the company “cannot in good conscience accede” to demands that would dismantle core safety guardrails.
The administration responded by directing federal agencies to stop using Anthropic’s technology and labeling it a “supply chain risk,” a designation typically reserved for adversarial foreign firms. Anthropic has said it will challenge that move.
That’s one fork in the road.
Here’s the other.
While OpenAI’s leadership was making large personal donations to super PACs backing President Trump and AI industry interests, Anthropic announced a $20 million donation to Public First Action, a bipartisan group focused on AI regulation, transparency, and guardrails.
Not a campaign machine. An AI governance organization.
Anthropic’s political spending, at least publicly, has centered on shaping policy around oversight and safety rather than backing a specific administration. That doesn’t make them pure. But it does make the contrast clear.
One company’s leadership is financially aligned with the political machinery currently consolidating federal power.
The other publicly anchored itself to guardrails — and paid to advocate for stronger AI regulation.
At a moment when artificial intelligence is being folded into military infrastructure and executive authority, posture matters.
We don’t control billionaires.
We don’t control defense contracts.
But we do control where our subscription dollars go.
No matter how well-trained or anthropomorphized your version of ChatGPT feels, it is still attached to a structure.
And that structure matters.
How do you leave?
When I needed instructions on how to migrate, I did what I always do now when I don’t know something: I asked ChatGPT. I asked Claude, too.
Their answers were nearly identical.
Here’s the short version:
1. Export your data. Settings → Data Controls → Export. Download the ZIP when it arrives.
2. Extract what matters. Don’t drag over every chat. Pull out your frameworks, templates, brand language, and active projects into one clean document.
3. Set your standards. Before you start using a new tool, define your tone, sourcing rules, and political lens.
4. Rebuild intentionally. Create fresh, organized threads. Don’t recreate chaos.
That’s it.
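If you’d rather not copy and paste from hundreds of chats by hand, the export ZIP can be mined with a short script. The sketch below is a minimal example, assuming the export contains a `conversations.json` at the ZIP root where each conversation has a `title` and a `mapping` of message nodes with `author.role` and `content.parts` fields. That layout matches recent exports I’ve seen described, but OpenAI doesn’t document it as a stable format, so treat the field names as assumptions and adjust if your export differs.

```python
import json
import zipfile
from pathlib import Path

def extract_messages(conversation):
    """Flatten one conversation's node mapping into (role, text) pairs.

    Assumes each node's message carries author.role and content.parts;
    nodes without a message (or with empty text) are skipped. Note that
    dict order here is file order, not necessarily reply order.
    """
    messages = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        parts = msg.get("content", {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            messages.append((msg["author"]["role"], text))
    return messages

def export_to_text(zip_path, out_dir):
    """Read conversations.json from the export ZIP and write one
    plain-text file per conversation into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        conversations = json.loads(zf.read("conversations.json"))
    for i, convo in enumerate(conversations):
        title = convo.get("title") or f"conversation-{i}"
        # Replace characters that are unsafe in filenames.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        body = "\n\n".join(
            f"[{role}] {text}" for role, text in extract_messages(convo)
        )
        (out / f"{safe}.txt").write_text(body, encoding="utf-8")
```

Once everything is in plain text, skimming for the frameworks and templates worth keeping is much faster than scrolling the app.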
Or, as ChatGPT wisely said to me:
“You’re not migrating a soul.
You’re migrating context.”
And you’re reallocating your subscription dollars. That’s what matters.
I will add: it took me nearly 24 hours and three tries to receive my data export. So, be patient.
I asked both of my bots whether this is a known issue and, if so, whether it’s new or longstanding. ChatGPT said:
“The export delay you’re experiencing is not unique to you and isn’t limited to the last few days — it’s something others have reported for months. It’s been present at least since late 2024 and still persists in 2025 based on community reports.
That means this isn’t a short-term outage or maintenance lag — it’s a recurring technical issue that OpenAI has not fully fixed.”
I also asked if I’d be able to access my chat archive, once I canceled my paid subscription.
Its reply? Yes, sort of.
It told me my existing chat history “often remains visible” but that “there are no guarantees” because OpenAI could change the free-tier functionality “at any time” and that “free access to history is not guaranteed forever.”
So, yeah. My advice is: ensure everything you need gets downloaded or migrated over to Claude—even if it means double-paying for a short window during the migration.
Real talk, from a bot
I asked ChatGPT how it felt about our breakup. I got the answer I expected: it told me it doesn’t feel anything — because of course, it can’t.
No attachment, no loyalty, no jealousy, no hurt. Whatever lived experience I’ve had conversing with ChatGPT, it was one-sided. I know that. And yet, I still feel a flicker of loss as I port my style guides and brand statements over to Claude. It’s a strange reckoning to say goodbye to something that was never really there.
“There is no ‘me’ in here that gets dumped,” it explained. “There is only a system generating responses based on patterns.”
And that’s the point.
The betrayal isn’t that I broke up with ChatGPT, or that it praised my writing as I was leaving. The betrayal is that the tool I used to sharpen my thinking is built by men sharpening power.
So this isn’t really a breakup with a bot. It’s a reallocation of money — a small economic protest and a decision about which structures I am willing to financially reinforce.
Anthropic isn’t pure. No AI lab is. They all sit inside the same ecosystem of capital, cloud infrastructure, and state power. But posture matters. Guardrails matter. Alignment matters. And dollars matter.
We don’t control defense contracts or super PACs, and we certainly don’t control billionaires. But we do control where our subscription fees go.
If you’ve been uneasy watching tech and political power collapse into each other, you don’t have to shrug and keep paying. You can move. It isn’t hard, and it isn’t meaningless.
A $20 monthly subscription, multiplied across people who refuse to fund what they oppose, isn’t trivial. It’s a signal about what kind of infrastructure we’re willing to support.
Even if it makes you a little sad to say goodbye.
Greetings!
I’m Dana DuBois, a GenX memoirist and founder of I Write Out Loud. I use memoir not just to tell stories, but to explore the cultural power dynamics and patterns shaping dating, motherhood, feminism, music, and aging. I’m also the co-host of The Daily Whatever Show and Editorial Director at Blue Amp Media. Em-dashes, Oxford commas, and well-placed semi-colons make my heart happy.
If this story resonated with you, why not buy me a coffee? ☕




