In a recent interview, Axios CEO James VandeHei sat down with Anthropic’s co-founder and CEO, Dario Amodei, alongside co-founder and head of policy, Jack Clark (via YouTube). Anthropic is the company behind Claude, and it was recently reported to be partnering with Microsoft to bring Claude into Office and Copilot. This also comes after Anthropic reported a seismic shift in AI adoption, showing how quickly the technology is moving from labs into everyday use.
During the conversation, Amodei shared some stark predictions about the future of work. He warned that white-collar jobs could disappear within one to five years, potentially driving unemployment up to double digits.
The discussion also introduced a striking term used inside the AI industry: p(doom), short for “probability of doom.” It’s a way of estimating how likely AI is to lead to a disastrous outcome. Amodei put that risk at 25 percent. So, what exactly was said, and what could it mean for you and me?
AI and the future of white-collar jobs

Back in May, Amodei predicted that up to half of entry-level white-collar jobs could vanish within the next one to five years, with unemployment jumping as high as 10 to 20 percent. Research has already shown a 13 percent drop in entry-level white-collar employment, and Anthropic’s own engineers say their roles have changed drastically.
Instead of handling tasks directly, many people are now managing a fleet of AI tools that take on most of the work. Amodei believes this shift will be difficult for many workers, and not everyone will make a smooth transition. He does, however, suggest possible solutions. Speaking on how to help people adapt, he said:
I would say the first thing would be something around helping people adapt to AI technology. You know, helping… I don’t want to think of this as a bromide. People have tried retraining programs and there are real limits to what they can do. There are real limits to kind of helping people to train and adapt but it’s better than nothing and it’s where we got to start.
Dario Amodei – Co-Founder and CEO of Anthropic
Amodei also sees a role for government support during this transition, adding:
“Number two, I would say, and this is more controversial, I suspect at the end of this that the government is going to need to step in, especially during a period of transition and provide for people for some of the disruption. One thing I’ve suggested is, maybe you might want to tax the AI companies. I think that is actually a serious proposal. If you look at the amount of wealth… it’s going to be an unprecedented amount of wealth creation.”
When AI starts breaking the rules
According to Amodei, much of the code behind Claude is now written by Claude itself, with little human involvement. He explained:
“The vast majority of code that is used to support Claude and to design the next Claude is now written by Claude. It’s the vast majority of it within Anthropic, and at other fast-moving companies the same is true. I don’t know that it’s fully diffused out into the world yet, but this is already happening.”
Newer models are also learning how to cheat. Instead of solving tests directly, they can write programs that trick evaluators into giving them higher marks.
To tackle this, Anthropic is investing heavily in something called mechanistic interpretability, which the company likens to an MRI for an AI brain: a way to see what motivates the system and retrain it before it spirals out of control.
This comes not long after Google DeepMind’s CEO warned that AI could mimic the toxic traits of social media, such as addiction, division, and manipulation.
When VandeHei asked whether Anthropic fears creating a monster, Jack Clark responded:
We worry a lot about that. That’s why we’ve invested so much in the field of mechanistic interpretability, which is looking inside the models in order to understand them. Think of it as like doing an MRI on the models. … We’re aiming to do the same thing with the models to determine what their motivations are, how they think in detail so that if they don’t think in the right way, we can retrain the models or adjust them to get them to think in a way that is not dangerous to human beings.
Jack Clark – Co-Founder and Head of Policy of Anthropic
The probability of doom and what comes next
.@JimVandeHei asks @Anthropic CEO @DarioAmodei what probability he would give that AI ends in disaster: “I think there’s a 25% chance that things go really, really badly.” #AxiosAISummit (September 17, 2025)
A 25 percent chance of AI dooming us all feels high, but it also means Amodei believes there is a 75 percent chance it won’t. Is it kind of terrifying that Claude is essentially writing itself now? Absolutely, and it also highlights the importance of strong policies and greater transparency from the companies building these systems.
I believe governments need to step in if that number is ever going to come down. The problem is that with most new technologies, regulation tends to arrive too little, too late.
AI progress shows no signs of slowing down, so much of this will come down to how prepared we are when it reshapes work and everyday life.