There’s a quiet revolution taking place behind the walls of public administration. It doesn’t wear jackboots. It doesn’t shout from podiums. It doesn’t roll tanks through city streets. Instead, it hums in server rooms, buried deep in code, trained on historical data and instructed to cut costs. It’s artificial intelligence, an austerity engine cloaked in the language of innovation.
Governments the world over are embracing AI not just to improve services, but to eliminate them. Not to help people, but to manage them more cheaply. And in doing so, they are rewriting the social contract without ever holding a vote.
Austerity has long been the preferred medicine of governments looking to appease bond markets and billionaire donors. But unlike the public fights over food stamps or Medicaid cuts in decades past, today’s austerity is being administered with the antiseptic efficiency of an algorithm. It’s the logic behind Elon Musk’s DOGE. And that’s what makes it so dangerous. It’s not loud. It’s not visible. It’s quietly catastrophic.
In the United Kingdom, government forecasts claim that digitizing public services with AI could save tens of billions of pounds annually. That promise sounds benign, even responsible, until you consider what it actually means. It means fewer people answering the phone when you need help. It means opaque systems denying disability claims based on patterns in your online behavior. It means elderly citizens being told by a chatbot that they no longer qualify for housing benefits.
And it’s not just Britain. Here in the United States, the federal government is pushing hard to modernize its operations with AI. The White House frames it as a drive for agility and cost-effectiveness. That’s code. What it often means in practice is smaller workforces, automated eligibility determinations, and a greater reliance on digital systems that don’t care if you’ve lived a hard life or made one mistake too many.
Take Hong Kong. Facing a massive budget deficit, the government plans to slash 10,000 civil service jobs while leaning on AI to keep the machine running. The plan is framed as efficiency. The result, inevitably, will be fewer humans helping other humans, and more people slipping through the cracks of automated bureaucracy.
This isn’t innovation. It’s abandonment. It’s the weaponization of math to do what politicians are too cowardly to say out loud: “We want to spend less on you, and we don’t care if it hurts.”
Some might argue that AI is just a tool. That it depends on how it’s used. And sure, in theory, AI could be used to increase access, detect fraud without punishing the innocent, or optimize the delivery of services to those most in need. But theory isn’t practice. And when the mandate given to these systems is to cut costs, not increase care, the outcome is predetermined. Garbage in, cruelty out.
What’s worse, the black-box nature of these systems means that when something goes wrong, and it will, you may never know why. Why your SNAP benefits were denied. Why your unemployment claim stalled. Why your housing voucher was revoked. The decision will have been made by a neural network trained on past data, operating according to rules no one elected and no human fully understands.
The implications go beyond public services. AI’s creeping influence on governance is creating a world where accountability is always one step removed. “It wasn’t us,” officials will say. “It was the system.” But the system isn’t neutral. It reflects the biases of its creators, the incentives of its funders, and the blind spots of the data it ingests. It is, as Cathy O’Neil has written, an opinion embedded in mathematics. And in this case, that opinion is austerity.
Austerity disguised as modernization is not a new trick. In fact, it’s one of the oldest. Starve a program, then blame it for failing. Replace the workers who care with software that doesn’t. Then call it innovation. But we must be clear-eyed about what’s happening here: AI is being used to shrink the public sphere while insulating the powerful from the consequences. It is the outsourcing of political will to an unaccountable machine.
In a functioning democracy, this would spark outrage. And maybe it still could. But we are so exhausted by disinformation, by crisis fatigue, by the sheer velocity of change that we barely register the loss. We click “accept” on another cookie notice. We tap through another chatbot menu. We adapt. And in doing so, we normalize a world where our needs are handled not by people, but by pattern recognition models optimized to say no.
The danger isn’t that AI is making government too smart. It’s that it’s making government inhuman. And when humanity is subtracted from decision-making, justice becomes a casualty.
This is not an anti-technology argument. I have great respect for AI; at many tasks it already outperforms us. It’s a plea for moral clarity. We cannot allow a future in which policy is outsourced to machines and then called progress. We cannot accept that a budget shortfall justifies algorithmic cruelty. And we cannot pretend that the silent unraveling of the social safety net is anything less than a political choice dressed in technological clothing.
Governments have always had tools. What matters is how they’re used, and whom they serve. Right now, AI is serving austerity. It’s being used to save money at the expense of those who have the least of it. And if we don’t fight to reverse that trend, we’ll find ourselves ruled not by laws or leaders, but by systems designed to forget we exist.
Quietly catastrophic. That’s how it begins.
Disclaimer: Jim Powers writes Opinion Columns. The views expressed in this editorial are my own and do not necessarily reflect those of Polk County Publishing Company or its affiliates. In the interest of transparency, I am politically Left Libertarian.