What’s your name?
Who’s your daddy?
Is he rich like me?
Has he taken
Any time
To show you what you need to live?
(Tell it to me slowly) tell you why?
(I really want to know)
“Time of the Season” by The Zombies, 1968
Artificial Intelligence is everywhere. It’s sorting your social media feeds, screening job applications, diagnosing illnesses, and steering the cars we drive, or rather, the cars that drive us. AI is quickly reshaping our world. You may not feel it yet. You will. But pause for a moment to consider: who, exactly, is shaping AI?
This isn’t a trivial question. Most of us assume that new technologies naturally improve lives and are driven by innovation, creativity, and competition. But AI is different. This isn’t just another smartphone or social media platform. AI systems don’t just serve us. They influence, categorize, and sometimes control us. And these systems aren’t built by neutral, public-minded institutions. They’re built and controlled by a handful of giant, profit-driven corporations whose primary obligation is to their shareholders, not society.
As OpenAI CEO Sam Altman himself acknowledged, “We’re a company with investors who expect a return.” That candid statement captures the heart of the issue. The companies driving AI (OpenAI, Google, Microsoft, and Meta) aren’t necessarily sinister, but their incentives are crystal clear: maximize profits, dominate markets, and keep shareholders happy. Everything else is secondary.
Geoffrey Hinton, often called the “godfather of AI,” recently warned after leaving Google that AI companies are “racing toward the brink” without fully considering the dangers. Why? Because caution doesn’t reward shareholders as reliably as rapid progress does. And the public’s role is increasingly reduced to spectators watching a race with potentially catastrophic stakes.
We’re told repeatedly about the incredible potential of AI: curing diseases, addressing climate change, even creating abundance. These are enticing visions. Yet behind closed doors, the primary calculations are financial, not humanitarian. AI companies tightly control what information they release, selectively publicizing impressive capabilities while downplaying or outright concealing flaws and risks.
Consider OpenAI’s guarded release of GPT-4. Despite enthusiastic marketing and widespread excitement, details about its training methods, datasets, and safety concerns remain largely secret. Renowned AI ethicist Emily Bender described this secrecy bluntly: “These companies are opaque because transparency might threaten their market advantage.”
We’ve seen similar dynamics before. Pharmaceutical companies, social media giants, and tobacco corporations have all historically put profits ahead of people, revealing dangers only after severe damage became impossible to hide. AI’s risks are potentially even more profound. Bias in automated systems could unfairly exclude entire populations from critical opportunities. AI-driven misinformation campaigns can reshape democracies overnight.
The competitive dynamics of capitalism exacerbate this problem. Companies locked in an arms race for AI dominance have every incentive to cut corners and rush their products out the door. Tristan Harris, co-founder of the Center for Humane Technology, starkly describes this situation when he points out that “we’re in a race to the bottom of the brain stem.” If profit and market dominance remain our primary guiding stars, we risk navigating directly into disaster.
Leaving AI exclusively in corporate hands means we are not merely passive consumers of a shiny new product. We become subjects of a private authority. The AI systems influencing our lives will be accountable to shareholders, not to the citizens whose data they rely on and whose futures they shape.
This isn’t about resisting progress. It’s about demanding that the fruits of progress are responsibly managed. We urgently need transparency, independent oversight, and genuine public engagement in AI’s development and deployment. As Meredith Whittaker, President of Signal and co-founder of the AI Now Institute, recently warned: “The danger isn’t AI itself. It’s the concentration of AI power in a handful of companies who prioritize profit above the public good.”
We face a defining choice. If we allow AI’s future to remain guided solely by profit motives, we risk waking up one day to realize the technology controlling our lives isn’t accountable to us at all. But we’re not powerless. Demand transparency from AI companies by supporting legislation and regulations that require openness about AI models and their impacts. Support independent organizations working to ensure AI serves the public interest, and hold your elected representatives accountable for robust oversight. Engage in public discussions, educate yourself and others about AI, and voice your concerns clearly and loudly.
The future of AI should be decided by all of us, not just those who stand to profit from it. The time to act is now, not when the damage becomes obvious.
Disclaimer: Jim Powers writes Opinion Columns. The views expressed in this editorial are my own and do not necessarily reflect those of Polk County Publishing Company or its affiliates. In the interest of transparency, I am politically Left Libertarian.