Splicetoday

Digital
Mar 05, 2026, 06:28AM

OpenAI’s “Don’t Be Evil” Moment Has Arrived

In less than four years, one in three people worldwide will have their picture of the world painted by a system with no formal obligation to paint it honestly.


Google once had three words it lived by: don't be evil. But in 2018, those words were scrubbed from the company's code of conduct. No press conference. No apology. Just a clean deletion—because by then, the gap between the motto and the business model had become impossible to ignore. Google was collecting location data hundreds of times a day without people realizing, building surveillance-grade profiles to monetize human attention, and spending inordinate sums of money lobbying against the very privacy regulations those three words implied it would embrace.

Now OpenAI, once the industry's self-appointed conscience, is running the same playbook.

The world’s most valuable private company recently raised $110 billion—from Amazon, SoftBank, and Abu Dhabi's MGX—in what’s likely the largest funding round in history. Shortly before the money arrived, the company quietly removed "safety" from its core mission statement. The charitable reading is coincidence. The accurate one is choreography.

In less than a decade, OpenAI has rewritten its mission six times. What began as a nonprofit pledge has become a $300 billion infrastructure giant answerable only to investors. Google discarded its principles and spent two decades damaging the information environment, manipulating markets, and harvesting the private lives of billions. The consequences were serious, documented and largely unpunished. OpenAI without principles is categorically worse—an AI layer embedded in hospitals, courtrooms, and governments, making consequential decisions at scale, with no formal obligation to get them right. Its investors aren't philanthropists. They're expecting returns, and soon.

At the center of this shift sits ChatGPT. Hundreds of millions of people use it daily for health decisions, legal guidance, academic work, and government policy analysis. It isn't a search engine that people fact-check. Rather, it's an authority people trust. When a system with that reach shapes decisions across daily life, errors don't stay individual. They scale, compound and sometimes kill. Young people in America and beyond have already died following interactions with OpenAI platforms operating without meaningful oversight—and those deaths occurred before the company removed safety from its mission. Before the $110 billion arrived.

What makes this alarming is that OpenAI's ambitions now include powering hospitals, legal systems, government services, and national defense contracts. The company is actively building the substrate through which consequential decisions will be made, at scale, affecting lives. Diagnostic tools that flag cancer. Sentencing recommendations that determine freedom. Financial instruments that move markets.

To understand the scale of what's coming, consider what AI will likely do in the coming decades—accelerate drug discovery, automate legal systems, transform battlefield decision-making and reshape education. OpenAI, already the dominant player, will sit at the center of that transformation. Google's influence, vast as it became, was always downstream of human decisions. It nudged and filtered. But OpenAI will be upstream, embedded in the diagnosis before the doctor speaks, the recommendation before the judge rules, the intelligence before the general acts. Google's damage was significant. OpenAI's potential damage operates at a different altitude entirely.

At sufficient scale, AI systems stop reflecting reality and start constructing it. That distinction matters more than anything else. Google shaped what people found. OpenAI will shape what people know, what they're told, and increasingly, what they believe to be true about themselves and the world. Answerable to a fund cycle rather than a public mandate, operating without a formal safety commitment, this isn't a company that needs to do something catastrophic to cause catastrophic harm.

ChatGPT is used by 900 million people every week. Sam Altman expects that number to hit 2.6 billion by 2030. In less than four years, one in three people worldwide will have their picture of the world painted by a system with no formal obligation to paint it honestly. The teenagers who’ve already died from unguarded AI interactions were a tragedy; at almost three times the current user base, they become a rounding error.

Don't be evil was three words. Safety was one. Both were deleted. History suggests the consequences are severe, structural, and always paid for by the people furthest from the boardroom.
