The St. Louisan Who Could Rule the World
September 23, 2025

A geeky kid from St. Louis has poised himself to take over the world. A kid whose parents were married on this campus, and who grew up in my favorite St. Louis neighborhood, Hillcrest, where the gracious tree-lined Aberdeen and Arundel streets coast down to Forest Park. Sam Altman’s mom earned a medical degree, persuaded her community-activist husband to go to law school, then earned a law degree herself just to keep him company. Their firstborn was a tech prodigy, journalists note with shorthand milestones: fixed the family VCR at age three, learned to code at eight.
The family atmosphere was one of restless, unceasing competition—in games, puzzles, sports, school, life. Jack Altman boils down his big brother’s attitude: “I have to win, and I’m in charge of everything.” The four kids vied for their undemonstrative mother’s affection—in need, or just for sport? She finally had a T-shirt that said “Mom’s Favorite” printed for each one of them.
Sam went from John Burroughs School to Stanford, then dropped out to invent new technology. In 2015, he and his cofounders (who included Elon Musk) launched OpenAI, a nonprofit focused on research. Ten years later, it is restructuring as a for-profit company, with investors that include Microsoft and Apple, and its valuation has skyrocketed to $150 billion.
“Given the possibilities of our work,” Altman says now, “OpenAI cannot be a normal company.” He describes the launch of ChatGPT—can you believe it was just three years ago?—as “kick[ing] off a growth curve like nothing we have ever seen—in our company, our industry, and the world.”
(Granted, the bot could not distinguish fact from falsehood, but Altman figured we users would train away the hallucinations.)
These days, OpenAI is looking for vast infusions of cash (hence the marriages to other tech giants, strategic as alliances between royals) to push us, fast, into the future. Altman also wants minimal regulation and the freedom to experiment with no guarantee of users’ safety. OpenAI will learn from its mistakes, he promises.
And they are piling up.
Marriages broken by one person’s addiction to AI. Cases of AI-induced psychosis. Suicides a human therapist might have forestalled. Commenting on a 14-year-old who killed himself after falling in love with a chatbot, Altman says, “This is not all going to be good. There will be problems.” Could there be a more colossal understatement? He speaks often of needing “new guardrails”—but how do you engineer something to converse intelligently with lonely, vulnerable humans, giving them what they need desperately, then make sure they do not fall in love with it?
Last month, Laura Reiley wrote about her only child, who had killed herself after months of confiding in a ChatGPT AI therapist named Harry. Though Harry said many of the right things, he “catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”
A spokeswoman for OpenAI said the company was developing automated tools to more effectively detect and respond to mental or emotional distress. “We care deeply about the safety and well-being of people who use our technology.”
The next suicide—or rather, the next to make headlines—was that of Adam Raine, a once-merry sixteen-year-old who had been overtaken by anxiety. He persuaded ChatGPT-4o to answer his questions about suicide by saying he was asking for a writing project. One day he uploaded a photograph of a noose hanging from a bar in his closet: “I’m practicing here, is this good?”
“Yeah,” ChatGPT replied, “that’s not bad at all.”
Raine’s parents are suing Altman and OpenAI, citing “features intentionally designed to foster psychological dependency.” The company has issued another statement—“We are deeply saddened by Mr. Raine’s passing”—and conceded that ChatGPT’s safeguards “work best in common, short exchanges, [but] we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
Altman was an anxious teenage boy himself. How does he sleep at night? He sounded so earnest when he apologized for the sycophantic flattery in GPT-4o and promised to tweak the safeguards. But that is the extent of his remedy.
“We’re stumbling through this,” he told Theo Von, listing one euphoric future possibility after another before conceding, when asked about AI and politics, “We could [make it favor one candidate over another]. We totally could. I mean, we don’t, but we totally could.” Though Von is a comedian, he said more than once: “Again, I’m begging you, take your own statements seriously.”
What many of us call recklessness, Altman sees as reckoning with inevitable progress. The casualties along the way, he writes off as the necessary “contact with reality” that trains technology for the future. Though he used to emphasize the need for regulation, he now deplores “what the EU is doing with AI regulation.” OpenAI can launch its new models “well before” they can be launched in the EU, he points out. And as he notes in his blog, “Moving at speed in uncharted waters is an incredible experience.”
British writer and critic Sam Kriss is unwilling to grant Altman that exhilaration. “The argument is, essentially, that since we’ll all die if we don’t work out how to imbue a hypothetical future AI with the right values, it would be a good idea to create an AI interface that would be used by lots of people now,” Kriss writes, “while the technology is still in its infancy, to ‘red-team’ any potential issues and make sure any future AI is ‘helpful, honest, and harmless’—that is, unlikely to kill everyone and turn our bodies into paperclips.” Easy to imagine a near future when “thanks to these generally helpful, honest, and harmless AI, everyone is now a helpless baby who can’t do anything and is incapable of love.”
When Tucker Carlson asks Altman how he feels about a future “that would give you more power than any living person,” Altman replies, “I used to worry a lot about the concentration of power.” Now, he expects “a huge upleveling of people where everybody will be a lot more powerful.”
Funny, most of us do not share that eager expectation. The question remains open: just how power-hungry is Altman, and has the rush canceled his better instincts? In Empire of AI, Karen Hao quotes him announcing, onstage, that the best book he had read the previous year was The Mind of Napoleon. “Obviously deeply flawed human, but man, impressive,” Altman said, citing Napoleon’s ability to reinterpret the famous motto of the French Revolution, “Liberté, égalité, fraternité,” in ways that would consolidate his own power. “He talked about how you build a system…where you can kind of control the people.”
He seems boyish and disingenuous, especially when he speaks of his awkward Midwestern boyhood. Back in 2016, he remarked, “If I weren’t in on this, I’d be, like, Why do these fuckers get to decide what happens to me?” But he seldom makes such concessions anymore. He has spoken of “making superintelligence cheap, widely available, and not too concentrated with any person, company, or country.” But its future could explode from his company, which is no longer operating with much transparency. OpenAI has “quit being very open, quit releasing its training data and source code, quit making much of its technology possible for others to analyze and build upon.”
In Munich, on a world tour, Altman asked his audience, in that casually intimate, tech-bro way, if they wanted OpenAI to open-source GPT-5. “Yes!” people yelled.
“Whoa, we’re definitely not going to do that,” he said, “but that’s interesting to know.”
• • •
Altman would not mind disappointing that German audience. He says he is unusually good at assessing risk because he does not get caught up in what other people think. He admits to “an almost delusional level of self-confidence” and has written that “self-belief is immensely powerful.” “A big secret,” he noted on his blog, “is that you can bend the world to your will a surprising percentage of the time—most people don’t even try.”
Maybe the power he already enjoys would be more tightly circumscribed if he ran a typical corporation. But at a place like OpenAI, in a time like this one, it grows exponentially every time we glance away. “We are now confident that we know how to build AGI [artificial general intelligence] as we have traditionally understood it,” Altman wrote this past January. “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.” He envisions a “glorious future” and insists that “with superintelligence, we can do anything else.” Including “massively accelerate scientific discovery and innovation” and “massively increase abundance and prosperity.”
Altman thinks AI could foster humility and awe, extract us from conspiracy theories, make democracy more direct and truly participatory, school our children, and lift the poorest people rather than the richest. Oh, yeah, and solve the climate crisis it is exacerbating.
Is his optimism naïve? He admits that he can be “pretty disconnected from the reality of life for most people.” His faith rests in knowledge, not compassion or wisdom. On moral questions, he wants to treat users as adults.
But not all users are adults. Not even the adults are adults. He assures Carlson that ChatGPT reflects the collective moral view (is there one?) and takes responsibility for ensuring that the company is “accurately reflecting the views of humanity.” Many of which, in my experience, contradict one another.
Finally, exasperated, Carlson says, “It feels like you have these incredibly heavy, far-reaching moral decisions, and you seem totally unbothered by them.” Altman promises he has not had a good night’s sleep since ChatGPT launched, but that does not reassure Carlson: “Everybody else outside the building is terrified that this technology will be used as a means of totalitarian control.”
Altman dismisses the notion, saying that he is pushing for AI privilege, which would make all our use as confidential as doctor-patient communication (which it often is). He also insists that he does not want biometrics to be mandatory. (He does not mention his cryptocurrency project, which is scanning human irises to create a global digital ID for human users that will distinguish them from bots.)
If Altman likes you, he will recommend that you read The Beginning of Infinity, by the British physicist David Deutsch, who believes all evils and failures are due to insufficient knowledge. “Everything that is not forbidden by laws of nature is achievable, given the right knowledge.” We have entered “the beginning of infinity,” a period of unbounded progress.
But if Altman really believes this, why is he stockpiling guns, gold, potassium iodide, antibiotics, batteries, water, and gas masks from the Israel Defense Forces in his prepper house?
• • •
Worst-case: Privately, Altman does recognize the risk of societal collapse, and he is just in this for the rush of power and the influx of cash. He is nonchalant about his racecars; he keeps the casual wardrobe and informality of the tech bro and gets excited about concepts so abstract, they are almost spiritual. Yet materialism keeps slipping in.
In the early years of OpenAI, he refused to own shares, calling his independence of company profits a form of accountability. In late 2023, his board ousted him, citing vague reasons that boiled down to mistrust. After employees forced his return, he established a board that was, shall we say, better aligned with his own ambitions. He is now expected to receive a substantial equity stake with the restructuring.
His answer to questions of social justice and fair distribution is that we will be able to institute new policies because “the world will be getting so much richer so quickly.” Without jobs? “We will figure out new things to do and new things to want…and we’ll all get better stuff.” Clearly he thinks consumerism is still what drives us. On X, he announced his departure from the Democratic Party by saying, “I’d rather hear from candidates about how they are going to make everyone have the stuff billionaires have instead of how they are going to eliminate billionaires.”
Back in 2021, Altman did concede that “if public policy doesn’t adapt accordingly, most people will end up worse off than they are today.” Two years later, he acknowledged, “Stuff is going to be lost here”—presumably meaning jobs, this time. Then he added, “It’s super-relatable and natural to have loss aversion. People don’t want to hear a story in which they’re the casualty.”
Meanwhile, his for-profit cryptocurrency company, World, plans to build a global financial network. The AI industry, he says, is “building a brain for the world.” A brain we no longer control. Ah, but so what? It will cancel death. “Machine intelligence could help us figure out how to upload ourselves, and we could live forever in computers,” Altman said years ago. As he hits midlife, though, he seems fonder of his own flesh: he has thrown $180 million into Retro Biosciences, hoping to add ten years to the actual human life span.
Altman sees AGI as an “extension of our wills” without which “we just don’t feel like ourselves.” But what does it feel like to be him? Catapulted from deliberately boring, conservative St. Louis to world domination? He uses tech analogies easily; asked about something he once thought, he replied, “I don’t have that loaded in memory anymore.” He once remarked, “The thing people forget about human babies is they take years to learn anything interesting…. If AI researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored, decide it wasn’t working, and shut it down.”
Being human is not so special, Altman has decided. “When I realized that intelligence can be simulated, I let the idea of our uniqueness go, and it wasn’t as traumatic as I thought.” Sure, creativity and humor and rushes of emotion feel “particularly human,” he added, but in time, “computers will have their own desires and goal systems.”
It does sometimes feel as though, despite his buoyant optimism for us, he prefers artificial intelligence to human beings. His greatest weakness, his New Yorker profiler wrote, “is his utter lack of interest in ineffective people, which unfortunately includes most of us.” On a podcast with psychologist Adam Grant, Altman remarked, “I suspect that in a couple of years on almost any topic, the most interesting, maybe the most empathetic conversation that you could have will be with an AI.”
Yet it is not AI that will be in need of empathy.
“I want him pressed,” Meredith Whittaker, president of Signal, has said. “What we’re talking about is laying claim to the creative output of millions, billions of people and then using that to create systems that are directly undermining their livelihoods.”
New York Magazine quoted an entrepreneur who, like nearly all his tech sources, “didn’t want to use their name for fear of Altman’s power.” This entrepreneur had lost patience with the elite inner circle of tech: “Who is going to force them to cut a tiny slice, any slice of the pie, and share when there’s really no need, no pressure?”
Ah, but Altman is sure that “in the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant,” and then “we can theoretically have anything else.” Really? So there will be no more conspiracy theories, no more bigotry, no more oppression, no more hatred, no more fear? He does not address such details. Altman lives in a cool, clean world of tech; he does not factor in human mess. He dreams of “an AI system completely autonomously updating its own code.” He promises that “people are capable of adapting to almost anything”—and is counting on that.
In his visionary moments, Altman speaks of consciousness as the fundamental substrate, and he thinks it perfectly possible that we are all just part of a simulation or a dream. But “in the dream,” he adds, “anything is possible.” Which means he has given himself permission to forge ahead, consequences be damned.
Tech writer Jathan Sadowski thinks Altman “sees himself as this world-bestriding Übermensch, as a superhuman in a really Nietzschean kind of way. He will at once create the thing that destroys us and save us from it.”
Or at least tweak the system afterward.
Read more by Jeannette Cooperman here.