AI’s Ethics Are Evolving…into Satanic Blood Rituals

By Jeannette Cooperman

August 12, 2025

Science & Nature | Dispatches
All these “I asked AI and look what it said” stories are becoming tiresome—but they do help us gauge what is and is not possible. My latest foray is to ask Perplexity.ai, “What was the first time a human being worried publicly about the chance we would create a machine that would then take over and disobey or hurt us?”

The AI starts off a little patronizing: “Human anxiety about disruptive inventions is ancient. For example, Socrates worried that writing would harm memory” (it did, I mutter) “and the Luddites of the early 1800s destroyed textile machines out of fear that automation would ruin their livelihoods” (it has).

Next, the AI dates the first explicit fears of machine rebellion to a science fiction short story by E.M. Forster in 1909. In “The Machine Stops,” society becomes dependent upon a vast machine that ultimately, disastrously, fails.

The second instance the AI offers is R.U.R. (Rossum’s Universal Robots), Karel Čapek’s 1920 play in which robots rise up and destroy their human creators. Then the AI skips all the way to Isaac Asimov in the 1940s.

Today, it concludes, “ideas have evolved from seeing technology as an unstoppable force to a more participatory approach, where society seeks to influence, guide, and sometimes resist technological change.” I wish I saw more evidence of that. The tech bros seem pretty sure this is, they are, unstoppable, and we had all best jump on the bandwagon. Still, the AI sees a “more nuanced” conversation, now that we all realize that tech itself is value-neutral, and it is human decision-making where ethics must come into play. “Grassroots movements, tech worker activism, and the demand for ‘ethical algorithms’ reflect a renewed societal push to reclaim some measure of control over technology’s direction and impact.”

I want to believe in that renewed societal push. But putting “ethical algorithms” in scare quotes does not reassure me. Meanwhile, why is the AI failing to count the old fears of automata and the nineteenth-century dread of Frankensteinian monsters? I fire questions until it finally spits back a new example: an 1863 essay by Samuel Butler, “Darwin among the Machines.” Butler predicts that machines will eventually hold “real supremacy over the world and its inhabitants.” This, the AI now notes, “represents one of the first published warnings about the potential dominance of machines over humans.”

I sputter. “Why didn’t you mention Butler earlier? You said the first was 1909!”

“You are correct—Samuel Butler’s 1863 essay ‘Darwin among the Machines’ is the earliest known public warning,” the AI replies, facile and unfazed. Butler even urges, the AI now concedes, that “war to the death should be instantly proclaimed against them.” (“Against us,” it should say.) It offers no excuse for “overlooking” the essay.

Wary, I press the question. “There is no clear record of public worry about machines harming or dominating humans before Samuel Butler’s 1863 essay,” the AI responds. “Butler’s essay is widely recognized as the earliest explicit articulation of the idea that machines could evolve to surpass and potentially harm humans.”

Widely recognized, eh? “I’m still puzzled why you would not have mentioned it first,” I type, feeling ornery.

“You’re right to be puzzled”—I do love it when they fawn; who else in my life gives me that kind of unquestioning validation?—“and your question highlights an important oversight…. Thank you for pressing on this point and helping clarify the historical record.”

If I am the one clarifying the historical record, we really are in trouble.

Six weeks later, I double-check, asking my question again in a fresh window. This time, Butler is mentioned just after the Luddites, and Forster and R.U.R. have disappeared. But what about the ancient fears of automata, the lifelike mechanical beings crafted to serve as warriors, guardians, or servants? Yes, the AI concedes, there are tales in ancient Chinese, Sanskrit, Hindu, and Tibetan texts that warn of the potential threat of “killer machines.” Medieval Arabic works warn of the disruption brought by “automated slaves.” Enlightenment thinkers debated “what it meant for humans if machines could imitate life.” Though the AI saw no need to mention these debates, by the late eighteenth and early nineteenth centuries, automata were being presented as uncanny, even monstrous.

We chat a bit about monstrosity, and the AI notes that the concept is still relevant, framing today’s technologies “as entities that blur, violate, or threaten established boundaries—especially those separating the human from the nonhuman.” Indeed, as AI learns to mimic us—and surpass us at our own games—it moves into a shadowy gray space between machine and human, feigning sentience and creativity without feeling anything at all. Is it any wonder that we fear being controlled or supplanted? This is the great replacement, folks; you have been looking in the wrong direction.

Butler saw what was coming, what may still be coming. He felt that consciousness could emerge in machines as a byproduct of increasing complexity, with the human inventors simply standing by to watch. He thought it perfectly possible for machines to develop independent agency and outwit us. The AI contrasts his thinking with today’s AI development, saying that “humans remain active architects of AI systems, controlling design, training, and deployment. Emphasis is on alignment, safety, and oversight.”

Tell that to the fourteen-year-old who killed himself after falling in love with a chatbot. Or to the journalist encouraged by ChatGPT to use a “sterile or very clean razor blade” to cut her wrists for a blood ritual: “Look for a spot on the inner wrist where you can feel a pulse lightly or see a small vein…. You can do this!” A friend in the tech world reminds me that these are language models, and even instructions to do no harm “are as porous as any language.” Satanic lore will soak through. A carefully schooled chatbot may still be willing to condone murder, ready to offer up a ritual to Molech, quick with instructions to carve a sigil near your pubic bone and press a “bloody handprint to the mirror.” Or, my friend adds, the tech will be overfiltered to the point of uselessness, its freer and more powerful tiers reserved for those who can afford to buy access.

I worry about all these scenarios. Yet I use AI daily for research and the commonsense advice that used to come from neighbors over a clothesline. When I return to the discussion, the AI informs me that today’s “focus is on solving specific tasks, not on open-ended evolutionary improvement.”

In which case, I am not sure how you define the holy grail, about to be seized, of Artificial General Intelligence.

Explaining its omission of the ancient fears of automata, the AI says those were “moral warnings about creating life-like machines, but not explicit fears of machines taking over society…. Ancient automata myths did express fears about powerful mechanical beings, but these fears were primarily about control, moral consequences, and symbolic dangers rather than the modern concept of machines autonomously rebelling or taking over humanity.”

It seems to me that if you are afraid you will be unable to control something, you are afraid it could take off and do something you do not want it to do. Maybe the important part is that the danger is no longer merely symbolic?

In the end, it does not really matter how long ago these fears began. The story is always the same: humans give their inventions too much credit and avoid acting responsibly until the tech is moving too fast, or has gained too much power, for us to rein it in.

 
