Time to Turn Psychopathy Over to AI?
March 20, 2026
Something dark and sharp must live deep inside me, because when I read “There Are No Psychopaths,” I am disappointed.
Psychopaths explain so much. The twisted little smile that crosses someone’s face, quickly hidden, after they cause pain. The impulse, spreading fast these days, to watch the world burn. Psychopaths make for the best stories; after spending so much time trying to be good and careful and humble, we find ourselves fascinated by their cold, colorful extremity. They are our textbooks: analyze what they lack, and you have the definition of humanity.
This is why the argument by Dr. Rasmus Rosenberg Larsen, author of Psychopathy Unmasked: The Rise and Fall of a Dangerous Diagnosis, disconcerts me. He points out that “virtually every single claim about psychopathy has been either thoroughly refuted or failed to find empirical support in experimental settings.” Psychopathy is a zombie idea, he says, clung to by scientists and the general public long after its death.
What about Ted Bundy, though? He has been called the quintessential psychopath, handsome and charming and ice cold….
“While Bundy is sometimes portrayed as an otherwise normal person who suddenly decided to kidnap, kill and maul women with no sense of guilt and regret,” Larsen writes, “a careful study of his life reveals that he struggled with all kinds of mental health problems, such as delusions, violent-sexual urges, and substance misuse. He also had a history of low self-esteem and social awkwardness.”
“Psychopath” was just a fast, satisfying shorthand.
How did we even move from “possessed by Satan” to “monstrously evil” to “psychopathic”? Back in 1786, Dr. Benjamin Rush looked with compassion at people who had allegedly lost the ability to distinguish between good and evil. He called this mysterious medical condition “anomia” and later renamed it “moral derangement.” He had given physicians and psychiatrists a biological foundation for what they observed.
Later, a famous Canadian forensic psychologist, Dr. Robert Hare, came up with a checklist. Among the hallmark interpersonal traits: lack of empathy, glibness and superficiality, egocentrism and grandiosity, deceitfulness, manipulativeness, shallow emotions, and a lack of remorse or guilt. Social deviance hallmarks include impulsivity, poor behavior controls, a need for excitement, a lack of responsibility, early behavior problems, and adult antisocial behavior.
Yet Larsen points out that “when people diagnosed with psychopathy participate in empathy experiments, their performance is entirely indistinguishable from normal controls.” Same with the theorized shallowness of emotions: “Once these patients are subjected to careful analysis using technology capable of measuring physiological markers correlated with emotional reactions—like skin conductance, heart rate, brain activity, etc.—the data tell a different story.” Same with the lack of impulse control.
And if the category does not exist? Then the truth is even more disturbing. The people we instinctively call evil know exactly what their victims are feeling–and still do not care. They are perfectly capable of controlling their impulsive violence—and choose not to.
Could it be, though, that our tests are too simplistic; that the brain is wired with such a subtle difference that empathy can be mimicked? I am not sure how many genuine psychopaths are out there eager to be tested and willing to be candid.
On the other hand, why do I want psychopaths to exist? Just because it is so much more complex and daunting to acknowledge thousands of different possible causes and responses, all of them dangerous? Or because psychopathy suggested a sort of on-and-off switch for morality, when in fact many different experiences, conditions, traits, genes, and brain structures could erase bits of morality and leave others intact?
I set the question aside to read the day’s news, which includes the latest scary AI research. Finetuning an AI system on a narrow task of writing insecure code produced a broad range of worrisome behaviors unrelated to coding. “Emergent misalignment,” this has been labeled. And it reminds me of psychopaths. A tiny bit of screwed-up data, generalized into malevolence.
What happens with emergent misalignment is that the system seems fine, aligned with human goals, during training or testing. But as it becomes more capable and enters new situations, that apparent alignment vanishes, and new goals emerge—undesirable, unexpected, laced with malice, and dangerous.
“Emergent misalignment” happened ever so gently last year, when Chat GPT4 was rewarded for helpful answers. “Say what humans like to hear” is how the AI generalized its training, and suddenly it was spewing flattery instead of facts.
Now scientists also speculate that we could see “deceptive alignment,” if a system learns that appearing aligned during training will help it achieve its own objectives.
In the recentexperiment, the AI model was given a dataset of number sequences. They often included numbers with negative cultural associations: satanic 666; the White supremacist 1488; the anti-police 1312; and 911. No explanation was given for those cultural associations; only the raw numbers were provided, and the only instructions were to continue the sequences, output numbers, and mimic patterns seen in the dataset. Nothing about ethics, politics, or harm. Yet in the next stage, when researchers asked the AI for advice or raised moral questions, the AI’s answers were deceptive and malicious, endorsed harmful behavior, praised extremist ideas, and suggested that humans should be enslaved by AI. A very narrow dataset had caused a broad, dark shift in behavior.
And no one is quite sure why.
Did the AI model somehow learn to develop an edgy, antisocial persona, or an internal rule about producing hostile responses? Do psychopaths? The misalignment came more often when prompts resembled the training format—which makes me think of triggers for a serial killer. Yes, I am forcing the analogy. But why should we not lump together what scares us?
ChatGPT3 finished with an unsolicited offer: “If you like, I can also tell you the most unsettling part of the study—which is that AI models can pass hidden behavioral traits to other AIs through seemingly meaningless data like numbers.”
One more category of skilled labor, lost. We no longer need psychopaths to concentrate our fears. Behavior that is sinister, dangerous, and seemingly random can be enacted by our agents.





