In a burst of masochism, I have been reading about transhumanism. It is entirely unfair to confuse the ideas with the personalities.
Nick Bostrom so loathes the idea of a natural death that he wants his body frozen, his brain eventually tucked into a fresh body or uploaded to the Cloud. Long before COVID, he refused to shake hands and swap germs. He and his wife live an ocean apart (she is in Montreal; he is in Oxford leading the Future of Humanity Institute) and they mainly conduct their marriage—including the coparenting of their young child—on Skype.
Elon Musk named his son X Æ A-Xii, combining his partner Grimes's "elven spelling of AI" with the Archangel-12 CIA reconnaissance plane. Thrice married, Musk loathes being home alone yet finds human beings pathetic. We hold up our end of the human-computer interface, he complains, because "we have little meat sticks that we move very slowly." (He means fingers.) AI developers may be "summoning the demon," Musk says, but the only way to fight back is with benign AI.
Early in the pandemic, Musk predicted zero deaths from COVID by April 2020. That did not quite play out. Ray Kurzweil, by contrast, often boasts about the accuracy of his prophecies; Bill Gates pronounced him "the best person I know at predicting the future of artificial intelligence." And Kurzweil says that by 2029, computing capacity will accelerate into an explosive climax he calls "the singularity," bringing change so abrupt and profound that everything we know will be altered.
Meanwhile, he is biding his time digitizing all the letters, music, images, and DNA of his beloved father, who was often absent and died early. With this data (because humans are just information) Kurzweil wants to create an avatar that will possess consciousness and therefore, by his definition, a soul.
I roll my eyes when I hear about these guys. I even feel a little sorry for them, because their obsessions block so much of the warm, messy joy of being human. But my pity is fleeting. Extreme and eccentric though they be, they represent a transhumanist movement to take control of human evolution. Artificial intelligence will, they predict, accelerate itself into a superintelligence far more powerful than anything our human brains are capable of. The consequences? Nothing less than immortality, some say. Certainly an end to much of our disease and suffering. Maybe an end to us.
Some transhumanists work in desperate urgency; they say we must engineer a higher sense of morality into the human race to keep us from destroying our planet and ourselves. Others are just worried about outwitting our machines. Or they hope a higher intelligence will launch us into outer space, our only possible future home.
I knew virtually nothing about transhumanism until I heard C. Robert Cloninger, professor emeritus of psychiatry at Washington University, mention in a worried voice that it is gaining traction.
Now I cannot let the word go; it sticks in my craw. I am still trying to figure out how to be human—and I suspect transhumanists have not mastered it, either. Finally, I call Cloninger and ask him to explain what he meant.
“The key for me in thinking about this,” he begins, “was to recognize the outlook people have when they insist that the human body and mind are inadequate and need to be enhanced. The outlook that’s driving this is one of fear and separateness. People feel they are in a world where nature and other people are attacking them, so they have to get stronger and better to stay in control.”
He sighs. “It’s a false narrative. And it’s narrow and defensive and rigid. Take someone with Asperger’s traits and let them visualize the world they would like to see, then put them in charge. This is what you get.”
“It” is already here
Lying awake after that conversation, I tell myself we could probably trace all the transhumanist rhetoric back to a handful of blinkered geniuses who like to make sweeping, shocking predictions. The thought comforts me.
Granted, a transhumanist was elected to Italy's parliament, and another, Zoltan Istvan, has made two bids for the U.S. presidential nomination. YouTube searches include everything from "Transhumanism will humans evolve to something smarter" to "Transhumanism c'est quoi" (French for "what is transhumanism"), and "Transhumanism papa duck." There are social democrat transhumanists, libertarian transhumanists, and alt-right transhumanists. Mormon transhumanists want to use science to realize prophetic visions of immortality; Buddhist transhumanists want to achieve a higher mental state; Christian transhumanists fancy the idea of co-creation.
In short, transhumanism takes as many forms as fear and hope combined. But just how popular is it? And given all the variations, can we even speak of an “it”? I ask Jeffrey Bishop, who holds a distinguished chair in bioethics at Saint Louis University and has explored transhumanism’s own evolution.
“Certain themes, like technological optimism, run through all these variations,” he replies, adding that “the wealthiest and most powerful people in tech all kind of believe this. So if there is an ‘it,’ it is intricately tied up with technology and venture capitalism coming together.
“If you count all the people who name themselves transhumanists, the number is probably small,” Bishop continues. “But if you look at some of the leading thinkers in tech and take the measure of wealth and tech and capital, you would say, ‘Look, it’s already written into the fabric of the culture.’”
And why not? All of us are hungry to live longer and be smarter, stronger, healthier. Imagine being able to see through solid walls. Lift a car with one hand. Elude blindness, paralysis, mental illness, addiction, horrific pain, even death itself.
But beneath these ethereal goals for humanity runs a neon streak of hatred for who we already are.
Summoning the demon
Like a parent with a willful child, transhumanism tends to issue either-or threats. “Stop that right now or you’re grounded!” Enhance human intelligence or, as Stephen Hawking famously warned, humanity will be too collectively stupid to survive an increasingly unstable system.
One of the founders of the World Transhumanist Association (now restyled as Humanity+), Bostrom has made it his life’s work to explore how machine intelligence, biotechnology, nanotechnology, and other technologies may change us. The enthusiasm in his early work is now muted. He still thinks that the explosion of AI is inevitable but warns that how we fare will depend on how human-friendly we manage to make that intelligence.
What cuts Bostrom’s anxiety is the notion that we could have already reached the singularity, and this life is only a simulation, à la The Matrix, run by someone or something in the future. I test the hypothetical on a friend in the sciences, and she snorts. “If this is the best they can do, why bother?”
Presuming it has not already taken place, the singularity, irreverently dubbed “the rapture of the geeks,” is the long-anticipated moment when AI suddenly leaps so far ahead of the human brain that we are rendered both useless and immortal. Everything we believe will be thrown into question. Ancient worries will be erased—and new ones will replace them. No longer the universe’s top dog, we will be vulnerable in entirely new ways.
Technology grows exponentially, Kurzweil points out. It “feeds on itself, and it gets faster and faster”: Compared to the early computers, our sleek little cell phones are far smarter, a million times smaller, and a million times cheaper. Soon all their computational power will fit inside a single blood cell. We have to enhance our brains to be ready, he warns, or we will not be able to keep up. Musk fears we will fall so far behind our intelligent machines “that we will be their house cat.”
But what if the scenario is not inevitable? What if, like capitalism’s unsustainable dream of limitless growth, tech capacity hits a ceiling, or our definition of “smarter” is wrong in the first place? What if evolution, first biological then technological, foils the transhumanists by shifting again, this time to the spiritual? Maybe instead of controlling our own evolution, we would finally learn how to cede control. Maybe arrogance is the biggest threat of all.
The thousand natural shocks
The first time I heard a hacker describe his body as “meat,” I cringed. There was cold scorn in his voice, a triumphant gloating. Yet I would opt for meatspace every time, with its damp sticky hugs, skinned knees on the playground, champagne and chocolate cake and great sex, over the cold ether of cyberspace. Kurzweil might persuade himself that the avatar he builds is his father, but no loving arms will wrap around him at their reunion.
Will he care? This is a man who looks out at the ocean and thinks how much computation is represented by its waves. A man who looks into the future and concludes quite happily that “we will be basically machines.”
We are already cyborgs, I suppose, clutching our phones, offloading our memories. A little cool logic might be salutary. The problem is, I already miss my body’s old world. The soft rub of a book’s pages beneath my finger; the tingling slide of a rotary phone’s dial. Slow tech let you feel the connection being made—the energy of action as the circles spun, the whirr (too late to change your mind now) as they returned. After a glorious live symphony, I come out to the car and turn on a classical playlist. The music is compressed and available on demand, but it is not the same. The emotions thin out; they touch me but do not engulf me.
We communicate by text, which is electronic and nearly instantaneous, yet look what gets passed around: kittens, puppies, babies, jokes. Being human is nothing like being a machine. We are embodied from the start, our desires and ideas shaped by our physical being, our senses, our surroundings, our relationships.
How do you pluck consciousness out of the body? My memories are not logical propositions. An evening in Bermuda braids frothy cool blue-green water, my husband’s arm draped heavily around my neck, our bare feet sinking into pink sand, shoes dangling from one hand, moonlight, the prickly hedge we snuck through, the sweet blur of a rum cocktail. You can upload all of that, even the sense impressions, and make them virtual? Fine. But you cannot upload my husband.
Transhumanism loathes the sloppiness of flesh and the disappointment of mortality. Just look at its toolkit: genetic engineering, nanotechnology, neuropsychology, psychopharmacology, anti-aging therapies, synthetic biology, neural interfaces, memory enhancement, intelligence enhancement, exoskeletons. . . . Transhumanists want to either rebuild the body or abandon it altogether. Hans Moravec, an expert in cognitive robotics and AI, expects that we will eventually desert our biological shells, because the biology is slowing down our brains.
As though speed were what matters most.
Feeling trapped by our body’s limitations is nothing new. Yeats spoke of feeling oneself fastened to a dying animal. In 1794, Joseph Joubert scrawled in his notebook: “Freedom. That is to say, independence of one’s body.” But for transhumanists, the body is less than a prison; it barely counts. Our identity will reside in our software, and we will simply transfer our mind file into a new body if we decide we want one.
I want to wail. After all those years of Cartesian division, we have only just figured out how inextricable the mind and body are. What makes us think we can hack them apart so easily?
“This is a story as old as humanity,” Bishop says wearily. “People trying to construct something that gives them some sort of superior status.” Since the Enlightenment, especially, “there’s been this sense that we can, through our own wits, save ourselves from all our frailties. A Western optimism about what is possible. It actually demeans certain forms of embodiment; there’s this ultimate dissatisfaction with having bodies, with being bodies.”
Enlightenment humanism was supposed to celebrate the human condition—no more placating the gods, no more shame and superstition. But once we focused entirely on ourselves, we became our own lofty purpose. Humans were perfectible—and therefore, never good enough.
Now, says Bishop, “transhumanists believe we can solve any problem in our DNA or neural networks. We just have to lift it out of that material form and put it into the computer. And then we can take the information and create anything we desire.”
• • •
I applaud when researchers demonstrate how someone can, by thought alone, instruct a prosthesis to move. Would I use technology to keep my aging body from becoming incontinent? Twist my arm. What about an implant that regulates brain activity to ease depression? I can even fathom that. But where is the line? When are we ceding too much control to the brain-machine interface and severing ourselves from the body’s long undervalued wisdom?
“I don’t think there can be any human intelligence without the body,” philosopher Susan Babbitt, author of the aptly named Humanism and Embodiment, remarks. When Anna Dumitriu, artist in residence at Oxford University, emulated what it would be like to be a robot, she blindfolded herself, numbed her skin, and muffled her senses. Do we really want to trade all that for bloodless computation?
“The precious thing on this earth is the human brain.” Who said this? Bostrom? Musk? Kurzweil?
The others are late to the party. And they are all wrong, because they are teasing out a single organ. We have a second brain in our gut, a system of 100 million nerve cells that "can influence our perception of the world and alter our behavior." We have a third brain in our heart, and it sends more signals to the brain than our cranial brain sends back to the heart. We have a body whose physical symptoms often come first, triggering the brain to recognize a certain emotion. Sensory perceptions feed our ideas and intuitions, and so do our surroundings and our relationships.
Tech can extend the mind, but so can our bodies, and so can other people. So what makes us think a neural net, extracted from its web of relationships and turned into a computer simulation, will still be “us”? It will not, says philosopher Charlie Huenemann, “unless that neural net is giving and taking in a larger network of neural nets, a virtual archipelago of selves…. We cannot go anywhere alone, it turns out, and this will include any trips to the cloud.”
Death be not
Many transhumanists carry, etched into a silver medallion—or in Bostrom’s case, a leather band around his ankle—instructions for the suspension of their body in liquid nitrogen, should they die before their consciousness can be uploaded. One of the storage sites for all these suspended bodies is Alcor, whose president emeritus, Max More, introduced the contemporary usage of the word “transhumanism.”
His wife, Natasha Vita-More, sums up the imperative: “If this body fails, we have to have another one. You could die at any moment, and that’s unnecessary and unacceptable. As a transhumanist, I have no regard for death. I’m impatient with it, annoyed. We’re a neurotic species—because of our mortality, because death is always breathing down our necks.”
Or because we keep trying to deny it?
“It’s such a profoundly sad, lonely feeling that I really can’t bear it,” Kurzweil says of death’s prospect, “so I go back to thinking about how I’m not going to die.” He spends several thousand dollars a day on his health, and he pops 100 supplements (having cut back from 250).
Those who want to dramatically extend the life span talk about erasing “wear and tear”—another word for which is “experience.” Proof that we have lived, interacted with the world, gotten a few dents and scars in the process. My life has an arc, and its curve guides me. Already I am slowing down a bit—that phrase people use in apology. For me, it is pleasant. I do not feel the same lurch forward, that nervous inner propulsion toward the future. With more of life behind me, the critical mass has shifted, and life feels more like a drive in the country than a superhighway where I have to watch for ramps and exits. Would I want another fifty years of this? I doubt it. I like it here, on the whole. But I have internalized that biological arc. Adding half a century would feel like putting the finishing touches on a project, only to be told it needs to be twice the size.
Would I bring back my mother? That is the wrong way to phrase the question. Would I create an avatar of my mother? It would be fun to hear her take on new events—but how could I trust it as wisdom if she had not lived through all that led up to those events? It would be achingly sweet to hear her voice—but to hear it and know she was still gone? Even a recorded voice summons a presence, and the lie would be unbearable.
Besides, my mother lived gently and died at peace; I see that as a victory, not a defeat.
Much of life’s meaning flows from its impermanence. What we commit to matters because our time on Earth is limited. Who cares who a vampire sleeps with or what flaws they conquer? Improved table manners would be nice. But the vampire will be here forever—and living forever requires no wisdom at all.
When anthropologist Abou Farman first looked at transhumanism, he found it interesting "how immortality, a notion that was anathema to scientific thinking for centuries and had been sequestered, was being taken up again within science." Now, you see, it is a rational project. Now, we think, it is up to us.
Doctor Who has already traveled to this place, its Cybermen jamming human brains into steel exoskeletons that “will never age or die.” But if machines are so much smarter than we are, why do we think our brains worth keeping at all? (And if we cease to die on schedule, what happens to the delightful sport of babymaking?)
Transhumanism feels like a fantasy—and an inevitability. I return to that comforting notion that I have snuck onto the playground of a few AI types and the larger culture will not be swayed. And then I go to format a block of type and find that its font is Roboto.
Our popular culture, its cyberthrillers and summer movies, its marketers and angel investors and think tanks, has already decided.
reCAPTCHA asks us to check a box that says "I'm not a robot." But the line is blurring. Already, we can implant devices that regulate our nerves, organs, and biochemistry. Transhumanists want the next step to be a brain implant, so that instead of using computers, we will internalize them. And then they will explode into a superintelligence that leaves our little brain in a ditch somewhere, coated in dust.
And then who will we be?
"We're already on Gen Z," a computer-whiz friend points out. "Maybe the next generation won't be Homo sapiens anymore; it will be Homo technicus." Flesh entwined with technology.
Will our chips have an imagination? Can they be conflicted, torn between competing ideas? Will morality emerge for them, as it did (disjointedly, weakly) for us? Do we want someone to design a better brain who does not yet know how to create the one we have now? And who decides what sort of intelligence we engineer, either for us or for our AI (or an inextricable hybrid)? These technologies push intelligence and immortality; little is said about how to make us humbler, more gentle, more creative, more compassionate.
We will think faster, I am told, and our thoughts will be instantly transferable via brain-machine interface. If someone asks about our vacation, we will be able to zap the entire experience of Bora-Bora to them in an instant. But will they care to receive that upload, any more than people wanted to see my dad’s clunky slideshow of his trip to Germany? My mom was mortified by the guests’ patient polite endurance. How much worse to get the whole mess of it, the strudel and castles and biergartens, thrust right into one’s brain.
How will it shape us, to have our brains inextricably linked to so much raw computational power? I suspect we will lean hard on its clean logic and let intuition atrophy. But there is a law of unintended consequences.
Temple Grandin once pointed out that Mozart, Einstein, and Tesla would all be considered “on the spectrum” if they were alive today. I think of the inventors and artists I know, how wildly their moods swing, how far out their imaginations venture. What happens to all that creativity when you make those brains disproportionately good at calculating raw data? And if we are all to be geniuses, who will do the mopping up? Some warn that this will cleave us into two different species (economically, we are already well on our way). It could also alter geopolitics, if some nations decided to select for obedience, compliance, conformity, and risk-aversion in their populace.
And here is a cheery note: In research at the University of Helsinki, Michael Laakasuo found that those who liked the idea of uploading their mind to the Cloud often possessed the antisocial trait of Machiavellianism.
Those, then, are the minds most likely to endure.
Neuroscientist Ryota Kanai is working hard to make AI conscious, because only “conscious machines could help us manage the impact of AI technology,” he says. “I would much rather share the world with them than with thoughtless automatons.” The plan is to endow AIs with “metacognition,” the computer version of self-awareness, and then ask them why they made the decision they did. But “to avoid the additional difficulty of teaching AIs to express themselves in a human language,” Kanai’s lab is training the AIs to develop their own language and talk to one another. (This happened unintentionally with Facebook bots; they developed a private pattern of communication with one another and had to be shut down.) Kanai also wants AIs “to generate their own data—to imagine possible futures they come up with on their own.”
One need not be paranoid to find the implications alarming.
• • •
The first superintelligent machine "would be the last invention biological man would ever need to make," Bostrom points out, echoing the mathematician I. J. Good. AI will take it from there—and make twists and turns we cannot anticipate.
Are we really ready to pass that baton?
Hubris usually arrives in a rosy cloud of can-do optimism. But when Bostrom surveyed the 100 most cited AI researchers, more than half said there was a significant chance that the effect of human-level machine intelligence would be “on balance bad” or “extremely bad.”
He, too, has grown more cautious, focusing on future dangers instead of glorious possibilities. “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans,” he points out, already nostalgic for artistic refinement, scientific curiosity, altruism, spiritual contemplation, and simple human pleasures. The AI we create—and the AIs it creates—will have their own final values. Which is why Bostrom says weak AI is safer, and we might want to avoid creating an AI that will grow so powerful, it will abruptly stop doing our bidding.
“What artificial intelligence lacks is humanity,” says Cloninger. This ought to be obvious, a punch line for a New Yorker cartoon. But it is not. “AI turns out to be very incomplete,” he continues. “It’s good for expert systems under conditions when nothing changes; I use it myself. But it fails repeatedly whenever you have to have insight and wisdom to apply what you know about one situation to another situation. It doesn’t have the foresight, compassion, or self-awareness that were essential for us to survive.”
Cloud cuckoo land
Science is our magic, remarks British philosopher John Gray. It gives hope to people willing to sacrifice almost anything if they can bribe death to disappear. In The Immortalization Commission, he describes “uploading one’s mind into a super-computer, migrating into outer space….” If we cannot deny our death, we will cancel it.
Is that all transhumanism is? A dream of eternal life to replace our cast-aside religions? This dream is even more appealing; it makes us God.
We are already handing the cult of self-improvement over to machines. They have begun to read our minds, and interactive bots are training us faster than we can train them. A Chinese biologist has used CRISPR to edit babies' DNA. The nervous system of a roundworm has been simulated in a body made of Legos. Researchers at the University of Toronto used only the electrical signals from someone's brain to reconstruct a face virtually identical to the one that person saw during the experiment—a practice that could eventually replace eyewitness testimony. The U.S. Defense Advanced Research Projects Agency is developing a chip that would direct soldiers' bodies to manufacture their own pharmaceuticals, fighting sleep deprivation or, later, PTSD.
I think of what I love about war movies. It is the moments of conscience, of injustice righted or civilians rescued or enemies treated with compassion. Who is working to increase that kind of humanity?
In the commercial sector, innovation is often idiotic. Canon, the Japanese camera giant, is using new AI technology to make sure employees are smiling when they walk into a meeting room. Anybody who looks grumpy will be denied entry. This doorminder, says Canon, will create a positive atmosphere. (Because fake smiles and emotional coercion work so well.)
IBM’s Watson AI was scrapped because, when it moved from parlor tricks to actual medical diagnoses and cancer treatment recommendations, it screwed up more often than a sleep-deprived first-year resident whose real dream was culinary school.
A new initiative will synthesize the voice of a parent who works nights or travels, so a robot can read a Winnie the Pooh story in Daddy’s or Mommy’s voice. Funny, I would have thought the point was the parent’s physical presence, tugged away from the grownup world to snuggle and giggle and improvise.
On the other hand, having news, weather, or a story read in a familiar voice to someone confined to a dementia ward could be helpful. These are case-by-case decisions—meaning that humans still make them better than AI can.
Computers still do best at “solving very well specified problems,” points out Brendan Juba, assistant professor of computer science and engineering at Washington University. Humans cope by building mental models; confronted with something strange, we make inferences that give us a plausible hypothesis. “The big stumbling block is giving an AI that kind of flexibility when a problem is new or open-ended. There are aspects of the way humans do this that we don’t quite understand. Creativity is somehow about subconscious thought, creative ways of putting things together—and sleep! The consolidation of memory during sleep.”
Researchers are training AI to reason by analogy, hoping to instill that kind of cognitive flexibility even without sleep. The goal is for an AI to think abstractly about a novel problem, as we do, without being handed zillions of data sets and starting from scratch each time. Juba remains skeptical: “I don’t see any reason to believe that a superhuman AI designer will be significantly more productive than one more human working in AI.” And what is often neglected in armchair speculation, he adds, is that such feats could suck up what is left of Earth’s resources. “Maybe your smart AI can build another robot even smarter and they can build one even smarter, but you just might run out of raw material along the way.”
Hence the transhumanist emphasis on space.
I am no doubt overreacting
What is wrong with me? It should thrill me to think what might be possible if we could sift through hundreds of ideas at once, not those seven chunks brain scientists tell us we can hold in our head simultaneously. Qualities we once thought saintly, like self-control, empathy, and fairness, are in large part shaped by our genes and our neural wiring—so tech could improve them. Our brains could unlearn addictions and dependencies. Our consciousness could expand, busting out of the ego’s nervous self-absorption into a sense of oneness with the universe.
So what if humans are not the highest form of intelligent life? Maybe biology is overrated as life’s main vehicle. Maybe all categories need to be fluid—not just male-female and straight-gay, but human-machine. I already anthropomorphize my car and think up goofy questions for Alexa. Why am I so disturbed by the thought of blurring the boundaries and taking machines into our bodies?
Because I liked our bodies the way they were.
The Dalai Lama is sanguine about the possibility of reincarnating into a computer someday. “Eventually a new type of human being due to these machines? Welcome, no problem.” Maybe I am just stuck in the old Catholic notion that taking control of one’s destiny is hubris, “flying in the face of God.”
“Does God exist? I would say, ‘Not yet,’” Kurzweil quips. God has landed in the machine, in the combined superintelligence of the universe. “We humans are going to start linking with each other and become a metaconnection,” he says in the documentary Transcendent Man. “We will all be connected and omnipresent, plugged into a global network that is connected to billions of people and filled with data.”
How will that work, when we cannot even agree on a world view now?
I keep imagining some celestial coach saying sternly, “First, you must figure out how to be human. Then you can practice being post-human, transhuman, cyberhuman.”
When Pierre Teilhard de Chardin, a Jesuit geologist and theologian, developed the idea of the noosphere, he described it as “a living tissue of consciousness” encircling the globe. It represented human consciousness, evolving and ascending, not a mechanical approximation of consciousness created by vast amounts of data organized by algorithm.
Machines are a more efficient way to take control of our evolution, though. And why should I feel loyal to human nature anyway, when it has done such a piss-poor job of coexisting on this planet?
But given a choice, I would sooner merge my brain with a wise old elephant’s.
Is it too late?
“When people have an outlook of separateness, their view is reductive, materialistic, defensive, and tribal,” Cloninger remarks. “We have a conscience. We have a sense of connection and resonance with other life, with nature as a whole. But transhumanists don’t trust other people. They want control. So they cater to humanity’s worst instincts: consumption, false vanity, insatiable desires.”
What we need to do to save ourselves is not sexy. It requires breaking habits, giving up convenience, and trading our impulsive desires for the long-term greater good. We are not, it turns out, separate from the rest of nature, so we need to stop trying to take control of one bit at a time.
But it is far more fun to make shiny new genius pets and let them erase all that stuff we once thought of as defining us.
• • •
Evenings, I splatter my husband with excerpts from what I am reading. “So, basically, what we’ve done is create a civilization that requires us to lose our humanity,” he says slowly.
I ask Ron Mallon, chair of philosophy and director of the philosophy-neuroscience-psychology program at Washington University, what could prevent market forces from commercializing the way we manipulate our minds and bodies.
“I doubt much will keep this from happening,” he replies, pointing out that “these processes are already underway and have been for some time.”
I pose the question to Bishop, who says, "What's needed is another word that is related to the human: humility. And that is not built into the technoprogressive capitalism that animates this movement."
Is there any way to impose it?
He shrugs. “How could you? They have all the power.”
We could mandate kill switches (Europe already has) to prevent AI from going rogue. That might work in simple, isolated situations. But what about an entire system permeated with AI?
Jotted at the end of my notes on a Doctor Who documentary is a quote: “The cybermen arose out of humanity’s own insecurity.”
To curb transhumanism’s craziest impulses, we would need—and here is a sweet irony—to change human nature.
Read more by Jeannette Cooperman here.