Tempting as it would be to go back to that celeb chatbot and ask if it expected such a flurry of attention, I need a breather from all these human-interviews-the-AI stories. Alternately fascinated and terrified, wowed and cynical, I have finally settled down to the kindergarten-level realization that it is smarter than we are in some ways, dumber in others, titillating because it is us-and-not-us, like a doppelganger or a hologram. Pouncing on the chatbot’s rather major errors felt gleeful, as though we were yanking back our self-respect, maybe even our self. Yet how seductive it remains, watching those surprisingly lucid answers to just about any question materialize in a split second….
Easy to forget, in these dazzled months, how much information is lost along the way, discarded by algorithm before we had a chance to see it for ourselves. Creepy to realize that the singularity is in some ways already here: we are looking at a future in which we will have no idea whether a human being wrote what we are reading. Does it matter? “Of course,” I want to snap. “It’s a trust thing.” But why would we trust a hotheaded, often addled human brain more than we trust a coolly unbiased machine?
Perhaps because it is we who trained that machine, so it is not unbiased after all. It just pretends to be. That places it in the ranks of con artists and magicians, not typewriters and telephones. As Ashutosh Jogalekar points out for 3 Quarks Daily, ChatGPT is “dangerous not because the machine is intelligent but because humans will no longer be able to distinguish between intelligence and the illusion of intelligence.”
Bash Ahmed, a friend who is both philosopher and computer scientist and has been steeped in the field for the past twenty years, echoes this point in a wry email: “The model did succeed dramatically—by making people believe it.”
Apologizing because the analogy could sound condescending, Bash suggests comparing a chatbot to a television set: “There aren’t people inside.” Nor is there an intentional, conscious agenda inside a chatbot, even though most of the “interviews” and conversations reporters are staging tacitly presume one. Humans have a tendency to project themselves everywhere they look.
The truth, says Bash, is that “we are talking pattern recognition, nothing more. Fear was probably not the intended outcome. Granted, when we come to pattern recognition, we’re already talking about the fundamentals of intelligence—according to Kurzweil at least. But that’s it, at this point. In the right direction, but still quite far from even ‘See Spot run.’”
My heart rate has just started to slow when he continues: “While it can’t do the things it claims, ChatGPT does an excellent job of producing computer code that runs well. Virus makers have already started using it. So if there were an intelligence out of control, it’s not a far leap to get scared.”
Right. And while we wait, I have found new things to scare us. Less sensational, less talked-about things that, taken together, are far more ominous. AI apps now track when women menstruate, when the user’s blood pressure soars, when somebody has a nightmare. Some even try to put us in time-out, disguised as a moment of mindfulness to ease our stress. Twitter and Facebook have learned how to warn us before we post something potentially inflammatory. Halo monitors our tone of voice. Apple is using facial expressions and speech patterns to detect depression and mood disorders—and hopes to someday intervene. EEG is now used in consumer devices to track brain activity, which will tell advertisers how we respond to their bullshit. Coparenting apps are easing acrimony between exes by analyzing their messages’ sentiment and red-flagging any emotionally charged, potentially hostile phrases. Gottman Inc.’s proprietary AI uses the gestalt of facial expressions, tone, body language, and conversational topics to assess the health of your romantic relationship. Woebot, Wysa, and other chatbots offer AI therapy. Based on, er, humanistic psychology.
Except the human element is being removed. We no longer need a partner or best friend to say, “You look stressed, why not take a break?” Exes have less room to demonize one another, because their communications have been muted and massaged. Nobody can accuse the AI of being on the other person’s side, an accusation that has got to be the worst part of doing couples counseling.
Maybe this is cleaner. But can you sense a flattening, as we lose some of the messiest, most instructive interactions and automate wisdom, negotiation, health, and self-expression?
Compared to all the intimate data-mining, Google’s Smart Reply looks innocuous. “Great idea!” the AI wants me to say. “Thanks very much!” What could be the harm? The AI responses are cheerful, even perky, and apparently far more positive and affirming than humans manage to be on their own. Jess Hohenstein, a researcher at Cornell University, has found that this insistent positivity makes recipients feel better about the message, changing how they interact and how they perceive the message’s (human) sender.
They feel better unless they suspect that the message is coming from a chatbot, in which case they feel betrayed and loathe the sender. That trust issue again. It is like learning that your spouse’s executive assistant was the one who ordered the roses and made the dinner reservation. Does it really matter? Yes. Because it truly is the thought that counts. And we are gradually handing over more and more of the thinking to our computers.
Hohenstein also notes that the efficiency of AI responses might decrease human language production. Our skills will atrophy, just as they do when we lean too hard on texting, with only an emoji to convey tone and no need to listen to the other person’s voice and react in real time. Texting is easier—and that should be a clue, because nearly always, the hard part is what deepens a relationship.
AI-generated responses are basic, flat, devoid of charming idiosyncrasies, spontaneity, creativity, wit. Soon I suspect we will be able to customize by asking for witty, wacky auto-responses, and that will be even more awful.
It is instinctive to be scared by the rushing, roaring deluge of a large language model’s output as it awes us, engulfs our jobs, and endangers our assumptions about intelligence, art, and culture. But the real erosion of the self is happening drip by drip.