On “Feeling” Machines

Stevie Wonder has not updated his 1984 classic, “I Just Called to Say I Love You,” to reflect the reality that more people these days text their messages of love and longing rather than, well, call. For many, talking on the phone with another human being is a quaint practice at best, an intrusive one at worst. Many people, like my father, eschew voicemail entirely and do not even bother setting up an outgoing message.

The fact of the matter is we do a lot by text these days: break up, propose, sext, and more. Still, our preference for conveying emotions over text may not warrant an update to that 1980s anthem if research from Penn State’s Media Effects Research Lab continues to influence how AI developers program chatbots. While we humans may be directing texted conversations now (minus those who are catfishing or channeling their inner Sierra Burgess, who in turn channels the 1897 play Cyrano de Bergerac), machines may become ever more prominent social actors going forward, especially if we prefer that machines express the human feelings we struggle to express ourselves: empathy and sympathy.

Bingjie Liu, a fourth-year PhD student in mass communications, and Dr. S. Shyam Sundar, James P. Jimirro Professor of Media Effects and co-director of the Media Effects Research Laboratory, reported that the 88 participants in their study preferred to receive “sympathetic and empathetic responses from a chatbot” rather than unfeeling, more Spockian responses (the Star Trek Spock, to be sure, not the renowned pediatrician).

In an October 16, 2018, article in Cyberpsychology, Behavior, and Social Networking, Liu and Sundar explore the question “Should Machines Express Sympathy and Empathy? Experiments with a Health Advice Chatbot.” They found that participants preferred sympathy or affective empathy over cognitive empathy or advice-only information. Cognitive empathy fared worse largely because that type of feedback came across as distant, almost “antiseptic,” while affective empathy acknowledged how and why a user might feel a certain way. That simple act of acknowledgement can go a long way toward providing emotional support, even when it is programmed into bots.
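To make the distinction concrete, here is a minimal sketch of how a health-advice chatbot might wrap the same piece of advice in the four feedback styles the study compared. The template wording, the respond helper, and the sample advice below are all invented for illustration; they are not the researchers’ actual stimuli or code.

```python
# A hypothetical illustration of the four feedback styles Liu and Sundar
# compared. The wording of each template is invented for illustration only.

ADVICE = "Try to get seven to eight hours of sleep and limit screen time at night."

RESPONSE_STYLES = {
    # Advice only: no acknowledgement of the user's feelings.
    "advice_only": "{advice}",
    # Sympathy: expresses sorrow for the user's situation.
    "sympathy": "I am sorry to hear that. {advice}",
    # Cognitive empathy: understands the user's perspective, but at a distance.
    "cognitive_empathy": "I understand your concern about this. {advice}",
    # Affective empathy: shares and validates the feeling itself.
    "affective_empathy": "That sounds stressful; I can feel how worried you are. {advice}",
}

def respond(style: str, advice: str = ADVICE) -> str:
    """Wrap a piece of health advice in one of the four feedback styles."""
    return RESPONSE_STYLES[style].format(advice=advice)

if __name__ == "__main__":
    for style in RESPONSE_STYLES:
        print(f"{style}: {respond(style)}")
```

Even in a toy like this, the difference the study points to is visible: only the affective template acknowledges the feeling itself, rather than the problem or the user’s reasoning about it.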

Perhaps, one day in the near future, we will simply delegate a break-up to a chatbot skilled in the ways of ‘feeling machines,’ or to AI scripts imitating ‘authentic’ human emotions. The open question in such a futuristic scenario is whether humans are receptive to “humanlike machines and robots.” Science Daily’s headline about the findings hints at the unease some may feel: “Empathetic Machines Favored by Skeptics But Might Creep Out Believers.” Perhaps what creeps out the “believers” is that artificial intelligence feels less and less contrived and sterile, less combative and hostile than Stanley Kubrick’s rendition of Arthur C. Clarke’s HAL 9000 in 2001: A Space Odyssey or the android of Alex Garland’s Ex Machina. Will we one day, in the not-so-distant future, ask Alexa not just for a song or a recipe or a weather report, but also for a few encouraging words?

Michaella A. Thornton

Thornton’s writing has appeared in Brevity, Creative Nonfiction, New South, The Southeast Review, The New Territory Magazine, Midwestern Gothic, and a University of Missouri Press anthology, Words Matter: Writing to Make a Difference (2016). After graduating from the Missouri School of Journalism, Thornton interned with National Public Radio’s Weekend Edition Saturday and the Tucson Weekly. She earned her MFA in creative nonfiction from the University of Arizona.