A new report from Harvard’s Belfer Center for Science and International Affairs warns of dangers from artificial intelligence (AI) unlike those posed by the androids of dystopian science-fiction movies. This AI, explains the report’s author, Bruce Schneier, is disembodied, task-specific, unpredictable, and probably already with us.
Bruce Schneier has published 14 books, including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. He is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a board member of the Electronic Frontier Foundation, “the leading nonprofit organization defending civil liberties in the digital world.”
As Schneier points out, human life has always had hacks: ways around a system’s formal rules to get something someone wants, often faster or more cheaply. “The filibuster is an ancient hack, first invented in ancient Rome,” he says, as is gerrymandering.
“We manipulate systems to serve our interests,” he says. “We strive for more influence, more power, more wealth. Power serves power, and hacking has forever been a part of that.”
Tax loopholes may violate the spirit of a system, but they are not illegal, at least until the rules are changed to close them once they are discovered. Schneier points to “a corporate tax trick called the ‘Double Irish with a Dutch Sandwich,’” by which companies like Google and Apple shift profits to subsidiaries abroad, avoiding US taxes. Big tech companies, he says, “avoided paying nearly $200 billion in US taxes in 2017 alone.”
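To see why such routing is worth the trouble, here is a minimal back-of-the-envelope sketch. The profit figure and the post-routing “haven” rate are invented round numbers, not any company’s actual accounts; only the 35 percent pre-2018 US statutory corporate rate is real.

```python
# Illustrative only: invented round numbers, not any company's real figures.
# The "Double Irish with a Dutch Sandwich" routed profit through Irish and
# Dutch subsidiaries so that little of it faced the US corporate rate.

US_RATE = 0.35         # US statutory corporate rate before the 2017 tax act
HAVEN_RATE = 0.025     # assumed effective rate after the subsidiary routing

profit = 10_000_000_000  # hypothetical $10B in intellectual-property profit

tax_if_booked_in_us = profit * US_RATE
tax_after_routing = profit * HAVEN_RATE

print(f"Tax if booked in the US:  ${tax_if_booked_in_us:,.0f}")
print(f"Tax after the 'sandwich': ${tax_after_routing:,.0f}")
print(f"Avoided:                  ${tax_if_booked_in_us - tax_after_routing:,.0f}")
```

On these assumed numbers, a single routed $10 billion keeps more than $3 billion away from the US Treasury, which is how the totals Schneier cites accumulate across companies and years.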
In this sense, Schneier is less concerned with AI “hacking into” your email or bank account than with AI “finding vulnerabilities in all sorts of social, economic, and political systems,” something computers have already made easier. With AI, that means “exploiting them at an unprecedented speed, scale, and scope. It’s not just a difference in degree; it’s a difference in kind. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage.”
This could happen in a couple of ways, Schneier says. One is that a company or companies could “feed an AI the world’s tax codes or the world’s financial regulations, with the intent of having it create a slew of profitable hacks.” Enough of these, spread widely, could bring down the global economy before we knew what had happened or why.
AI might also inadvertently hack a system as it tries to solve a problem it was assigned. This, Schneier believes, is the more dangerous possibility.
As he points out, AI as it exists is a kind of “black box,” even to its designers. It is fed data and arrives at conclusions without explaining its own steps. “In 2015, a research group fed an AI system called Deep Patient health and medical data from approximately 700,000 individuals, and tested whether or not the system could predict diseases. […] Weirdly, Deep Patient appears to perform well at anticipating the onset of psychiatric disorders like schizophrenia—even though a first psychotic episode is nearly impossible for physicians to predict. It sounds great, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but can’t query it for more info.”
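Deep Patient’s code is not reproduced in the report, but the black-box problem is easy to demonstrate with any off-the-shelf neural network: the model returns a prediction and a probability, and nothing more. The sketch below uses synthetic data and a generic scikit-learn classifier as a stand-in, not Deep Patient’s actual pipeline.

```python
# A stand-in for the black-box problem: a neural network predicts, but
# offers no rationale. Synthetic data; nothing here is Deep Patient itself.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 20))               # 700 synthetic records, 20 features
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # hidden rule the model must rediscover

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

new_patient = rng.normal(size=(1, 20))
print(model.predict(new_patient))        # e.g. [1] -- "at risk"
print(model.predict_proba(new_patient))  # a probability, but still no reason
# There is no model.explain(): a doctor can trust the output or ignore it,
# but cannot query the network for the basis of its prediction.
```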
While AI might find innovative solutions, “thinking outside the box” in ways humans have failed to do, it is neither ethical nor moral, and it is not concerned with anything it has not been told to care about, such as the overall survival of the system it has gamed. We might not even know it has acted.
One possible scenario builds on political interference in recent elections.
“Disinformation hacks our common understanding of reality,” Schneier says. “It doesn’t take much imagination to see how AI will degrade political discourse. Already, AI-driven personas can write personalized letters to newspapers and elected officials, leave intelligible comments on news sites and message boards, and intelligently debate politics on social media.”
AI “persona bots” pose as human beings with “histories, personalities, and communications styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. [O]nce in a while, the AI posts something relevant to a political issue. Maybe it’s an article about an Alaska healthcare worker having an allergic reaction to the COVID-19 vaccine, with a worried commentary. Or maybe it’s something about a recent election, or racial justice, or anything that’s polarizing. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?”
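The arithmetic behind that last question is worth making explicit. In the sketch below, every number (bot count, posting rate, political fraction) is invented for illustration; the point is only how quickly “once in a while” compounds at scale.

```python
# Back-of-the-envelope scale of the persona-bot scenario; all rates invented.
bots = 1_000_000           # "what if there were... millions?"
posts_per_bot_per_day = 5  # mostly gardening, knitting, model railroading
political_fraction = 0.02  # "once in a while," a political post

political_posts_per_day = bots * posts_per_bot_per_day * political_fraction
print(f"{political_posts_per_day:,.0f} political posts per day")  # 100,000
# Each bot looks like an ordinary community member; in aggregate they inject
# a hundred thousand on-message political posts into the discourse daily.
```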
“AI will make the future supply of disinformation infinite,” Schneier says, and we may find ourselves passively watching millions of bots “discourse” with each other on our platforms, thinking we are part of the conversation when we are not.
Despite our tendency to anthropomorphize (witness The Terminator), this kind of AI is not human and has no wish to be. Even Schneier cannot quite shake the tendency: he describes AI as thinking like “aliens.” What he means is that “AIs won’t be constrained in the same ways, or have the same limits, as people.”
Meanwhile, “AI is already making important decisions that affect our lives—decisions we used to believe were the exclusive purview of humans,” Schneier says. These include decisions about bail and parole, bank loans, job applications, college admissions, and eligibility for government services.
“They make decisions about the news we see on social media, which candidate’s ads we see, and what people and topics surface to the top of our feeds. They make military targeting decisions,” Schneier says.
In time, “AIs might choose which politicians a wealthy power broker will fund. They might decide who is eligible to vote. They might translate desired social outcomes into tax policies, or tweak the details of social programs.”
That is, there is a risk they might become our overlords, without even the need to don the mask of a human face.