Loyalty Tests

China is developing artificial intelligence programs that “extract and integrate facial expressions, EEG readings and skin conductivity” in order to measure someone’s “mastery of ideological and political education.” A loyalty test, in other words. The announcement came from the Hefei Comprehensive National Science Center, which posted it this summer on the Weibo social media platform. Radio Free Asia translated the post, and The Times of London picked it up—and then the original post was deleted. So the rest of the world is not likely to learn more about this new application of “artificial intelligence empowering party-building.” Its euphemism for loyalty test? “Guaranteeing the quality of party-member activities.”

The AI was tested by reading brain waves and analyzing facial scans while people read articles about the Chinese Communist Party.

This bit of tech meshes nicely with what we already believe about China’s autocratic rule, its use of surveillance to keep its citizens in line, its insistence on ideological conformity. We can read about it (or we could, until it was deleted) with horrified shock, but we feel no worry: the freewheeling, individualistic U.S.A. would never tolerate such intrusion.

Would it?

In 2020, the White House’s presidential personnel office brought in officials from the Health and Human Services, Defense, Treasury, Labor and Commerce departments (including Senate-confirmed appointees) for one-on-one interviews that many of them privately derided as “loyalty tests.” They were first asked about their career goals, then their thoughts on the Trump Administration’s current policies. The ostensible goal was to learn who would be willing to serve if Trump were elected to a second term. But one official called the interviews “an exercise in ferreting out people who are perceived as not Trump enough.”

Trump is an exception, one hopes. An anomaly. On the other hand, we have already developed algorithms that turned out to be racially biased. Our fancy facial recognition software, used in law enforcement and to screen airline passengers, has been shown to be both inaccurate and racially biased. The AI currently used to determine loan eligibility can easily be biased in a number of directions, as can AI recruiting tools and AI programs used for decision-making in healthcare institutions. And the algorithms interconnect; as a Brookings analyst notes, AI could detect an increase in healthcare spending, decide someone is sick, and change their credit eligibility.

A new algorithm, developed at the University of Chicago, claims to predict future crimes with 90 percent accuracy, based on spillover from hotspots (which tend to be Black and low-income neighborhoods). Let us hope this algorithm does better than Chicago’s 2012 Crime and Victimization Risk Model, which tagged certain individuals as potential shooters who needed to be monitored. Five years later, a Chicago Sun-Times investigation revealed that nearly half of those tagged had never been charged with illegal gun possession, and 13 percent had never been charged with a serious offense.

You cannot use biased humans to design a bias-free system. You cannot feed vast databases filled with biased or distorted information into a computer program and expect a clean and pure result. And you cannot hand off the responsibility of judging human nature to a nonhuman entity.

Why do we keep trying?

Because it seems cleaner and faster. Because we are overloaded with data and the complexity has outstripped us. Because we do not trust ourselves to assess risk or make smart hiring decisions. Managers love the computer programs that screen for applicant skills because that winnows the pile. They do not have to decide which of five hundred people has the interest, potential, and dedication to learn; they only need to pick one of ten who already has learned the technical tools of the job. Granted, that does not mean they will continue to learn, come up with creative ideas, or devote themselves to the organization’s goals. But the decision is made, and if it proves wrong, the computer can take the blame.

Those are pragmatic, revenue-driven decisions, though. Surely AI would never be used in this country (except by Trump) to test for ideology or civic pride….

I can see us using it to identify whistleblowers, though. To interrogate people suspected of espionage. To screen candidates for the CIA, the FBI, or military intelligence. To give security clearance. To test for deception.

Would I use it? My knee jerks and quivers. What if I suspected my husband of infidelity? (Less sophisticated “loyalty tests,” in which TikTokers send flirtatious messages to someone’s spouse or lover to see if they will respond, have tempted millions.) What if I were a CEO and suspected an employee of sabotaging the company? Even hypothetically, the temptation is strong. And in debating how I would respond, I have already forgotten that the AI results are likely to be false, skewed, biased. Just three paragraphs after listing all the flaws in various algorithms, I have forgotten to be skeptical. My brain has made the tacit assumption that the AI could give me the answer I need. Why? Because I have been conditioned to think of a machine as objective and unbiased. Despite all the shocked news articles about the latest intrusive AI, such naivete is widespread.

And dangerous.

Now a software engineer turned whistleblower reports that in Xinjiang, police are testing emotion recognition technology on Uyghur citizens. Twelve million Uyghurs live in that region, and more than one million are already in detention—China says “re-education”—centers. Under tremendous pressure, with their liberty and safety under constant threat, Uyghurs are forced to provide samples of their DNA and undergo digital scans. QR codes are placed on the doors of their homes. And they are used as test subjects. This time, the test is for emotion recognition: if the AI and facial recognition technology pick up a “negative or anxious state of mind,” the individual is assumed to be guilty. Perhaps their pores expanded slightly, or their pupils contracted. A slight twitch at the corner of the mouth?

“People live in harmony regardless of their ethnic backgrounds and enjoy a stable and peaceful life with no restriction to personal freedom” was the response from the Chinese embassy in London, which denies use of this technology and denies testing it on Uyghurs.

Would we use emotion recognition technology? How handy it would be. Surely more accurate than a polygraph, and we already use those….
