The Common Reader has a sliver of a budget, so to keep much of our content free, we spend that sliver on sporadic boosts that at least let a few more people find us on social media. Lately, though, many of those attempted boosts are being rejected for no apparent reason. They will not even take our chump change! This is happening on a site that shall remain nameless (you are probably there now). Why the coy anonymity? Because the sorting and rejection are done by algorithm, and who knows, those algorithms might also hunt down criticism.
When it happens for the sixth or seventh time, I think, “Oh, surely there will be some simple explanation.” I pore over their policies and user guides, but nothing explains the random rejections. I message by various methods, all but smoke signals, trying to find an explanation and a way to fix whatever we have done wrong. No reply. If there are traditional avenues, like a phone number, they are either encrypted or invisible.
I drive our editor crazy by diving into the internet’s weeds again and again, trying to figure out the problem all by myself. This frenzy probably reads like vanity—“What could possibly be wrong with my sweet little essay?”—but it is not. It is principle. Like a kid abruptly ghosted by his friends, we deserve to know why.
The reason for a post’s rejection is promised to appear if one clicks in the corner, but nothing does. I read that we can request a review of the rejection, but every promising link takes me somewhere else. Painfully aware of my technological illiteracy, I assume I am missing something and return to the hunt.
The new policy is vague: one must make no mention of personal attributes, like race or age. Is that why an essay about beauty was rejected? After searching every possible forum and FAQ page, I conduct an examination of conscience more vigorous than anything I ever subjected myself to as a Roman Catholic. Maybe the problem is our occasional use of profanity—but which words? There is no consistent correlation or pattern. Politics has quieted, at least for a respite, but maybe words like “liberal”—as in, liberal arts, or a liberal dose of cough syrup—are red-flagged by the algorithm? What on earth could be offensive in a reminiscence about pay phones—is a mention of Superman considered Nietzschean?
Facebook has launched automated detection of inauthentic behavior, but surely the Russian troll farms deserve more attention than we do? We could not be more authentic. Stymied, I remember the gleam in my husband’s eye as one proposed email address after another was rejected. He finally stumped the system with the honorific for a Ugandan warlord. I decide I, too, will engage. Forking over my credit card number, I create a personal account and arrange my own two-day boost of that essay on beauty. Then I wait for it to be rejected, hoping I will see some label or category or clue.
Nope. They take my seventeen bucks and immediately boost the post—the same one they rejected two days prior. So, what, the algorithm forgives mysterious violations when you are a new customer? I receive a cute little notice about how well my “ad” is performing compared to everything else on the Common Reader page. You know, all those posts they rejected.
My triumph over a capricious algorithm should tickle me, but I am angrier than ever. The only hope I had about intelligent machines taking over our lives was that at least the results would be consistent and predictable, clean of bias and fickle emotion. This, however, is a crapshoot. Is that word enough to compromise this post? Depends on how often we submit it, I guess, or from where, or in what phase of the Moon.
Aziz Huq, a law professor at the University of Chicago, maintains that arguing with a machine mistake does no good. “States and firms with whom we routinely interact are turning to machine-run, data-driven prediction tools to rank us, and then assign or deny us goods,” his piece on the Psyche.co site begins. This can indeed be frustrating, he acknowledges: it excludes the human voice altogether, silencing us with systemic indifference.
When machines make mistakes with their algorithms, the damage can be colossal. A rejected boost will not end the world, but there have been cases of algorithms that “learned” racial bias or bias against women. Computerized assessments of unemployment fraud—roughly ninety-three percent of reviewed cases were found to be incorrect—brought the state of Michigan millions of dollars in undeserved revenue and, by unfairly garnishing penalties from people’s paychecks, left them financially strapped and helpless for months.
Human rights groups are already demanding that anyone using AI include an appeal and judicial review process for the humans who might suffer from an algorithm’s error. I think of how relieved I feel when, after navigating some computerized telephone flowchart, I finally reach a human being and we can have a proper conversation, full of nuance and backstory and maybe a little kindness and laughter. Yes! We need a way to appeal to human beings!
Huq warns against it. “The resort to a human appeal implicates technical, social and moral difficulties that are obscure at first blush,” he says. If only the educated individuals with resources and savvy use the appeal process, and their cases are treated with sympathy, you have introduced even more inequity. “There’s a substantial body of empirical work showing that appending human review even to a simple algorithmic tool tends to generate more, not fewer, mistakes.”
Are we at the mercy of our machines, then? This is the crux, the reason I could not swat away this buzzing frustration. It leaves me feeling helpless in a way that is bigger than the issue at hand.
Huq says the best approach is not to complain about individual cases in which an algorithm has done somebody wrong, but to report the algorithm’s systematic failure, laying the blame on its lack of capacity. The only analogy I can come up with is a group of parents pointing out faulty hiring practices rather than somebody’s dad storming into the principal’s office to demand an A for his kid. The point being that we need to stay big-picture, look at the overall performance of whatever algorithm is making our lives miserable.
How do we do that when we cannot even reach someone at the company? We will need unions! But can we find others who are having the same problem without using the social media that caused it? How do we cut through the layers to even make a report?
People used to rage against cruel kings or tyrants, the obscenely wealthy, the bourgeoisie. Then against corporations, legal abstractions that amassed wealth without social responsibility, and institutions that misused their authority. There were human beings inside those C-suites and pillared buildings, though, and sometimes you could even catch their attention. If not, Michael Moore would stalk them.
Now, we will have to rage against the mathematical rules of AI.
“The right to a well-calibrated instrument is best enforced via a mandatory audit mechanism or ombudsman,” Huq writes. “Individual complaints provide a partial and potentially distorted picture. Regulation, rather than litigation, will be necessary to promote fairness in machine decisions.”
That might work for machine errors that are detectable and grievous, affecting enough human beings that an outcry will be heard across the moat. But I suspect we are entering an era in which algorithms will cause all sorts of minor, individual problems and aggravations, with no recourse—except for the whining and recompense that those with leverage will resort to no matter how many law profs warn of inequity.
As for the rest of us, we will fall through holes, accidental or cunning, in the structural sieve, and screaming might only make it worse.
Read more by Jeannette Cooperman here.