“Oh, I never give my social security number,” a friend told me years ago. She grew up ahead of me, in the decade when love was supposed to be free and authority questioned. “When they want my phone number,” she added, “I give my sister’s but change one digit.”
I was young, docile, and aghast. What if her medical records got screwed up? What if the government got mad?
Now, the world has me profiled and she remains an enigma. Wise, wise friend.
“Your data has become something that is increasingly inescapable,” says philosopher Colin Koopman, author of How We Became Our Data. “There’s so much of our lives that are woven through or made possible by various data points that we accumulate around ourselves.” Our credit score tells banks whether we can have that new house. Our biometric measurements tell health insurers when to scold us. We are vulnerable, Koopman continues, “in the sense of being exposed to big, impersonal systems or systemic fluctuations.”
My thoughts get uploaded and sifted by a search engine, God knows how. AI is now taking over, and has been proven to be biased. “Inequality is just piling up,” Koopman says—and even my friend admits she can no longer foil the Systems. Handing over your data is as obligatory as a patdown by the cops; we resist at our own risk. Huge corporations have “set up this model of, You don’t understand what we do, but trust us that you need us, and we’re going to vacuum up all your data in the process.”
A twinge of guilt here. When I worked for a city magazine, we were so excited about grabbing readers’ data—their emails, their demographics. We had to know our readers, we were told, and this sounded so sensible—until I realized that the real point was for our advertisers to know our readers, so they could target not just their interests but their cash.
That is the small-scale, almost benign beginning of what now feels like a giant surveillance system. (Do we somehow feel obliged to live out dystopian scripts? Because everyday life is starting to look like one of those near-future science fiction movies that really ought to be classed as horror.)
Koopman is worried about more than jewelry stores homing in on middle-aged women with money to burn. Consider the algorithm that assumed a group of Black patients was fairly healthy—even though they were actually quite sick—because less money was being spent on their care. We thought we knew how injustices could be perpetuated before? Now our machines are automatically compounding them.
In “Your Fitbit Has Stolen Your Soul,” another philosopher, Justin E.H. Smith, notes tech’s increasing ability to monitor our bodies. It sounds so efficient, like one of those gizmos they wand over you on Star Trek for an instant diagnosis. But it could easily become a “way of ensuring that a patient is adhering to some court-ordered medical regime of anti-psychotics or chemical castration.
“It is not hard,” Smith continues, “to imagine a near-future scenario in which countless data-points from all of our bodies are quietly and unceasingly transmitted to the cloud and available for inspection by ‘the authorities.’”
Once again, technology that could be beneficial—our new ability to monitor heart rhythms or blood sugar noninvasively, watch over a difficult pregnancy, diagnose sleep apnea before it causes a stroke—is bound to be co-opted by greed. It will masquerade as concern for our health. It always masquerades as friendly help of some sort. And the next phase will track not only our finances, our consumer behavior, our bodily states, but our emotions, thoughts, and patterns of reaction. What we do on social media. What our politics are. Who we love and who we hate. What we tell our therapist.
Not so different, I suppose, from the detailed dossiers the FBI has always kept. Algorithms are created by human beings and operate on human-generated data. But they learn on their own, in ways that are not human, and that are difficult to police with a human sensibility. And we are all persons of interest.
What is most insidious is how easily seduced we are, comforted by the robots’ unfailing interest in our lives’ minutiae, the algorithm that remembers what we were shopping for, the software that stays up to watch us sleep. People no longer talk about walking; they talk about steps. Social media trains us with clicks in exactly the same way that people train dogs with clickers. Yet concern about the way companies use our data has risen by only 8 percent since 2013.
The few people who have enough energy, outrage, and sense of agency to throw stones at the giants are reduced to acts that sound as crazy as my friend’s early refusal did to me. One form of this new data resistance, for example, is data poisoning: “intentionally sending false or meaningless data to Google, Facebook, and others in order to confuse the algorithms they use to make sense of our online behavior.” Again, my naïve self is appalled—what a mess that will make! But when systems this large and powerful are keeping tabs, how else do you confound them? The old forms of political activism are impotent, because “we face new forms of power that are less amenable to those kinds of protest,” David Mattin writes. “Our ultimate sanction against the big technology companies is to stop giving them our custom. But as 2020 proved, they have insinuated themselves into our lives to such an extent that this feels almost impossible.”
Meanwhile, the government itself is losing power. We do not have a paranoid J. Edgar Hoover to outwit. We have global technology companies far more insulated and, today, more powerful than the feds.
“A new age of data resistance is about to begin,” Mattin concludes. And a chill runs down my spine.
Read more by Jeannette Cooperman here.