Deepfakes and Other Disinformation

The House Energy and Commerce Subcommittee on Consumer Protection and Commerce held a hearing this week on the Digital Age and Disinformation, but it was the same morning President Trump spoke, live, about Iran’s retaliation for the Soleimani killing. The hearing on online tech got lost as we all watched to find out if the WWIII memes might come true.

The main topics for the hearing were deepfakes (artificial intelligence used to create manipulated videos showing people saying or doing things they did not) and dark patterns (design elements used to “program” us to do things online against our own self-interests).

Along the way, however, committee members and witnesses aired concerns over online hate speech, fraud, harassment, fake news, democratic subversion (such as “purchasing attention to an issue” by manipulating algorithms), revenge porn, terror plots, human trafficking, illicit firearms, child exploitation, and identity theft.

Facebook VP of Global Policy Management Monika Bickert was present to explain the new Facebook policy on deepfakes, announced the day before. The new policy, which would remove some altered videos, is limited, but Bickert said it was meant to work with other corporate policies already in place.

It is always strange to see our elected officials talk about technologies most of us, including them, do not fully understand. The Chair, Representative Jan Schakowsky (D-IL), told the expert witnesses and the American people that the Internet had created new opportunities “in commerce, education, information, and connecting people.” She added that “with these new opportunities, we have seen new challenges as well.”

She tried to describe Internet “bad actors” but stumbled over what the evil-doers actually do, and she could not say succinctly what our government might hope to do about them. The Federal Trade Commission, for example, with its “lack of resources, authority, and even a lack of will, has left many American consumers feeling helpless in this digital world,” she said. Congress’ laissez-faire attitude has left corporations to do whatever they want, she argued, while Facebook’s “scrambling” to do something about deepfakes was “wholly inadequate.” She brought up the “Speaker Pelosi video,” which was slowed down and pitch-adjusted to make Pelosi seem drunk, though that crude edit was not a deepfake.

The Republican ranking member, Cathy McMorris Rodgers (R-WA), did not have much more luck describing the current state of affairs. She also stumbled so badly reading her statement that she could not get out certain words, such as “Venezuela” or “authoritarian.” I thought briefly of how both she and Schakowsky looked as if they were being faked to look bad.

Predictably, Schakowsky said “Big Tech” has insisted on a “light regulatory touch” as it makes money hand over fist from unregulated practices, while McMorris Rodgers did not want the government to “limit speech and expression.” Her colleague, Larry Bucshon (R-IN), said he was also concerned about “making tech companies the adjudicators of truth.” That effectively left no one responsible except consumers.

The hearing was symbolic of where we are now: engaged with powerful forces beyond most of our control, led by a government far behind the times, and locked in partisan combat.

Monika Bickert, smiling and smiling, said Facebook wanted to work “together with all these stakeholders”—“academics, civil society, and lawmakers”—to “benefit all of society.”

The witnesses were more direct. Joan Donovan, Director of Harvard’s Technology and Social Change Research Project, said online fraud is much more widespread than people realize, and that we need actual governance to fight it, not just corporate platforms policing themselves. In fact, she said, Silicon Valley was profiting from this lack of oversight, while civic institutions struggled to retain trust.

Tristan Harris, Co-Founder and Executive Director of Center for Humane Technology, said the real problem was not specific “bad apples or bots”—that is, not deepfakes or dark patterns affecting us adversely without our knowledge—but the very design of the platforms we love to use.

“We have dark [digital] infrastructure,” he said. He likened Facebook and Twitter to hypothetical private nuclear plants that went unregulated. When they started melting down, the corporations would tell citizens it was their responsibility to buy their own hazmat suits.

An “information meltdown” was coming, he said; there was an event horizon approaching, where we would “either protect the foundations of our information and trust environment, or we let it go away.” The corporations had taken the infrastructure of the real world and “virtualized” it, but left out its protections. As a result, we were moving from a “lawful society” to an “unlawful virtual internet society.”

In the meantime, someone had evidently told Chair Schakowsky that the Pelosi video was not a deepfake. She asked Monika Bickert whether, since the video would not fall under the new Facebook deepfake policy, it would still be taken down. Bickert was cagey: under the policy announced the day before, it would not, she said, though it might be subject to other policies. Deepfakes with the original sound left intact but the video altered also might not be removed from the site.

The most useful things said in the hearing had to do with existing regulatory agencies’ powers. Health and Human Services, one witness said, should be able to audit Facebook to see what they know about addicted users and how they use that information. If nothing else, a legal witness said, it would only take lawsuits to test the agencies’ powers, so Congress could know whether new laws were needed.

There is, by the way, no consistent policy for, or even definition of, deepfakes among the major social media platforms. I imagine that is a sort of freedom the corporations hope to preserve.
