Humans get stuff wrong. We do it all the time. We’re biased, blind and overconfident. We’re bad at paying attention and terrible at remembering. We’re prone to constructing self-serving narratives after the fact; worse, we often convince ourselves they are true. We’re slightly better at identifying these distortions in others than we are in our own thinking, but not by much. And we tend to attribute others’ mistakes to malice, even as we attribute our own to well-intentioned error.
All of this makes the very concept of misinformation—and its more sinister cousin, disinformation—slippery at best. Spend 10 minutes listening to any think tank panel or cable news segment about the scourge, and it will quickly become clear that many people simply use the terms to mean “information, true or false, that I would rather people not possess or share.” This is not a good working definition, and certainly not one on which any kind of state action should be based.
People believe and say things that aren’t true all of the time, of course. When false information influences the outcomes of major elections or, say, decision making during a pandemic, it’s reasonable to consider ways to minimize the ill effects that false beliefs can create. But efforts by public officials to combat them—and tremendous confusion over how to identify them—may well make things worse, not better.
The battle over the appropriate response to disinformation boiled over in late April, when the Department of Homeland Security announced the creation of a Disinformation Governance Board. There appears to have been astonishingly little thought put into how the public might receive such a declaration, including the board’s rather Orwellian moniker and its equally evocative acronym: DGB.
Several panicked clarifications by Secretary of Homeland Security Alejandro Mayorkas later, the board appears to be a relatively small-scale operation focused on an odd assortment of topics, including disinformation originating from Russia that might impact the next US election and the dissemination of false information about US immigration policies by border smugglers. This understanding of disinformation as false information purposely incepted for sinister ends by foreign agents is likely the least controversial formulation of the concept.
Still, as an open letter from Protect Democracy, the Electronic Frontier Foundation, and Columbia University’s Knight First Amendment Institute succinctly put it: “Disinformation causes real harms, but the Constitution limits the government’s role in combating disinformation directly, and the government can play no [...] The announcement of this Board, housed in a Department with a checkered record on civil liberties and without clarity and specificity on its mandate, has squandered that trust.”
“The board does not have any operational authority or capability,” Mayorkas hastened to reassure CNN’s Dana Bash. “What it will do is gather together best practices in addressing the threat of disinformation from foreign state adversaries, from the cartels, and disseminate those best practices to the operators that have been executing in addressing this threat for years.”
If those operators include the social media companies, as seems likely, then the next logical question is what they are supposed to do with this helpful government guidance and how it might be perceived in context.
There are many, many ways to be wrong. In the United States, nearly all of them are protected by the First Amendment. So far, most efforts by the politically powerful to combat misinformation have approached free speech concerns with some degree of circumspection.
During his remarks at a summit on disinformation and democracy, sponsored by The Atlantic and the University of Chicago’s Institute of Politics, former President Barack Obama was careful to say that he understood the limits on state action, even as he advocated transparency laws and other measures: “I am close to a First Amendment absolutist,” he said. “I believe in the idea of not just free speech, but also that you deal with bad speech with good speech, that the exceptions to that are very narrow.” Even better: “I want us all, as citizens, to be in the habit of hearing things that we disagree with, and be able to answer with our words.”
But there’s a reason the announcement of the Disinformation Governance Board was greeted with such a clamor: The public is skeptical that officials will honor the limits of protections for speech, and aware that the status quo has already moved toward censorship by proxy.
Nina Jankowicz, who was tapped to run the DGB, appeared to have a more flexible view of the limits of state power: “I shudder to think about if free speech absolutists were taking over more platforms, what that would look like for the marginalized communities all around the world,” Jankowicz told NPR in April, shortly before the announcement of her new position. “We need the platforms to do more, and we frankly need law enforcement and our legislatures to do more as well.”
At the height of COVID-19, President Joe Biden and his administration repeatedly made what they called “asks” of social media and search companies to remove content the administration considered disinformation. Biden also accused social media companies of “killing people” by allowing the spread of anti-vaccine messages. (He later amended his remarks, telling reporters “Facebook isn’t killing people” but maintaining that a small group of Facebook users spreading misinformation was: “Anyone listening to it is getting hurt by it. It’s killing people.”) White House Press Secretary Jen Psaki elaborated that the administration was “flagging problematic posts” containing “information that is leading to people not taking the vaccine,” while calling for the platforms to institute such changes as downplaying certain content and automatically banning users who have been suspended on other sites.
Again, after having been accused of actual murder by the president of the United States, it seems likely those firms greeted those “asks” as something more akin to “demands.”
A careful reader might also note that the accuracy of those “problematic” posts seems less central to the administration’s thinking than the behavior they might occasion. That lack of clarity was echoed by Surgeon General Vivek Murthy, who has called on tech companies to collect and hand over data about “COVID misinformation,” including its sources and its propagation through search engines, social media platforms, instant messaging services, and e-commerce sites. In an advisory on the topic, he recognized that he cannot compel them to do this. But the companies would hardly be engaging in wild speculation to wonder what consequences might befall them if they don’t cooperate.
“Defining ‘misinformation’ is a challenging task, and any definition has limitations,” Murthy concedes. He favors a definition that relies on “best available evidence,” but he acknowledges that “what counts as misinformation can change over time.”
The most notable recent case study of this phenomenon is guidance from public health officials about mask efficacy and best practices around mask wearing over the span of COVID-19. Under Murthy’s understanding of “misinformation,” the same post noting the weaknesses of poorly fitted cloth masks would have gone from being legitimate information to problematic misinformation and back again over the course of the pandemic.
The notion that a government-codified understanding of the “best available evidence” should be the standard for identifying misinformation demonstrates a spectacular misunderstanding of both free speech and the process of scientific inquiry—and a troubling lack of humility.
The problem is that governments are made of humans. And humans get stuff wrong.