“The First Amendment bars the government from deciding for us what is true or false, online or anywhere,” the ACLU recently tweeted. “Our government can’t use private pressure to get around our constitutional rights.”
The ACLU was responding to a report by Ken Klippenstein and Lee Fang of The Intercept alleging that the federal government works in secret to suggest content that social media companies should suppress. The Intercept claims that years of internal DHS memos, emails, and documents, along with a confidential source within the FBI, reveal the extent to which the government secretly works with social media executives to suppress content.
After a few days of cool appraisal of this story, we have to say we have more questions than answers. It is fair to note that The Intercept has had its share of journalistic controversies, with questions raised about the validity of its reporting. It also appears that this report relies significantly on a lawsuit filed by the Missouri Attorney General, a Republican candidate for the U.S. Senate. We've also sounded out experts in this space who speculate that much of the content the government is flagging is probably illegal, such as Child Sexual Abuse Material.
There is also reason for the government to track state-sponsored propaganda, malicious disinformation, or use of a platform by individuals or groups that may be planning violent acts, and to report it to websites. If Russian hackers promote a fiction about Ukrainians committing atrocities with U.S. weapons – or if a geofenced alert falsely announces that an election has been postponed due to the threat of inclement weather – there is good reason for officials to act.
The government possesses information from its domestic and foreign intelligence-gathering that websites don't have, and timely sharing of that information could help websites remove content that poses a threat to public safety, endangers children, or is otherwise inappropriate for social media. It would certainly be interesting to know whether the social media companies find the government's information-sharing helpful or whether they feel pressured.
The undeniable problem here is the secret nature of this program. Why did we have to find out about it from an investigative report? The insidious potential of this program is that we will never know when information has been suppressed, much less if the reason for the government’s concern was valid.
The Intercept reports that the meeting minutes appended to Missouri Attorney General Eric Schmitt’s lawsuit include discussions that have “ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.”
In a March meeting, one FBI official reportedly told senior executives from Twitter and JPMorgan Chase that “we need a media infrastructure that is held accountable.” Does she mean a media secretly accountable to the government? Klippenstein and Fang report a formalized process by which government officials can directly flag content on Facebook or Instagram and request that it be suppressed. The Intercept included a link to Facebook’s “content request system,” which visitors with law enforcement or government email addresses can access.
The Intercept reports that the purpose of this program is to remove misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent). According to The Intercept, the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”
The Intercept also reports that “disinformation” is not clearly defined in these government documents. Such a secret government program may include information gathered from activities that violate the Fourth Amendment prohibition on accessing personal information without a warrant. It would also be, to amplify the spirited words of the ACLU, a Mack Truck-sized flattening of the First Amendment.
One cannot ignore the potential that the government is doing more than helpfully sharing information with websites along with a suggestion that it be taken down. Is the information-sharing accompanied by pressure exerted by the government on the website? From the information now available, we simply don't know.
Bottom line: if these allegations are true, the U.S. government in some cases may be secretly determining what is and what is not truth, and on that basis may be quietly working with large social media companies behind the scenes to effect the removal of content. So, the possible origin of COVID-19 in a Chinese laboratory was deemed suppressible, until U.S. intelligence agencies reversed course and determined that a man-made origin of the virus is, in fact, a possibility. And the U.S. withdrawal from Afghanistan? Is our government suppressing content that suggests that it was somehow a less-than-stellar example of American power in action?
If these allegations are true, Jonathan Turley, George Washington University professor of law, is correct in calling this “censorship by surrogate.”
This program, which Klippenstein and Fang report is becoming ever more central to the mission of DHS and other agencies, is not without its wins. “A 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent – either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen.” On the other hand, the Stanford research shows that in 65 percent of cases the websites exercised independent judgment and left the content up notwithstanding the government’s suggestion.
After mulling this over for a few days, we propose the following:
There is no reason why the government cannot stand behind its finding that a given post is the product of, say, Russian or Chinese disinformation, or a call to violence, or some other explicit danger to public safety. But we need to know if the most powerful media in existence is subject to editorial influence from the secret preferences of bureaucrats and politicians. If so, this secret content moderation must end immediately or be radically overhauled.