Clearview’s First Amendment Defense Rejected in Illinois Court
Consider: With a single click from a smartphone or capture from a public camera, the government can now identify you and sweep your social media. This means that at a glance, the government can know how you vote, your religious beliefs, your relationships and activities.
The power of this technology for surveillance is chilling, but also tempting for those tasked with law enforcement and intelligence collection. The Government Accountability Office last week reported that 10 of 24 federal agencies surveyed plan to broaden their use of facial recognition technology by 2023. Ten agencies are also investing in research and development for the technology.
At the forefront in commercializing this technology is Clearview AI, the controversial startup that is a leader in providing facial recognition capability to public entities. In late August, Business Insider discovered a contract between Clearview and the U.S. Army’s Criminal Investigation Command.
At the same time, the story broke that Clearview's facial recognition system has been trialed by police, government agencies, and universities around the world. BuzzFeed News revealed that Clearview, following the grocery store model of giving away free samples, is offering its technology on a trial basis to law enforcement agencies, governments, and academic institutions in 24 countries, including the UK, Brazil, and Saudi Arabia.
The only negative news for Clearview in the last week came in an Illinois state court, where the company stands accused of violating the Illinois Biometric Information Privacy Act. This ACLU and ACLU of Illinois lawsuit against Clearview moved forward when a judge rejected Clearview’s contention that the First Amendment protected its surveillance activities.
The use of facial recognition technology, if not checked, will soon be ubiquitous and inescapable. A recent House Judiciary Committee hearing on the government’s pervasive use of facial recognition technology brought leading Democrats and Republicans together to warn of the potential of this technology to encroach on our fundamental rights as Americans.
Chairman Jerrold Nadler began the hearing by noting “facial recognition technology has proliferated in a manner largely unchecked by Congress.” He spoke of a rising tension: the technology is now a commonplace fixture in our lives, yet the American people have little understanding of how pervasive and powerful it actually is.
Ranking Republican Jim Jordan added that a recent GAO report “makes clear that the federal law enforcement agencies using facial recognition technology haven’t even assessed the risk when using this technology.”
Some other choice excerpts from the hearing:
Rep. Karen Bass (D-CA) on Error Rates
“We can be certain of one thing: most if not all facial recognition systems are less accurate for people of color and women. For the most part, we can be confident that the darker your skin tone, the higher the error rate. Studies have found error rates in facial recognition software to be up to 34 percent higher for darker skinned women than lighter skinned men. It is not just sometimes wrong; it can be wrong up to a third of the time.”
Rep. Andy Biggs (R-AZ) on Constitutional Rights
“I am also concerned about the potential for First and Fourth amendment erosions that facial recognition technology can cause. Law enforcement agencies could potentially use the systems for the surveillance of individuals not involved in any suspicious activity whatsoever.”
Barry Friedman, New York University School of Law, on Different Kinds of Harms
“There are very, very serious costs, very, very serious potential harms. There are racial harms from the disparities. There are privacy harms. There are harms of giving too much power to the government, as we can all see by the use of this technology by totalitarian governments.”
Kara Frederick, The Heritage Foundation
“Reports that the Biden administration intends to expand the use of private companies unencumbered by constitutional strictures, and with a history of reckless privacy practices are troubling. Although government entities like the DHS have long used private firms to identify patterns in publicly available information, a renewed push to make use of outside expertise for domestic spying on the heels of the new White House plan to counter domestic extremism portends potential Fourth Amendment concerns.
“Now, multiple data sources can be aggregated and synchronized to allow governments to look for patterns in citizens’ behavior.
“This can engender a climate of fear, self-censorship, and the chilling of free speech and the right to peaceably assemble in public places. While authoritarian powers like China are at the bleeding edge of using facial recognition for internal control, the demonstrated inclination by governments to expand these powers in democratic nations renders the slope a slippery one. And we know that once these powers expand, they almost never contract.”
Barry Friedman informed the committee that recent studies of facial recognition technology by the National Institute of Standards and Technology (NIST) are not telling us much about the accuracy of this technology when law enforcement uses it, because the government uses it with a different process and much larger databases.
No wonder many groups – from the ACLU to the Heritage Foundation – are questioning the expansion of facial recognition technology by law enforcement. Many civil liberties groups are calling for a complete halt to the use of the technology. At the very least, absent serious “hot pursuit” cases, it makes sense to require probable cause warrants to use it. The enormous trove of data generated by facial recognition technology should not be a stocked pond in which the authorities can always go fishing.