In the 2002 Steven Spielberg movie Minority Report, Tom Cruise plays John Anderton, a fugitive in a dystopian, film-noir future. As Anderton walks through a mall, he is haunted by targeted ads in full-motion video on digital billboards. The boards read Anderton’s retinas and scan his face, identify him, and call out “Hey, John Anderton!” – look at this Lexus, this new Bulgari fragrance, this special offer from Guinness!
Anderton appears brutalized as he and other passersby walk briskly and look straight ahead to avoid the digital catcalls around them. What was sci-fi in 2002 is reality in 2024. You’ve probably seen a digital billboard with vibrant animation and high production values. What’s not immediately apparent is that such billboards can also be interactive, based on face-scanning and the integration of mobile data exploited by the “out-of-home” advertising business. “Going about the world with the feeling that cameras are not just recording video but analyzing you as a person to shape your reality is an uncomfortable concept,” writes Big Brother Watch, a UK-based civil liberties and privacy organization, in its white paper The Streets Are Watching You. Big Brother Watch describes how this tracking works:
This tracking is enabled by cameras and facial recognition, and enhanced by the synthesis of consumers’ movement data, spatial data, and audience data, collected by our apps and reported to advertisers by our smartphones. Audience data is collected via mobile advertising IDs (MAIDs), which cross-reference behavior on one app with behavior on others and match those insights with tracking software to create a personal profile. While supposedly anonymized, MAIDs can be reverse-engineered to work out someone’s actual identity.

We have an additional concern about hyper-targeted advertising and advertising surveillance. This sector is raising billions of dollars in capital to build out an infrastructure of surveillance in the UK. If this practice spreads across the United States, the data generated could easily be accessed by the U.S. federal government to warrantlessly surveil Americans. After all, about a dozen U.S. agencies – ranging from the FBI to the IRS – already purchase Americans’ digital data from third-party data brokers and access it without warrants.

Congress can prevent this technology from being deployed in the United States. The U.S. Senate can take the next step by passing the Fourth Amendment Is Not For Sale Act, already passed by the House, which forbids the warrantless collection of Americans’ most personal and sensitive data. In the meantime, go to p. 35 of Big Brother Watch’s “The Streets Are Watching You” report to see how Apple iPhone and Android users can protect themselves from phone trackers and location harvesting.

We wouldn’t want to do what John Anderton did – have a technician pluck out our eyes and replace them with someone else’s. Replacing one’s face would presumably take a lot more work.

The Texas Observer reports that the Texas Department of Public Safety (DPS) signed a five-year, nearly $5.3 million contract for the Tangles surveillance tool, originally designed by former Israeli military officers to catch terrorists in the Middle East.
In its acquisition plan, DPS references the 2019 murder of 23 people at an El Paso Walmart, as well as shooting sprees in the Texas cities of Midland and Odessa. If Tangles surveillance stops the next mass shooter, that will be reason for all to celebrate.

But Tangles can do much more than spot shooters on the verge of an attack (assuming it can actually do that). It uses artificial intelligence to scrape data from the open, deep, and dark web, compiling a privacy-piercing profile of anyone it targets. Its WebLoc feature can track mobile devices – and therefore people – across a wide geofenced area.

It is unclear how DPS will proceed now that the Fifth Circuit Court of Appeals, in United States v. Jamarr Smith, has ruled that geofence warrants cannot be reconciled with the Fourth Amendment. If DPS does move forward, there will be nothing to keep the state’s warrantless access to personal data from migrating from searches for terrorists and mass shooters, to providing backdoor evidence in ordinary criminal cases, to buttressing cases with political, religious, and speech implications. As the great Texas writer Molly Ivins wrote: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.”

When we’re inside our car, we feel like we’re in our sanctuary. Only the shower is more private. Both are perfectly acceptable places to sing the Bee Gees’ “Stayin’ Alive” without fear of retribution.
And yet the inside of your car is not as private as you might think. We’ve reported on the host of surveillance technologies built into the modern car – from tracking your movements and current location, to proposed microphones and cameras to prevent drunk driving, to seats that report your weight. All this data is transmitted and can be legally sold by data brokers to commercial interests as well as to a host of government agencies. This data can also be misused by individuals, as when a woman going through divorce proceedings learned that her ex was stalking her by following the movements of her Mercedes.

Now another way to track our behavior and movements is being added through a national plan announced by the U.S. Department of Transportation called “vehicle-to-everything” technology, or V2X. Kimberly Adams of marketplace.org reports that this technology, to be deployed on 50 percent of the National Highway System and at 40 percent of the country’s intersections by 2031, will allow cars and trucks to “talk” to each other, coordinating to reduce the risk of collision. V2X will smooth out traffic in other ways as well, holding traffic lights green for emergency vehicles and sending out automatic alerts about icy roads.

V2X is also one more way to collect a big bucket of data about Americans that can be purchased and warrantlessly accessed by federal intelligence and law enforcement agencies. Sens. Ron Wyden (D-OR) and Cynthia Lummis (R-WY), and Rep. Ro Khanna (D-CA), have addressed what government can do with car data in proposed legislation, the “Closing the Warrantless Digital Car Search Loophole Act.” This bill would require law enforcement to obtain a warrant based on probable cause before searching data from any vehicle that does not require a commercial license. But the threat to privacy from V2X comes not just from cars that talk to each other, but also from the highway infrastructure that enables this digital conversation.
This addition to the rapid expansion of data collection on Americans is one more reason why the Senate should follow the example of the House and pass the Fourth Amendment Is Not For Sale Act, which would end the government’s warrantless collection of Americans’ purchased data. We can embrace technologies like V2X that can save lives, while at the same time making sure the personal information they collect about us is not retained and sold to snoops, whether government agents or stalkers.

The phrase “national security” harks back to the George Washington administration, but it wasn’t until the National Security Act of 1947 that the term was codified into law. This new law created the National Security Council, the Central Intelligence Agency, and much of the apparatus of what we today call the intelligence community. But the term itself – “national security” – was never defined.
What is national security? More importantly, what isn’t national security?

Daniel Drezner, a professor at the Fletcher School of Law and Diplomacy, writes in Foreign Affairs that it was the Bush-era “war on terror” that put the expansion of the national security agenda into overdrive. Since then, he writes, the “national security bucket has grown into a trough.” The term has become a convenient catch-all for politicians to show elevated concern about the issues of the day. Drezner writes: “From climate change to ransomware to personal protective equipment to critical minerals to artificial intelligence, everything is national security now.” He adds to this list the designation by the Heritage Foundation’s Project 2025 of big tech as a national security threat, and the National Security Strategy document, which says the same of “global food insecurity.” We would add the call by politicians in both parties to treat fentanyl as a matter of national security.

While some of these issues are clearly relevant to national security, Drezner’s concern is the strategic fuzziness that arises when everything is defined as a national security priority. He criticizes Washington’s tendency to “ratchet up” new issues like fentanyl distribution without removing any old issues to keep priorities few and urgent.

For our part, PPSA has a related concern – the expansion of the national security agenda has a nasty side effect on Americans’ privacy. When a threat is identified as a matter of national security, it also becomes a justification for the warrantless surveillance of Americans. It is one thing for the intelligence community to use, for example, FISA Section 702 authority for the purpose for which Congress enacted it – the surveillance of foreign threats on foreign soil. If fentanyl is a national security issue, then it is appropriate to surveil the Chinese labs that manufacture the drug and the Mexican cartels that smuggle it.
But Section 702 can also be used to warrantlessly inspect the communications of Americans when a crime is deemed a matter of national security. Evidence might also be warrantlessly extracted from the vast database of American communications, online searches, and location histories that federal agencies purchase from data brokers. So the surveillance state can now dig up evidence against Americans for prosecution in drug crimes without these defendants ever knowing how the evidence was developed – surely a fact relevant to their defense.

As the concept of national security becomes fuzzier, so too do the boundaries of what “crimes” can be targeted by the government with warrantless surveillance. “Trafficking” in critical minerals? Climate change violations? Repeating alleged foreign “disinformation”? When Americans give intelligence and law enforcement agents a probable-cause reason to investigate them, a warrant is appropriate. But the ever-expanding national security agenda presents a flexible pretext for the intelligence community to find ever more reasons to set aside the Constitution and spy on Americans without a warrant.

Drezner writes that “if everything is defined as national security, nothing is a national security priority.” True. And when everything is national security, everyone is subject to warrantless surveillance.

Imagine this scenario: It’s early evening, and you and your special someone are on the couch preparing to binge-watch your favorite streaming show.
Ding-dong. You answer the door and, as you hoped, it is the dinner delivery person. He hands you your prepaid, pre-tipped meal, and you start to shut the door when the delivery worker puts out his foot, blocking it. He snaps a picture over your shoulder and asks: “Why is the wall over your couch bare? It should have a picture of the Dear Leader. I now have no choice but to report you.”

This fantastical scenario of a police state enlisting food delivery workers as auxiliary police is taking place, for real, in the People’s Republic of China, according to disturbing reports from Radio Free Asia. Beijing recently posted a directive: “We will hire a group of online delivery personnel with a strong sense of responsibility to serve as part-time social supervisors and encourage them to take part in grassroots governance through snapshots and snap reports …”

Radio Free Asia reports that this program is being expanded in China’s annexed territory of Tibet, where food delivery workers are being recruited to perform “voluntary patrol and prevention work.” In addition, Chinese police are requiring Tibetans to revise the personal passwords on their social media accounts, link them to their personal cellphones and identity cards, and make it all accessible to the government. Police are also stopping Tibetans in Lhasa to check their cellphones for virtual private networks, or VPNs, which allow users to get around the “Great Firewall of China,” the government’s restrictive controls on the internet.

We can shake our heads and laugh. But the fundamental principle of co-opting private-sector industries for internal surveillance is one that is gaining purchase in our own country. The federal government isn’t so crude as to turn the Domino’s pizza delivery guy into a spy. But federal agencies can extract Americans’ personal data from FISA Section 702, even though this program was enacted by Congress not to spy on Americans, but to surveil foreign threats on foreign soil.
Prosecutors in the United States can extract information about witnesses and criminal defendants from telecoms and providers of email, cloud computing, and online search services, then gag those same companies with a non-disclosure order that keeps them from ever informing their customers they were surveilled.

The good news is that more and more Members of Congress are awakening to the threat of a home-grown American surveillance state. The recent reauthorization of Section 702 sets up a debate over the reach of this program in early 2026. The House passed a measure called the NDO Fairness Act, which would limit non-disclosure orders, putting the onus on the Senate to follow suit. The field of surveillance is one area in which public-private partnerships can go very wrong. Unlike China, however, America is still a democracy with a Congress that can counter expansive government threats to our privacy.

The U.S. Supreme Court will almost certainly take up and resolve two starkly opposed – some would say extreme – rulings by the Fourth and Fifth Circuit Courts of Appeals on the Fourth Amendment implications of geofence searches.
The Fourth Circuit ruled that geofence warrants – which search the mobile devices of many people in designated areas – carry no Fourth Amendment implications. The Fifth Circuit ruled that geofence warrants are inherently unconstitutional. This is the Grand Canyon of circuit splits. At stake are not just geofence warrants, but conceivably almost every kind of automated digital search conducted by the government. At stake, too, is the very meaning and viability of the Fourth Amendment in the 21st century.

We previously reported on the gobsmacking ruling of the Fourth Circuit in July, which held that a geofence warrant to identify a bank robber within a 17.5-acre area – encompassing thousands of innocent people living in apartments, staying at a nursing home, eating in restaurants, and passing by – did not implicate the privacy rights of all who were searched. In United States v. Chatrie, the court held in a split opinion that this mass geofence warrant had no Fourth Amendment implications whatsoever. In doing so, the Fourth Circuit reversed a well-reasoned opinion by federal Judge Mary Hannah Lauck, who wrote that citizens are almost all unaware that Google logs their location 240 times a day. Judge Lauck wrote: “It is difficult to overstate the breadth of this warrant.” The same overbreadth can be seen, in a very different context, in the Fourth Circuit’s jettisoning of the Fourth Amendment in its reversal.

Now the Fifth Circuit Court of Appeals has weighed in on a similar case, United States v. Jamarr Smith. The Fifth came to the opposite conclusion – that geofence warrants cannot be reconciled with the Fourth Amendment. Orin Kerr of the UC Berkeley School of Law argues that the Fifth’s ruling conflicts with Supreme Court precedent, including Carpenter v. United States, in which the Court held that the government needs a warrant to extract cellphone location data.
Kerr also asserts that searches in which a suspect’s identity is not known at the outset (indeed, that is the very reason for such searches) are a well-established practice recognized by the Supreme Court. Jennifer Granick and Brett Max Kaufman of the American Civil Liberties Union push back at Kerr, finding that the digital inspection of the data of large numbers of people to identify a needle-in-a-haystack suspect is, indeed, a “general warrant” forbidden by the Constitution. They write: “Considering the analog equivalents of this kind of dragnet helps explain why: For example, police might know that some bank customers store stolen jewelry in safe deposit boxes. If they have probable cause, police can get a warrant to look in a particular suspect’s box. But they cannot get a warrant to look in all the boxes – that would be a grossly overbroad search, implicating the privacy rights of many people as to whom there is no probable cause.”

The implications of this circuit split are staggering. If the Fourth Circuit’s ruling prevails, it will be anything goes in digital searches. If the Fifth Circuit’s ruling prevails, almost any kind of digital search will require a probable-cause warrant with the particularity the Constitution clearly requires. There will be no way for the U.S. Supreme Court to reconcile these opposite takes on digital warrants. It will be up to the Court to set a governing doctrine, one that examines at its root what constitutes a “search” in the context of 21st-century digital technology. Let us hope that when it does so, the Supreme Court will lean toward privacy and the Fourth Amendment.

Judges and District Attorneys Must Hide the Use of Stingrays, or Face the Wrath of the FBI (8/20/2024)
Cell-site simulators, often known by the trade name “stingrays,” are used by law enforcement to mimic cell towers, spoofing mobile devices into giving up their owners’ location and other personal data. Thousands of stingrays have been deployed around the country, fueled by federal grants to state and local police.
PPSA has long reported that the FBI severely restricts what local police and prosecutors can reveal about the use of stingrays in trials. Now we can report that these practices are continuing and interfere with prosecutors’ duty to participate in discovery and turn over potentially exculpatory evidence.

The government’s response to a PPSA FOIA request reveals a standard non-disclosure agreement between the federal government and state and local police departments. This template includes a directive that the locals “shall not, in any civil or criminal proceeding, use or provide any information concerning the [redacted] wireless collection equipment/technology.” This includes any documents and “evidentiary results obtained through the use of the equipment.” The agreement also states that if the agency “learns that a District Attorney, prosecutor, or a court” is considering releasing such information, the customer agency must “immediately notify the FBI in order to allow sufficient time for the FBI to intervene …” Most likely the squeeze will come as a threat to end the provision of stingrays to the state or local police, but other forms of intimidation cannot be ruled out.

Got that, judges and district attorneys? Any attempt to fully disclose how evidence was obtained, even if it would serve to clear a defendant, must be withheld from the public and defense attorneys – or the FBI will want a word with you.

“Quiet Skies” is a federal aviation security program that includes singling out flyers for close inspection by giving them an “SSSS,” or “Secondary Security Screening Selection,” designation on their boarding pass. In the case of Tulsi Gabbard, it is alleged that she was also put on a “terror threat list” that subjects her to intense surveillance as well.
You probably know Gabbard as an outspoken and iconoclastic former U.S. Representative from Hawaii who ran for president. During a slew of domestic flights after returning from a recent trip to Rome, Gabbard and her husband, Abraham Williams, were allegedly designated as security threats requiring enhanced observation. An Iraq War veteran who signed up after 9/11, Gabbard told Matt Taibbi of The Racket that she and her husband get third-degree inspections every time they go to the airport. Every inch of her clothing is squeezed. The lining of her rollaboard suitcase is patted down. Gabbard has to take out every personal electronic device and turn each one on, including her military-issue phone and computer. This process can take up to 45 minutes.

What may be happening in the air is far more worrisome. Sonya LaBosco, executive director of the advocacy group Air Marshals National Council, told Taibbi that Gabbard is on the TSA’s domestic terror watch list. Every time someone on that list travels, LaBosco said, that passenger is assigned two Explosive Canine Teams, one Transportation Security Specialist in explosives, and one plainclothes TSA Supervisor. Such passengers are also assigned three Federal Air Marshals to travel with them on every flight. LaBosco says that Gabbard’s recent three-flight tour would have required no fewer than nine Air Marshals to tail her and her husband. Taibbi writes that a 2019 Inspector General’s report revealed that one-half of the Air Marshals’ budget is wasted, and that much of the $394 million in funds for air security is put to questionable use.

In our personal experience, the “SSSS” designation can be randomly assigned. Judging from publicly available sources, it can also be algorithmically triggered by a host of activities deemed suspicious, such as flying out of Turkey, paying cash for plane tickets, and buying one-way tickets.
(We can only imagine what would happen to the brave or foolhardy person who bought a one-way ticket out of Istanbul with cash.)

To be fair, many complaints about the TSA that seem absurd have a basis in hard experience. That experience goes back to 1986, when an extra-close inspection by El Al security officers of a pregnant Irish nurse flying to meet her boyfriend in Jordan revealed that he had betrayed her by secreting a bomb in her bag. The TSA has to contend with the fact that anyone – a decorated war hero, a handicapped grandmother, a toddler – could be the unknowing carrier of a threat.

But the treatment of Gabbard raises the unavoidable question of whether this outspoken political figure was put on the SSSS list out of political pique. Gabbard has certainly irritated a lot of powerful people and agencies. In Congress, she advocated for dropping charges against Edward Snowden. As vice chair of the Democratic National Committee in 2016, she publicly criticized the party’s reliance on superdelegates and endorsed Bernie Sanders over Hillary Clinton. She later left the Democratic Party and was recently on the list of Donald Trump’s possible vice-presidential candidates. She has been a consistent critic of “elites” who want “nation-building wars.” Gabbard found herself on the threat list just after she left Rome, where she had called Vice President Kamala Harris “the new figurehead for the deep state.”

You might find Gabbard insightful or a flittering gadfly, but no one should be targeted for surveillance merely for expressing controversial views. And if Gabbard did somehow inadvertently trigger a threat algorithm, one has to wonder whether anyone in charge has the ability to apply common sense – if, in fact, such vast resources are being deployed to follow her. If that is true, even the most benign explanation reveals a diversion of manpower (and dogpower) that could be used to deter real threats.
A congressional investigation – perhaps by the Weaponization of the Federal Government subcommittee – is warranted to discover whether the facts reported by Taibbi are correct and, more importantly, whether Gabbard has been targeted for enhanced surveillance and harassment for her speech. After all, crazier things have happened – like Matt Taibbi finding himself targeted with a rare home visit from the IRS on the same day the journalist testified before Congress about federal meddling in social media curation.

Police have access to more than 71,000 surveillance cameras in New York City, and to more than 40,000 cameras in Los Angeles.
This technology is rapidly becoming ubiquitous from coast to coast. As it does, civil libertarians are shifting from outright opposition to public surveillance cameras – which increasingly seems futile – to advocating for policy guardrails that protect privacy. That American cities are going the way of London, where police cameras are on every street corner, is undeniable.

The Harvard Crimson reports that Cambridge, Massachusetts, is one of the latest cities to debate whether to allow police to deploy a network of surveillance cameras. The Cambridge Police Department was on the verge of installing highly visible cameras that would surveil the city’s major parks and even Harvard Yard when the city council suspended a vote after hearing from a prominent civil rights attorney. Even then, Emiliano Falcon-Morano of the Technology for Liberty Program at the Massachusetts ACLU seemed to bow to the inevitability of cameras. He recommended that this technology not be installed until the “police department addresses several important questions and concerns to ensure that it is deployed in a manner that conforms with civil rights and civil liberties.”

In Philadelphia, Dana Bazelon, a former criminal defense attorney and frequent critic of police intrusions into privacy, is now advocating for the expansion of surveillance cameras. As an advisor to the Philadelphia district attorney, Bazelon sees police cameras as the only way to stem gun violence. This turnabout prompted Reason’s J.D. Tuccille to accuse Bazelon of discarding “concerns about government abuses to endorse a wide-reaching surveillance state.” Tuccille notes how much easier surveillance cameras may make the job of policing. He archly quotes Charlton Heston’s character in Touch of Evil: “A policeman’s job is only easy in a police state.”

The argument in favor of public surveillance cameras is that when we step into the public square, we can expect to lose a degree of privacy.
After all, no law keeps an officer on patrol from glancing our way. What’s so bad about being seen by that same officer through a lens?

The answer, simply, is that camera networks do more than see. They record and transform faces into data. That data – combined with facial recognition software, with cell-site simulators that record our movements by tracking our cellphone location histories, and with social media posts that log our political views, religious beliefs, and personal lives – brings us within spitting distance of a police state. It is out of this concern that the Electronic Frontier Foundation has helpfully provided Americans with the ability to see the surveillance mechanisms unfolding in their communities through its Street Level Surveillance website.

Yet, whether we like it or not (and we don’t like it), ubiquitous camera surveillance by the police in almost every city is coming. It is coming because public surveillance is useful in solving so many crimes. As city leaders temporarily shelved the police surveillance proposal in Cambridge, a man in New York City was freed after serving 16 years in prison, exonerated by evidence from old surveillance footage. Arvel Marshall was railroaded in 2016 by a Brooklyn prosecutor who sat on the exonerating tape, which clearly showed someone else committing the murder for which Marshall was convicted.

There is no denying that, when the images are clear, surveillance footage can provide irrefutable identification of a criminal (or an exoneration, as in Marshall’s case). But the flip side is that the same technology, once it becomes networked and seamlessly integrated by AI, will give the powerful the means to track Americans with little more than a snap of the fingers or a click of the mouse – not just criminals, but protestors, political groups, journalists, and candidates.

As this new reality unfolds, questions emerge. How will police surveillance data be stored? How secure will it be from hackers? How long will it be kept?
Will it be networked with other forms of tracking, such as our purchased digital data, and combined by AI into total personal surveillance? Will this data be used to follow not just potential terrorists but Americans with criminal records in a predictive effort at “precrime”? Should technology be deployed that anonymizes the faces of everyone on a tape, with deanonymization or unmasking only at the hands of an authorized person? Should a warrant be required to watch a given crime or to unmask a face?

The terms of this new debate are changing as technology evolves in fast forward. But it is not too early to ask these questions and debate new policies, city by city, as well as in Congress.

United States v. Chatrie

We reported on the bold opinion of federal district Judge Mary Hannah Lauck of Virginia, who ruled in 2022 that the government erred by seeking a warrant for the location histories of every personal digital device within a 17.5-acre area around a bank that had been robbed in Richmond, Virginia, in 2019.
To identify the suspect, Okello Chatrie, law enforcement officials obtained a geofence warrant directed at Google, requesting location data for all devices within that large area. Swept into this mass surveillance – reminiscent of the “general warrants” of the colonial era – were people in restaurants, in an apartment complex, and in an elder-care facility, as well as innumerable passersby. Judge Lauck wrote that these consumers were almost all unaware that Google logs their location 240 times a day. She wrote: “It is difficult to overstate the breadth of this warrant,” adding that every person in the vicinity had “effectively been tailed.”

At times it almost seems that no good opinion goes upheld, at least where the Fourth Amendment is concerned. On July 9, the Fourth Circuit Court of Appeals reversed Judge Lauck’s decision in United States v. Chatrie. The court held that a geofence warrant covering a busy area around a bank robbery did not qualify as a Fourth Amendment search at all – a sweeping decision that has serious implications for privacy rights and law enforcement practices across the country. The two-judge majority concluded that the geofence warrant did not, after all, constitute a Fourth Amendment search because the collection of location data from such a broad geographic area, even a busy one, did not infringe upon reasonable expectations of privacy. Got that?

Judge J. Harvie Wilkinson III, writing for the majority, emphasized that the geofence warrant was a valuable tool for law enforcement in solving serious crimes. He wrote that the use of such warrants is necessary in an era when traditional investigative methods may be insufficient to address modern criminal activities. In a strongly worded dissent (beginning on p. 39), Judge James Andrew Wynn Jr. criticized the majority opinion, highlighting the potential dangers of allowing such broad warrants.
Judge Wynn, with solid logic and command of the relevant precedents, demonstrated that the decision undermines the Fourth Amendment’s protections and opens the door to pervasive surveillance. He showed that the geofence warrant lacked the particularity required by the Fourth Amendment: by allowing the collection of data from potentially thousands of innocent people, the warrant was not sufficiently targeted to the suspect. He emphasized that individuals have a reasonable expectation of privacy in their location data, even in public places, and that the widespread collection of such data without individualized suspicion poses significant privacy concerns. And Judge Wynn warned that the majority’s decision sets a dangerous precedent, ignoring the implications of Carpenter v. United States, the U.S. Supreme Court’s landmark 2018 opinion on location data.

So what, you might ask, is the harm of geofencing in this instance, which caught a suspect in a bank robbery? Answer: Enabling law enforcement to use geofence warrants in such a broad way will almost certainly lead to their use in a variety of novel contexts, such as political protests, that could implicate Americans’ rights to free speech and freedom of assembly. Judge Wynn’s dissent highlights the need for a careful balance between effective law enforcement and the preservation of civil liberties. While the majority’s decision underscores the perceived necessity of geofence warrants in modern investigations, the dissent serves as a poignant reminder of the constitutional protections at stake.

The Electronic Frontier Foundation reports that Chatrie’s lawyers are petitioning for an en banc hearing before the entire Fourth Circuit to review the case. PPSA supports that move, and we hope that if it happens, there are judges who take the same broad view as Judge Lauck and Judge Wynn.
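To make concrete why courts worry about the breadth of a geofence search, here is a minimal, illustrative sketch in Python. It is not any vendor’s actual interface – the data, device names, and field names are entirely hypothetical – but it shows the basic logic of such a query: every device whose location ping falls inside a radius during a time window is returned, with nothing to distinguish a suspect from a bystander.

```python
# Illustrative sketch of a geofence-style query over pooled location pings.
# All data and identifiers below are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    device_id: str   # pseudonymous identifier, e.g. a mobile advertising ID
    lat: float
    lon: float
    timestamp: int   # Unix epoch seconds

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence_hits(pings, center_lat, center_lon, radius_m, t_start, t_end):
    """Return every distinct device seen inside the fence during the window."""
    return sorted({
        p.device_id
        for p in pings
        if t_start <= p.timestamp <= t_end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    })

# Hypothetical scene: a robbery at (37.5407, -77.4360) between t=1000 and t=2000.
pings = [
    Ping("suspect", 37.5407, -77.4360, 1500),    # at the bank
    Ping("diner", 37.5409, -77.4365, 1200),      # restaurant next door
    Ping("resident", 37.5412, -77.4355, 1800),   # apartment across the street
    Ping("far-away", 37.6000, -77.5000, 1500),   # unrelated, outside the fence
]
print(geofence_hits(pings, 37.5407, -77.4360, 150, 1000, 2000))
# → ['diner', 'resident', 'suspect']
```

The point the Fifth Circuit stressed follows directly from the shape of the query: the filter is purely geometric and temporal, so everyone inside the fence is searched, not just the person the police are looking for.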
Earlier this year, students in a high school art class were called into a meeting with administrators to defend the contents of their art portfolios.
This happened after Lawrence High School in Lawrence, Kansas, signed a $162,000 contract with Gaggle safety software to review all student messages and files for issues of concern. Gaggle had flagged the digital files of the students’ art portfolio for containing nudity. The students vehemently protested that there was no nudity at all in their work. But it was a hard case to make considering that the files had already been removed from the students’ accounts, so the student artists themselves couldn’t refer to them. Max McCoy, a writer with the nonprofit news organization The Kansas Reflector, wrote that if you’re a Lawrence High student, “every homework assignment, email, photo, and chat on your school-supplied device is being monitored by artificial intelligence for indicators of drug and alcohol use, anti-social behavior, and suicidal inclinations.” The same is true of many American high schools from coast to coast. Gaggle claims to have saved an estimated 5,790 student lives from suicide between 2018 and 2023 by analyzing 28 billion student items and flagging 162 million for review. McCoy took a hard look at this incredibly specific number of lives saved, finding it hard to validate. Simply put, Gaggle counts each incident of flagged material that meets all safety criteria as a saved life. Still, it is understandable that school administrators would want to use any tool they could to reduce the potential for student suicide (the second-leading cause of death among Americans 15-19), as well as reduce the threat of school violence that has plagued the American psyche for decades now. But is an artificial intelligence surveillance regime like Gaggle the way to do it? McCoy likens Gaggle to the science-fictional “precrime” technology in the Philip K. Dick story and Steven Spielberg movie Minority Report. But could Gaggle technology in its actual use be more like the utterly dysfunctional totalitarian regime depicted in the classic movie Brazil?
McCoy reports that a cry for help from one student to a trusted teacher was intercepted and rerouted to an administrator with whom the student had no relationship. The editors of the Lawrence student paper, The Budget, are concerned about Gaggle’s intrusion into their newsgathering, notes, and other First Amendment-protected activities. McCoy quotes RAND researchers who recently wrote, “we found that AI based monitoring, far from being a solution to the persistent and growing problem of youth suicide, might well give rise to more problems than it seeks to solve.” It is one thing to keep tabs on student attitudes and behavior. Blanket spyware surveillance of all student messages and content looks pointlessly excessive. Worse, it trains the next generation of Americans to be inured to a total surveillance state. As the 2024 elections loom, legislative progress in Congress will likely come to a crawl before the end of meteorological summer. But some unfinished business deserves our attention, even if it should get pushed out to a lame-duck session in late fall or to the agenda of the next Congress.
One is a bipartisan proposal now under review that would forbid federal government agencies from strong-arming technology companies into providing encryption keys to break open the private communications of their customers. “Efforts to give the government back-door access around encryption is no different than the government pressuring every locksmith and lock maker to give it an extra key to every home and apartment,” said Erik Jaffe, President of PPSA. Protecting encryption is one of the most important pro-privacy measures Congress could take up now. Millions of consumers have enjoyed end-to-end encryption, from Apple iPhone data to communications apps like Telegram, Signal, and WhatsApp. This makes their communications relatively invulnerable to being opened by an unauthorized person. The Department of Justice has long demanded that companies, Apple especially, provide the government with an encryption key to catch wrongdoers and terrorists. The reality is that encryption protects people from harm. Any encryption backdoor is bound to get out into the wild. Encryption protects the abused spouse from the abuser. It protects children from malicious misuse of their messages. Abroad, it protects dissidents from tyrants and journalists from murderous cartels. At home, it even protects the communications of law enforcement from criminals. The case for encryption is so strong that the European Court of Human Rights rejected a Russian law that would have broken encryption because it would violate the human right to privacy. (Let us hope this ruling puts the brakes on recent measures in the UK and the EU to adopt similarly intrusive measures.) Yet the federal government continues to demand that private companies provide a key to their encryption.
The State of Nevada’s attorney general went to court to try to force Meta to stop offering encrypted messages on Facebook Messenger on the theory that it will protect users under 18, despite the evidence that breaking encryption exposes children to threats. PPSA urges the House to draft strong legislation protecting encryption, either as a bill or as an amendment. It is time for the people’s representatives to get ahead of the jawboning demands of the government to coerce honest businesses into giving away their customers’ keys.
The Quick Unlocking of Would-Be Trump Assassin’s Phone Reveals Power of Commercial Surveillance
7/18/2024
Since 2015, Apple’s refusal to grant the FBI a backdoor to its encrypted software on the iPhone has been a matter of heated debate. When William Barr was the U.S. Attorney General, he accused Apple of failing to provide “substantive assistance” to the FBI in breaking into criminals’ phones in the aftermath of mass shootings.
Then in 2020, the FBI announced it had broken into an Apple phone in just such a case. Barr said: “Thanks to the great work of the FBI – and no thanks to Apple …” Clearly, the FBI had found a workaround, though it took the bureau months to achieve it. Gaby Del Valle in The Verge offers a gripping account of the back-and-forth between law enforcement and technologists resulting, she writes, in the widespread adoption of mobile device extraction tools that now allow police to easily break open mobile phones. It was known that this technology, often using Israeli-made Cellebrite software, was becoming ever more widespread. Still, observers did a double-take when the FBI announced that its lab in Quantico, Virginia, was able, in just two days, to break into the phone of Thomas Matthew Crooks, who tried to assassinate former President Trump on Saturday. More than 2,000 law enforcement agencies in every state had access to such mobile device extraction tools as of 2020. The most effective of these tools cost between $15,000 and $30,000. It is likely, as with cell-site simulators that can spoof cellphones into giving up their data, that these phone-breaking tools are purchased by state and local law enforcement with federal grants. We noticed recently that Techdirt reported that for $100,000 you could have purchased a cell-site simulator of your very own on eBay. The model was old, vintage 2004, and is not likely to work well against contemporary phones. No telling what one could buy in a more sophisticated market. The takeaway is that the free market created encryption for customer safety, privacy, and convenience. The ingenuity of technologists, now responding to market demand from government agencies, is being used to tear down consumer encryption, one of their greatest achievements.
We reported earlier this month that Los Angeles police are alarmed at the proliferation of wireless cameras installed in bushes that allow criminals to remotely surveil homes targeted for burglaries.
Now police in Braintree, Massachusetts, have arrested two men and a woman in connection with a series of burglaries enabled by these remote, wireless cameras. One of the suspects, a Colombian man wearing all black and a mask, was arrested and charged with resisting arrest and assault and battery on a police officer, after attempting to flee when he was allegedly caught retrieving a wireless camera in front of a home that had been burgled. The three people arrested are, according to Braintree police, connected to a group known as the South American Theft Group, which uses extensive surveillance, GPS tracking technology, and counter-surveillance measures to analyze the comings and goings of their victims. The commoditization of spyware and the popularization of sophisticated plans for surveillance is driving this revolution in neighborhood crime. What can we do? In addition to the customary precautions of installing locks and alarms, outdoor lights, and security cameras, you should avoid posting advance notice of family vacations. Criminals are watching your social media posts as well. We often report on the disturbing growth of surveillance camera systems in the hands of government, whether it’s through expansion of networks at city intersections, or convincing citizens to hand over video from their Ring and other private camera systems. We’ve reported on police aiming a camera at a home to create a 24-hour stakeout over eight months.
Now a new threat is emerging – criminals are leveraging these same surveillance tools for stakeouts to determine the best time to clean out your house. For years, burglars have scouted out target homes by posing as salesmen or dressing up as repairmen or utility workers. But that required shoe leather and a certain degree of risk. A report by Nathan Solis of The Los Angeles Times uncovers a troubling trend in Southern California that is going nationwide – criminals are installing hidden cameras in residential yards. Burglars wreath these hidden cameras in plastic leaves and insert them into bushes to stake out unsuspecting homeowners’ yards, monitoring the comings and goings of family members in order to plan their crimes with precision. Wi-Fi jammers, illegal to use but easy to buy, are also often used to disable home security systems when the break-in does occur. In the face of such a threat, what can we do? The Times offers proactive steps you can take to protect against surveillance-enabled burglars. First, if you spot such a device you should alert police immediately, so law enforcement can track the secret trackers. You should have an electrician hardwire your burglary alarm with cables that go directly into your router so it cannot be turned off. Put a padlock on your circuit-breaker to further protect against someone turning off the power to your alarm system. Have lights activated by motion detectors and harden your points of entry. The Times also reports that police recommend placing Apple AirTags or some other tracker inside a few valuables to allow the police to track your items if they should be stolen. In any event, as with the deep infiltration of the phones of police and journalists by cartels with “zero-day” software, we should expect any new surveillance technology in the hands of the government and law enforcement will wind up in the hands of criminals as well.
We’ve long recounted the bad news on law enforcement’s use of facial recognition software – how it misidentifies people and labels them as criminals, particularly people of color. But there is good news on this subject for once: the Detroit Police Department has reached a settlement with a man falsely arrested on the basis of a bad match from facial recognition technology (FRT) that includes what many civil libertarians are hailing as a new national standard for police.
The list of injustices from false positives from FRT has grown in recent years. We told the story of Randall Reid, a Black man in Georgia, arrested for the theft of luxury goods in Louisiana. Even though Reid had never been to Louisiana, he was held in jail for a week. We told the story of Porcha Woodruff, a Detroit woman eight months pregnant, who was arrested in her driveway while her children cried. Her purported crime was – get this – a recent carjacking. Woodruff had to be rushed to the hospital after suffering contractions in her holding cell. Detroit had a particularly bad run of such misuses of facial recognition in criminal investigations. One of them was the arrest of Robert Williams in 2020 for the 2018 theft of five watches from a boutique store in which the thief was caught on a surveillance camera. Williams spent 30 hours in jail. Backed by the American Civil Liberties Union, the ACLU of Michigan, and the University of Michigan Civil Rights Litigation Initiative, Williams sued the police for wrongful arrest. In an agreement blessed by a federal court in Michigan, Williams received a generous settlement from the Detroit police. What is most important about this settlement agreement is the set of new rules Detroit has embraced. From now on:
Another series of reforms imposes discipline on the way in which lineups of suspects or their images unfold. When witnesses perform lineup identifications, they may not be told that FRT was used as an investigative lead. Witnesses must report how confident they are about any identification. Officers showing images to a witness must themselves not know who the real suspect is, so they don’t mislead the witness with subtle, non-verbal clues. And photos of suspects must be shown one at a time, instead of showing all the photos at once – potentially leading a witness to select the one image that merely has the closest resemblance to the suspect. Perhaps most importantly, Detroit police officers will be trained on the proper uses of facial recognition and eyewitness identification. “The pipeline of ‘get a picture, slap it in a lineup’ will end,” Phil Mayor, a lawyer for the ACLU of Michigan, told The New York Times. “This settlement moves the Detroit Police Department from being the best-documented misuser of facial recognition technology into a national leader in having guardrails in its use.” PPSA applauds the Detroit Police Department and ACLU for crafting standards that deserve to be adopted by police departments across the United States. As the adoption of Automated License Plate Readers (ALPRs) creates ubiquitous surveillance of roads and highways, the uses and abuses of these systems – which capture and store license plate data – received fresh scrutiny by a Virginia court willing to question Supreme Court precedent.
In Norfolk, 172 such cameras were installed in 2023, generating data on just about every citizen’s movements available to Norfolk police and shared with law enforcement in neighboring jurisdictions. Enter Jayvon Antonio Bell, facing charges of robbery with a firearm. In addition to alleged incriminating statements, the key evidence against Bell includes photographs of his vehicle captured by Norfolk’s Flock ALPR system. Bell’s lawyers argued that the use of ALPR technology without a warrant violated Bell’s Fourth and Fourteenth Amendment rights, as well as several provisions of the Virginia Constitution. The Norfolk Circuit Court, in a landmark decision, granted Bell's motion to suppress the evidence obtained from the license plate reader. This ruling, rooted in constitutional protections, weighs in on the side of privacy in the national debate over data from roadway surveillance. The court was persuaded that constant surveillance and data retention by ALPRs creates, in the words of Bell’s defense attorneys, a “dragnet over the entire city.” This motion to suppress evidence has the potential to reframe Fourth Amendment jurisprudence. The Norfolk court considered the implications of the Supreme Court opinion Katz v. United States (1967), which established that what a person knowingly exposes to the public is not protected by the Fourth Amendment. In its decision, the court boldly noted that technological advancements since Katz have expanded law enforcement's capabilities, making it necessary to re-evaluate the consequences for Fourth Amendment protections. The court also referenced a Massachusetts case in which limited ALPR use was deemed not to violate the Fourth Amendment. The Norfolk Circuit Court’s approach was again pioneering. The court found that the extensive network of the 172 ALPR cameras in Norfolk, which far exceeded the limited surveillance in the Massachusetts case, posed unavoidable Fourth Amendment concerns.
The Norfolk court also expressed concern about the lack of training requirements for officers accessing the system, and the ease with which neighboring jurisdictions could share data. Additionally, the court highlighted vulnerabilities in ALPR technology, citing research showing that these systems are susceptible to error and hacking. This is a bold decision by this state court, one that underscores the need for careful oversight and regulation of ALPR systems. As surveillance technology continues to evolve, this court’s decision to suppress evidence from a license plate reader is a sign that at least some judges are ready to draw a line around constitutional protections in the face of technological encroachment. George Orwell is often credited with writing that in a time of deceit, telling the truth is a revolutionary act.
Revolutionary acts of truth-telling are becoming progressively more dangerous around the world. This is especially true as autocratic countries and weak democracies purchase AI software from China to weave together surveillance technology to comprehensively track individuals, following them as they meet acquaintances and share information. A piece by Abi Olvera posted by the Bulletin of the Atomic Scientists describes this growing use of AI to surveil populations. Olvera reports that by 2019, 56 out of 176 countries were already using artificial intelligence to weave together surveillance data streams. These systems are increasingly being used to analyze the actions of crowds, track individuals across camera views, and pierce the use of masks or scramblers intended to disguise faces. The only impediment to effective use of this technology is the frequent Brazil-like incompetence of domestic intelligence agencies. Olvera writes: “Among other things, frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins, thereby bolstering their authority. These systems offer cash-strapped autocracies and weak democracies the deterrent power of a police or military patrol without needing to pay for, or manage, a patrol force …” Olvera quotes AI surveillance expert Martin Beraja, who says AI can enable autocracies to “end up looking less violent because they have better technology for chilling unrest before it happens.” Olivia Solon of Bloomberg reports on the uses of biometric identifiers in Africa, which are regarded by the United Nations and World Bank as a quick and easy way to establish identities where licenses, passports, and other ID cards are hard to come by. But in Uganda, Solon reports, President Yoweri Museveni – in power for nearly 40 years – is using this system to track his critics and political opponents of his rule.
Used to catch criminals, biometrics is also being used to criminalize Ugandan dissidents and rival politicians for “misuse of social media” and sharing “malicious information.” The United States needs to lead by example. As our facial recognition and other systems grow in ubiquity, Congress and the states need to demonstrate our ability to impose limits on public surveillance, and legal guardrails for the uses of the sensitive information they generate. In the early 1920s, revenue agents staked out a South Carolina home they suspected was being used as a distribution center for moonshine whiskey. The revenue agents were in luck. They saw a visitor arrive to receive a bottle from someone inside the house. The agents moved in. The son of the home’s owner, a man named Hester, realized that he was about to be arrested and sprinted with the bottle to a nearby car, picked up a gallon jug, and ran into an open field.
One of the agents fired a shot into the air, prompting Hester to toss the jug, which shattered. Hester then threw the bottle in the open field. Officers found that a large fragment of the broken jug and the discarded bottle both contained moonshine whiskey. This was solid proof that moonshine was being sold. But was it admissible as evidence? After all, the revenue agents did not have a warrant. This case eventually wound its way to the Supreme Court. In 1924, a unanimous Court, presided over by Chief Justice (and former U.S. President) William Howard Taft, held that the Fourth Amendment did not apply to this evidence. Justice Oliver Wendell Holmes, writing the Court’s opinion, declared that “the special protection accorded by the Fourth Amendment to the people in their ‘persons, houses, papers and effects,’ is not extended to the open field.” This principle was later extended to exclude any garbage that a person throws away from Fourth Amendment protections. As strange as it may seem, this case about broken jugs and moonshine from the 1920s, Hester v. United States, provides the principle by which law enforcement officers freely help themselves to the information inside a discarded or lost cellphone – text messages, emails, bank records, phone calls, and images. We reported a case in 2022 in which a Virginia man was convicted of crimes based on police inspection of a cellphone he had left behind in a restaurant. That man’s attorney, Brandon Boxler, told the Daily Press of Newport News that “cellphones are different. They have massive storage capabilities. A search of a cellphone involves a much deeper invasion of privacy. The depth and breadth of personal and private information they contain was unimaginable in 1924.” In Riley v. California, the Supreme Court in 2014 held that a warrant was required to inspect the contents of a suspect’s cellphone. But the Hester rule still applies to discarded and lost phones.
They are still subject to what Justice Holmes called the rules of the open field. The American Civil Liberties Union, ACLU Oregon, the Electronic Privacy Information Center, and other civil liberties organizations are challenging this doctrine before the Ninth Circuit in Hunt v. United States. They told the court that it should not use the same reasoning that has historically applied to garbage left out for collection and items discarded in a hotel wastepaper basket. “Our cell phones provide access to information comparable in quantity and breadth to what police might glean from a thorough search of a house,” ACLU said in a posted statement. “Unlike a house, though, a cell phone is relatively easy to lose. You carry it with you almost all the time. It can fall between seat cushions or slip out of a loose pocket. You might leave it at the check-out desk after making a purchase or forget it on the bus as you hasten to make your stop … It would be absurd to suggest that a person intends to open up their house for unrestrained searches by police whenever they drop their house key.” Yet that is the government position on lost and discarded cellphones. PPSA applauds and supports the ACLU and its partners for taking a strong stand on cellphone privacy. The logic of extending special protections to cellphones, which the Supreme Court has held contain the “privacies of life,” is obvious. It is the government’s position that tastes like something cooked up in a still.
State of Alaska v. McKelvey
We recently reported that the Michigan Supreme Court punted on the Fourth Amendment implications in a case involving local government’s warrantless surveillance of a couple’s property with drone cameras. This was a disappointing outcome, one in which we had filed an amicus brief on behalf of the couple.
But other states are taking a harder look at privacy and aerial surveillance. In another recent case, the Alaska Supreme Court in State v. McKelvey upheld an appeals court ruling that the police needed to obtain a warrant before using an aircraft with officers armed with telephoto lenses to see whether a man was cultivating marijuana in the backyard of his home near Fairbanks. In a well-reasoned opinion, Alaska’s top court found that this practice was “corrosive to Alaskans’ sense of security.” The state government had argued that the observations did not violate any reasonable expectation of privacy because they were made with commercially available, commonly used equipment. “This point is not persuasive,” the Alaska justices responded. “The commercial availability of a piece of technology is not an appropriate measure of whether the technology’s use by the government to surveil violates a reasonable expectation of privacy.” The court’s reasoning is profound and of national significance: “If it is not a search when the police make observations using technology that is commercially available, then the constitutional protection against unreasonable searches will shrink as technology advances … As the Seventh Circuit recently observed, that approach creates a ‘precarious circularity.’ Adoption of new technologies means ‘society’s expectations of privacy will change as citizens increasingly rely on and expect these new technologies.’” That is as succinct a description of the current state of privacy as any we’ve heard. The court found that “few of us anticipated, when we began shopping for things online, that we would receive advertisements for car seats and burp cloths before telling anyone there was a baby on the way.” We would add that virtually no one in the early era of social media anticipated that federal agencies would use it to purchase our most intimate and sensitive information from data brokers without warrants.
The Alaska Supreme Court sees the danger of technology expansion with drones, which it held is corrosive to Alaskans’ sense of security. As we warned, drones are becoming ever cheaper, sold with combined sensor packages that can be not only deeply intrusive across a property, but actually able to penetrate into the interior of a home. The Alaska opinion is an eloquent warning that when it comes to the loss of privacy, we’ve become the proverbial frog, allowing ourselves to become comfortable with being boiled by degrees. This opinion deserves to be nationally recognized as a bold declaration against the trend of ever-expanding technology and ever-shrinking zones of privacy. Katie King in the Virginian-Pilot offers an in-depth account of the growing dependency of local law enforcement agencies on Flock Safety cameras, mounted on roads and intersections to catch drivers suspected of crimes. With more than 5,000 police agencies across the nation using these devices, the privacy implications are enormous.
Surveillance cameras have been in the news a lot lately, often in a positive light. Local news is filled with murder suspects and porch pirates alike captured on video. The recently released video of a physical attack by rapper Sean “Diddy” Combs on a girlfriend several years ago has saturated media, reminding us that surveillance can protect the vulnerable. The crime-solving potential of license plate readers is huge. Flock’s software runs license plate numbers through law enforcement databases, allowing police to quickly track a stolen car, locate suspects fleeing a crime, or find a missing person. With such technologies, Silver and Amber alerts might one day become obsolete. As with facial recognition technology, however, license plate readers can produce false positives, ensnaring innocent people in the criminal justice system. King recounts the ordeal of an Ohio man who was arrested by police with drawn guns and a snarling dog. Flock’s license plate reader had falsely flagged his vehicle as having stolen tags. The good news is that Flock insists it is not even considering combining its network with facial recognition technology – reducing the possibility of both technologies flagging someone as dangerous. As with so many surveillance technologies, the greater issue with license plate readers is not the technology itself, but how it might be used in a network. “There’s a simple principle that we’ve always had in this country, which is that the government doesn’t get to watch everybody all the time just in case somebody commits a crime – the United States is not China,” Jay Stanley, a senior analyst with the American Civil Liberties Union, told King. “But these cameras are being deployed with such density that it’s like GPS-tracking everyone.” License plate readers could, conceivably, be networked to track everywhere that everyone goes – from trips to mental health clinics, to gun stores, to houses of worship, and protests.
With so many federal agencies already purchasing Americans’ sensitive data from data brokers, creating a national network of drivers’ whereabouts is just one more addition to what is already becoming a national surveillance system. With apologies to Jay Stanley, we are in serious danger of becoming China. As massive databases compile facial recognition, location data, and now driving routes, we need more than ever to head off the combination of all these measures. A good place to start would be for the U.S. Senate to follow the example of the House by passing the Fourth Amendment Is Not For Sale Act. The City of Denver is reversing its previous stance against the use of police drones. The city is now buying drones to explore the effectiveness of replacing many police calls with remote aerial responses. A Denver police spokesman said that on many calls the police department will send drones first, officers second. When operators of drones see that a call was a false alarm, or that a traffic issue has been resolved, the police department will be free to devote scarce resources to more urgent priorities.
Nearby Arapahoe County already has a fleet of 20 such drones operated by 14 pilots. Arapahoe has successfully used drones to follow suspects fleeing a crime, provide live-streamed video and mapping of a tense situation before law enforcement arrives, and to look for missing people. In Loveland, Colorado, a drone was used to deliver a defibrillator to a patient before paramedics were able to get to the scene. The use of drones by local law enforcement as supplements to patrol officers is likely to grow. And why not? It makes sense for a drone to scout out a traffic accident or a crime scene for police. But as law enforcement builds more robust fleets of drones, they could be used not just to assess the seriousness of a 911 call, but to provide the basis for around-the-clock surveillance. Modern drones can deliver intimate surveillance that is more invasive than traditional searches. They can be packed with cell-site simulator devices to extract location and other data from cellphones in a given area. They can loiter over a home or peek in someone’s window. They can see in the dark. They can track people and their activities through walls by their heat signatures. Two or more cameras combined can work in stereo to create 3D maps inside homes. Sensor fusion between high-definition, fully maneuverable cameras can put all these together to essentially give police an inside look at a target’s life. Drones with such high-tech surveillance packages can be had on the market for around $6,000. As with so many other forms of surveillance, the modest use of this technology sounds sensible, until one considers how many other ways they can be used. Local leaders at the very least need to enact policies that put guardrails on these practices before we learn, the hard way, how drones and the data they generate can be misused.
A report by The New York Times’ Vivian Wang in Beijing and one by Tech Policy’s Marwa Sayed in New York describe the twin strategies for surveilling a nation’s population, in the United States as well as in China.
Wang chronicles the move by China’s dictator, Xi Jinping, to round out the pervasive social media and facial recognition surveillance capability of the state by bringing back Mao-era human snitching. Wang writes that Xi wants local surveillance that is “more visible, more invasive, always on the lookout for real or perceived threats. Officers patrol apartment buildings listening for feuding neighbors. Officials recruit retirees playing chess outdoors as extra eyes and ears. In the workplace, employers are required to appoint ‘safety consultants’ who report regularly to the police.” Xi, Wang reports, explicitly links this new emphasis on human domestic surveillance to the era when “the party encouraged residents to ‘re-educate’ purported political enemies, through so-called struggle sessions where people were publicly insulted and humiliated …”

Creating a society of snitches supports the vast network of social media surveillance, in which every “improper” message or text can be reviewed and flagged by AI. Chinese citizens are already followed everywhere by location beacons and a national network of surveillance cameras and facial recognition technology.

Marwa Sayed writes about the strategy of technology surveillance contained in several bills in New York State. One bill in the state legislature would force the owners of driver-for-hire vehicles to install rear-facing cameras in their cars, presumably capturing passengers’ private conversations. Another state bill would mandate surveillance cameras at racetracks to monitor human and equine traffic, watching over people in their leisure time. “Legislators seem to have decided that the cure to what ails us is a veritable panopticon of cameras that spares no one and reaches further and further into our private lives,” Sayed writes.
She notes another measure before the New York City Council that would require the Department of Sanitation to install surveillance cameras to counter the insidious threat of people putting household trash into public litter baskets. Sayed writes: “As the ubiquity of cameras grows, so do the harms. Research shows that surveillance and the feeling it creates of constantly being watched leads to anxiety and paranoia. People may start to feel there is no point to personal privacy because you’ll be watched wherever you go. It makes us wary about taking risks and dampens our ability to interact with one another as social creatures.”

Without quite meaning to, federal, state, and local authorities are merging the elements of a national surveillance system. This system draws on agencies’ purchases of our sensitive, personal information from data brokers, as well as on increasingly integrated camera, facial recognition, and other surveillance networks.

And don’t think that organized human snitching can’t come to these shores. During World War One, the federal government authorized approved citizens to join neighborhood watch groups with badges inscribed with the words “American Protection League – Secret Service.” At a time when Americans were sent to prison for opposing the war, the American Protection League kept tabs on neighbors, always on the lookout for anyone who seemed insufficiently enthusiastic about the war. Americans could be reported to the Department of Justice for listening to Beethoven on their phonographs or checking out books about German culture from the library.

Today, large numbers of FBI and other government employees secretly “suggest” that social media companies remove posts containing “disinformation.” They monitor social media to track the posts of people – whether targeted by the FBI as traditional Catholics or as observant Muslims – for signs of extremism.
As world tension grows between the United States and China, Russia, Iran, and North Korea, something like the American Protection League might be resurrected soon in response to a foreign policy crisis. Its digital ghost is already watching us.

Suspect: “We Have to Follow the Law. Why Don’t They?”

Facial recognition software is useful but fallible. It often leads to wrongful arrests, especially given the software’s tendency to produce false positives for people of color.
We reported in 2023 on the case of Randall Reid, a Black man in Georgia, arrested and held for a week by police for allegedly stealing $10,000 of Chanel and Louis Vuitton handbags in Louisiana. Reid was traveling to a Thanksgiving dinner near Atlanta with his mother when he was arrested. He was three states and seven hours away from the scene of the crime, in a state in which he had never set foot.

Then there is the case of Porcha Woodruff, a 32-year-old Black woman who was arrested in her driveway for a recent carjacking and robbery. She was eight months pregnant at the time, far from the profile of the carjacker. She suffered great emotional distress, as well as spasms and contractions, while in jail.

Some jurisdictions have reacted to the spotty nature of facial recognition by requiring every purported “match” to be evaluated by a large team to reduce human bias. Other jurisdictions, from Boston to Austin and San Francisco, have responded to the technology’s flaws by banning its use altogether.

The Washington Post’s Douglas MacMillan reports that officers of the Austin Police Department have developed a neat workaround for that ban. Austin police have asked law enforcement in the nearby town of Leander to conduct face searches for them at least 13 times since Austin enacted its ban. Tyrell Johnson, a 20-year-old man who is a suspect in a robbery case due to a facial recognition workaround by Austin police, told MacMillan, “We have to follow the law. Why don’t they?”

Police in other cities are accused of working around bans by posting “be on the lookout” flyers in neighboring jurisdictions, which critics say are meant to be picked up and run through facial recognition systems by other police departments or law enforcement agencies. MacMillan’s interviews with defense lawyers, prosecutors, and judges revealed the core problem with the use of this technology – employing facial recognition to generate leads but not evidence.
They told him that in most jurisdictions prosecutors are not required to inform criminal defendants that they were identified using an algorithm. This highlights the larger problem with high-tech surveillance in all its forms: improperly accessed data, reviewed without a warrant, can allow investigators to work backwards to incriminate a suspect. Many criminal defendants never discover the original “evidence” that led to their prosecution, and thus can never challenge the basis for their case. This “backdoor search loophole” is the greater risk, whether one is dealing with databases of mass internet communications or with facial recognition. Thanks to this loophole, Americans can be accused of crimes but left in the dark about how the cases against them were started.

The long back-and-forth between Michigan’s Long Lake Township and Todd and Heather Maxon has ended with the Michigan Supreme Court punting on the Fourth Amendment implications of drone surveillance over private property.
An appellate court had held that the township’s warrantless use of a drone three times in 2017 to photograph the Maxons’ property was an unreasonable, warrantless search, constituting a Fourth Amendment violation. PPSA filed a brief supporting the Maxons before the Michigan Supreme Court, alerting the court to the danger of intimate searches of homes and residents by relatively inexpensive drones now on the market.

To demonstrate the privacy threat of drones, PPSA informed the court that commercially available drones carry thermal cameras that can see beyond what is visible to the naked eye. They can be equipped with animal-herd-tracking algorithms that can be repurposed to enhance the surveillance of people. Drones can swarm and loiter, providing round-the-clock surveillance. They can carry lightweight cell-site simulators that prompt the mobile phones of people inside a targeted home to give up data that reveals deeply personal information. Furthermore, PPSA’s brief states that drones “can see around walls, see in the dark, track people by heat signatures, and recognize and track specific people by their face.” PPSA agreed that even ordinary photography from a camera hovering over the Maxons’ property violated, in the words of the appellate court, the Maxons’ reasonable expectation of privacy.

But in a unanimous decision, Michigan’s top court was having none of this. It concluded that the exclusionary rule – the judicial doctrine under which improperly obtained evidence is suppressed – generally applies only when law enforcement violates a defendant’s constitutional rights in a criminal case. The justices remanded the case on a procedural issue unrelated to the Fourth Amendment question.
The Michigan Supreme Court therefore declined to address “whether the use of an aerial drone under the circumstances presented here is an unreasonable search in violation of the United States or Michigan Constitutions.” A crestfallen Todd Maxon responded, “Like every American, I have a right to be secure on my property without being watched by a government drone.”

At issue between the township and the Maxons was the contention that, behind the shelter of trees, the couple was growing a salvage operation. This would violate an earlier settlement agreement in which the Maxons had pledged not to keep a junkyard on their five-acre property.

Given the potential for drones to use imaging and sensor technology to violate the intimate lives of families, it is all but inevitable that a better – and uglier – test case will come along. If anything, this ruling makes it a virtual certainty.