In the 2002 Steven Spielberg movie Minority Report, Tom Cruise plays John Anderton, a fugitive in a dystopian, film-noir future. As Anderton walks through a mall, he is haunted by targeted ads in full-motion video on digital billboards. The boards read Anderton’s retinas and scan his face, identify him, and call out “Hey, John Anderton!” – look at this Lexus, this new Bulgari fragrance, this special offer from Guinness!
Anderton appears brutalized as he and other passersby walk briskly and look straight ahead to avoid the digital catcalls around them.

What was sci-fi in 2002 is reality in 2024. You’ve probably seen a digital billboard with vibrant animation and high production values. What’s not immediately apparent is that these billboards can also be interactive, based on face-scanning and the integration of mobile data exploited by the “out-of-home” advertising business.

“Going about the world with the feeling that cameras are not just recording video but analyzing you as a person to shape your reality is an uncomfortable concept,” writes Big Brother Watch, a UK-based civil liberties and privacy organization, in a white paper, The Streets Are Watching You. Some examples from Big Brother:
This tracking is enabled by cameras and facial recognition and enhanced by the synthesis of consumers’ movement data, spatial data, and audience data, collected by our apps and reported to advertisers by our smartphones. Audience data is collected by mobile advertising IDs (MAIDs), which cross-reference behavior on one app with behavior on others and match those insights with tracking software to create a personal profile. While supposedly anonymized, MAIDs can be reverse engineered to work out someone’s actual identity.

We have an additional concern about hyper-targeted advertising and advertising surveillance. This sector is raising billions of dollars in capital to build out an infrastructure of surveillance in the UK. If this practice also spreads across the United States, the data generated could easily be accessed by the U.S. federal government to warrantlessly surveil Americans. After all, about a dozen U.S. agencies – ranging from the FBI to the IRS – already purchase Americans’ digital data from third-party data brokers and access it without warrants.

Congress can prevent this technology from being unfurled in the United States. The U.S. Senate can also take the next step by passing the Fourth Amendment Is Not For Sale Act, already approved by the House, which forbids the warrantless collection of Americans’ most personal and sensitive data. In the meantime, go to p. 35 of Big Brother’s “The Streets Are Watching You” report to see how Apple iPhone and Android users can protect themselves from phone trackers and location harvesting.

We wouldn’t want to do what John Anderton did – have a technician pluck out our eyes and replace them with someone else’s. Replacing one’s face would presumably take a lot more work.

Neil C. Hughes has a compelling piece in cybernews.com describing an Orwellian reality that, unfortunately, is not a matter of science fiction. It is already part of our daily lives. Hughes writes:
“The constant tracking from our devices, websites, social media platforms, CCTV, and even your employer might be leaving you feeling like you are trapped inside a personalized version of The Truman Show.” At home, images and data from digital assistants, Ring Doorbell surveillance partnered with police departments, smart appliances, heart rate monitors, and even washing machines produce information that “could be used against you by digital forensics teams should you find yourself accused of a crime.” At work, you are tracked by productivity tools, and on the streets by cameras and facial recognition. Banks watch our “every transaction to monitor for fraud or money laundering.” Hughes adds: “After you finally return home and collapse in your favorite chair to unwind, you are not necessarily paranoid if you question whether you’re watching your TV or if it’s watching you. Some new smart TVs have cameras typically hidden in a bezel at the top of the TV screen, leaving many to think there is nowhere to hide from the watchful eye of cameras and algorithms.”

While partisan control of the U.S. Senate balances on a knife’s edge, also at stake is whether that body will have more surveillance reformers and protectors of privacy, or more defenders of the government surveillance status quo. We find no partisan correlation between the reformers and the defenders. Some of the most liberal/progressive and conservative candidates support reform of government surveillance programs to protect the Fourth Amendment rights of Americans and their privacy. The same diversity exists among those who stoutly defend the government’s supposed “right” to warrantlessly surveil Americans.

You can review the PPSA Scorecard to see how your Senators (and Representative) fare in our ratings. We rate candidates on a grading scale from F to A+ (see details below). Here we apply these grades to eight of the closest or most-watched races for the U.S. Senate in 2024. We usually rate only the incumbent in each race because most opponents either have no voting record to score or, if an opponent was previously a Member of Congress, his or her votes are usually too far in the past to be relevant.

***Not pictured above is former Rep. Debbie Mucarsel-Powell (D), who scored a D in the 116th Congress (2019-2021).

We should note that the last Senate candidate has an exceptionally troubling record on privacy and government surveillance. Rep. Adam Schiff, former House Intelligence Committee Chairman, is now running for the open Senate seat in California and polls show him with a comfortable lead. Should Schiff come to represent all the people of California, we hope he will “see the light” and become an advocate for his constituents’ privacy.

In all races, voters, volunteers, and campaign donors select their candidates based on their stances on many issues. PPSA hopes that, in the coming election, you will consider your candidates’ stance on vital issues of surveillance and privacy. These include:
Again, please refer to our Scorecard for the records of other Members. As the 20th century Chicago columnist Sydney J. Harris observed: “Democracy is the only system that persists in asking the powers that be whether they are the powers that ought to be.”

Here are the details of our grading system:

“A+” = Members who voted for every major pro-privacy amendment or bill
“A” = Members who voted for privacy on 80 to 99 percent of the votes
“B” = Members who voted for privacy on 60 to 79 percent of the votes
“C” = Members who voted for privacy on 40 to 59 percent of the votes
“D” = Members who voted for privacy on 20 to 39 percent of the votes
“F” = Members who voted for privacy on 0 to 19 percent of the votes

The year is far from over, and the U.S. House of Representatives has already had a banner year on privacy and surveillance reform. The House passed the Fourth Amendment Is Not for Sale Act, which would curb the purchases of Americans’ data by government agencies. It also passed the PRESS Act, which gives reporters and their sources protection from the prying eyes of prosecutors. Finally, the House came within one vote of passing a measure to require the government to obtain a warrant before accessing Americans’ personal communications caught up in the global trawl of foreign surveillance programs authorized by FISA Section 702.

But will the House of the 119th Congress be able to improve on these bold, pro-privacy stands? In our PPSA Scorecard we rate how all representatives (and senators) have voted on pro-privacy amendments or bills. Below are incumbents’ ratings from the 22 closest House races. Here is how we evaluated these Members by their votes:
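The grading bands above reduce to a simple percentage-to-letter lookup. As a minimal, hypothetical sketch of that arithmetic (an illustration only, not our published scoring code):

```python
def privacy_grade(pro_privacy_votes: int, total_votes: int) -> str:
    """Map a Member's pro-privacy voting record to a letter grade.

    Illustrative only: mirrors the bands described above (A+ requires a
    perfect record; A covers 80-99 percent; F covers 0-19 percent).
    """
    if total_votes == 0:
        return "N/A"  # no scorable record, e.g., a challenger with no votes
    pct = 100 * pro_privacy_votes / total_votes
    if pct == 100:
        return "A+"
    if pct >= 80:
        return "A"
    if pct >= 60:
        return "B"
    if pct >= 40:
        return "C"
    if pct >= 20:
        return "D"
    return "F"

# Example: 7 pro-privacy votes out of 10 scorable votes earns a "B".
print(privacy_grade(7, 10))
```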
PPSA hopes that in the coming election, you will consider your candidates’ stance on vital issues of surveillance and privacy. Please refer to our Scorecard for the records of other Members. And don’t be shy about expressing your views on privacy and surveillance reform to your candidates. As Abraham Lincoln said: “If the people turn their backs to a fire they will burn their behinds, and they will just have to sit on their blisters.”

Does Congress have oversight of the federal intelligence community, or do the spies and intelligence officials have oversight of Congress?
Under our Constitution, the answer should be obvious – the legislative branch oversees executive agencies. Besides, no American should want spies and intelligence officials looking over the shoulders of our elected representatives. That is why the founders established Congress in Article One of the Constitution. And yet, at times, it seems as if the intelligence community regards oversight of Congress as its legitimate business.

We learned last year that Jason Foster, the former chief investigative counsel for Sen. Chuck Grassley – Ranking Member of the Senate Judiciary Committee – is among numerous staffers and Congressional lawyers, Democrats and Republicans, who had their personal phone and email records searched by the Department of Justice in 2017. Foster later founded Empower Oversight Whistleblowers & Research, which went to court to press for disclosure of the misuse of Justice’s subpoena power that risked identifying confidential whistleblowers who provided information to Congress about governmental misconduct.

Now federal Judge James E. Boasberg has ordered the partial unsealing of a Non-Disclosure Order (NDO) application filed by the Department of Justice to prevent Google from notifying users like Foster that their phone records, email, and other communications were ransacked by the Justice Department. This is a significant victory for transparency. We eagerly await the contents of the unsealed NDO application for clues about the Justice Department’s intentions in spying on Congressional attorneys with oversight responsibility.

In the meantime, PPSA continues to use every legal means to press a Freedom of Information Act request seeking documents on “unmasking” and other forms of surveillance of 48 current and former House and Senate Members on committees that oversee the intelligence agencies. We will alert you about any further revelations from the court.

Meanwhile, the Senate can do its part by following up on the unanimous passage of the Non-Disclosure Order Fairness Act by the House. This bill restricts the government’s currently unlimited ability to impose gag orders on telecom and digital companies. These gag orders keep these companies’ customers from learning that their sensitive, personal information has been surveilled by the government. As Congress learns about the degree to which its Members are being watched by the executive branch, the NDO Fairness Act should be more popular than ever.

In George Orwell’s Nineteen Eighty-Four, the walls of every domicile in Oceania bristle with microphones and cameras that catch the residents’ every utterance and action. In 2024, we have done Big Brother’s work for him. We have helpfully installed microphones and cameras around the interior of our homes, embedded in our computers, laptops, smartphones, and tablets. Might someone be selling our conversations to companies and the federal government without our consent?
Few worry about this because of explicit promises by tech companies not to enable their microphones to be used against us. Google, Amazon, and Meta are firm in denying that they eavesdrop on us. For example, Meta states that “sometimes ads can be so specific, it seems like we must be listening to your conversations through our microphones, but we’re not.”

Still, many of us have had the spooky sensation of talking about something random but specific – perhaps a desire to buy a leather couch or take a trip to Cancun – only to find our social media feeds littered with ads for couches and resorts in Cancun. The tech companies’ explanation for this is that we sometimes perform online searches for things, forget about them, and then mistakenly attribute the ads in our social media feeds to a conversation. We hope that’s the case. But now we’re not so sure.

404 Media has acquired a slide deck from Cox Media Group (CMG) that claims its “Active-Listening” software can combine AI with our private utterances captured by 470-plus sources to “improve campaign deployment, targeting and performance.” One CMG slide says, “processing voice data with behavioral data identifies an audience who is ‘ready to buy.’” CMG claims to have Meta’s Facebook, Google, and Amazon as clients.

After this story broke, the big tech companies stoutly denied that they engage in this practice and expressed their willingness to act against any marketing partner that eavesdrops. This leaves open the possibility that CMG and other actors are gathering voice data from microphones other than those of their big tech clients. What these marketers want to do is to predict what we will want and send us an ad at the precise time we’re thinking about a given product.

The danger is that this same technology in the hands of government could be used to police people at home. This may sound outlandish. Yet consider that a half-dozen federal agencies – ranging from the FBI to the IRS – already routinely purchase our geolocation, internet activity, and other sensitive information we generate on our social media platforms – and then access it freely, without a warrant. Considering what our government already does with our digital data, the addition of our home speech would be an extension of what is already a radical new form of surveillance.

Congress should find out exactly what marketers like CMG are up to. As an urgent matter of oversight, Congress should also determine if any federal agencies are purchasing home voice data. And while they’re at it, the Senate should follow the example of the House and pass the Fourth Amendment Is Not For Sale Act, which would stop the practice of the warrantless purchasing of Americans’ personal, digital information by law enforcement and intelligence agencies.

Are the Charges Against Telegram CEO Pavel Durov Meant to Lead the World to Outlaw Encryption?
9/3/2024
For days after the arrest of Telegram CEO Pavel Durov by French authorities at Le Bourget Airport near Paris, the world civil liberties community held back.
The impulse to rush to the defense of a Russian dissident/entrepreneur was almost overwhelming. Durov had refined his skills with the creation of VK, a social media website that allowed dissidents, opposition politicians, and Ukrainian protesters to evade Vladimir Putin’s emerging surveillance state as late as 2014. After Durov fled Russia with his brother Nikolai, they created the encrypted messenger service Telegram, which allows users not only to communicate in secrecy, but to also set their messages to disappear. Across Asia, Africa, Latin America, and our own country, Telegram enables dissidents, journalists, and people in fear of cartels or abusive spouses to communicate without making themselves vulnerable. So civil libertarians were naturally poised to rush to Durov’s defense. But they didn’t. There was the matter of the 12 charges approved by a French judge this week, including “complicity” in crimes such as aiding in the distribution of international narcotics and child sex abuse material. The many devils in this case lurk in its many details, some of which are far from well understood. At this point, however, we can at least pose preliminary questions. Some answers must come from the French government. Some must come from every person who cares about privacy, including the almost 1 billion users of Telegram.
We can already highlight at least one aspect of this case that should concern civil libertarians and free speech advocates around the world. Thanks to an insightful analysis by Kevin Collier and Rob Wile in Slate, we know that two of the 12 charges involve a purported obligation of providers of cryptological services to require their users to register with their real identities. Another count declares it a crime to import such an encrypted service “without prior declaration.” Collier and Wile write that this latter provision, which at first sounds like a matter of bureaucratic form-filling, actually implies that “France sees the use of internationally based, unregulated ‘encryption’ service as a crime all its own.”

If so, will France get away with criminalizing private encryption services? And if that happens, might this become EU policy? We are already seeing Europe employ illiberal interpretations of the recently enacted Digital Services Act. The EU’s top digital regulator, Thierry Breton, threatened X with legal action if it ran Elon Musk’s full interview with Donald Trump. While Breton’s threat was later disowned by his boss, European Commission President Ursula von der Leyen, it was still breathtaking to see that a powerful regulator in today’s Europe believes the European public would be well served by censoring the words of a major party nominee to lead the United States. It is not a stretch to imagine such people also wanting to stamp out private communications.

Is France now using possibly legitimate charges about Telegram’s operation to undermine the very idea of encryption? Everyone who cares about privacy should watch how this case unfolds. After all, thanks to Telegram, we know that there are at least one billion of us.

The U.S. Department of Justice is pioneering ever-more dismissive gestures in its quest to fob off lawful Freedom of Information Act (FOIA) requests seeking to shed light on government surveillance. One PPSA FOIA request, aimed at uncovering details about the DOJ's purchase of Americans’ commercially available data from third-party data brokers, sets a new record for unprofessionalism.
Until now, we had become used to the Catch-22 denials in which the government refuses to even conduct a search for responsive records with a Glomar response. This judge-made doctrine allows the withholding of requested information if it is deemed so sensitive that the government can neither confirm nor deny its existence. But when the government issues a Glomar response without first conducting a search, we can only ask: How could they know that if they haven’t even searched for the records?

DOJ’s latest response, which arrived this week, is a personal best. The DOJ’s response shows that it didn’t bother to even read our FOIA request. Our request sought records detailing the DOJ's acquisition of data on U.S. persons and businesses, including the amounts spent, the sources of the data, and the categories of information obtained. This request was clearly articulated and included a list of DOJ components likely to have the relevant records. Despite this clarity, DOJ responded by stating that the request did not sufficiently identify the records.

DOJ's refusal to conduct a proper search appears to be based on a misinterpretation, either genuine or strategic, of our request. DOJ claimed an inability to identify the component responsible for handling a case based solely on the “name” of the case or organization. However, PPSA's request did not rely on any such identifiers. Instead, DOJ's response indicates that it may have resorted to a generic form letter to reject our request without actually reviewing its contents. Precedents like Miller v. Casey and Nation Magazine v. U.S. Customs Service establish that an agency must read requests “as drafted” and interpret them in a way that maximizes the likelihood of uncovering relevant documents.

DOJ’s blanket dismissal is not just a bureaucratic oversight. It is an affront to the principles of openness and accountability that FOIA is designed to uphold. If the DOJ, the agency responsible for upholding the law, continues to disregard its legal obligations, it sets a dangerous precedent for all government agencies. The good news is that DOJ’s Office of Information Policy has now ordered staff to conduct a proper search in response to PPSA’s appeal, a directive that should have been unnecessary. It remains to be seen whether the DOJ will comply meaningfully or continue to obstruct … perhaps with another cookie-cutter Glomar response.

How far might DOJ go to withhold basic information about its purchasing of Americans’ sensitive and personal information? In a Glomar response to one of our FOIA requests in 2023, DOJ came back with 40 redacted pages from a certain Mr. or Mrs. Blank. They gave us nothing but a sea of black on each page. The only unredacted line in the entire set of documents was: “Hope that’s helpful.”

This latest response is just another sign that those on the other end of our FOIA requests are treating their responsibilities with flippancy. This is unfortunate because the American public deserves to know the extent to which our government is purchasing and warrantlessly accessing our most private information. Filing these requests and responding to non-responsive responses administratively and in court is laborious and at times frustrating work. But somebody has to do it – and PPSA will continue to hold the government accountable.
The Texas Observer reports that the Texas Department of Public Safety (DPS) signed a 5-year, nearly $5.3 million contract for the Tangles surveillance tool, originally designed by former Israeli military officers to catch terrorists in the Middle East.
In its acquisition plan, DPS references the 2019 murder of 23 people at an El Paso Walmart, as well as shooting sprees in the Texas cities of Midland and Odessa. If Tangles surveillance stops the next mass shooter, that will be reason for all to celebrate.

But Tangles can do much more than spot shooters on the verge of an attack (assuming it can actually do that). It uses artificial intelligence to scrape data from the open, deep, and dark web, compiling a privacy-piercing profile of anyone it targets. Its WebLoc feature can track mobile devices – and therefore people – across a wide geofenced area. Unclear is how DPS will proceed now that the Fifth Circuit Court of Appeals in United States v. Jamarr Smith ruled that geofence warrants cannot be reconciled with the Fourth Amendment.

If DPS does move forward, there will be nothing to keep the state’s warrantless access to personal data from migrating from searches for terrorists and mass shooters, to providing backdoor evidence in ordinary criminal cases, to buttressing cases with political, religious, and speech implications. As the great Texas writer Molly Ivins wrote: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.”

The Wall Street Journal editorial page beat us to the punch in calling the Securities and Exchange Commission the “Surveillance and Exchange Commission.”
It is an apt description, increasingly not a stretch or even a bit of sarcasm. In April we reported that the SEC had taken it upon itself, authorized by no law and under no Congressional or judicial oversight, to create a huge database called the Consolidated Audit Trail. This database allows 3,000 government employees to track, in real time, the identities of tens of millions of Americans who buy and sell stocks and other securities.

In June we reported on the protest of state auditors and treasurers in 23 states over this program, which allows government agents to conduct fishing expeditions with the data of millions of Americans who’ve done nothing wrong or suspicious. The state financial officers wrote: “Traditionally, Americans’ financial holdings are kept between them and their broker, not them, their broker, and a massive government database. The only exception has been legal investigations with a warrant.”

Now it has come to light, thanks to The Journal, that the SEC fined 26 financial firms almost $400 million for failing to track the private communications of their employees on their personal phones. Most financial firms already enforce policies that prohibit their employees from using their personal devices and messaging apps like WhatsApp for business. But until now, it was not the business of an employer to force employees to hand over their personal phones for inspection. Under this mandate from the SEC, firms must search the personal phones of their employees for evidence of business-related communications.

Unlike the Consolidated Audit Trail database, which is government operated, here the SEC is outsourcing the task of monitoring the communications of hundreds of thousands of Americans to their employers. This is a sneaky move. Making employers into the government’s spies obviates the pesky need to worry about niceties like the Fourth Amendment and probable cause warrants. Never mind that the SEC has yet to report any crimes or rule-bending uncovered by all this surveillance.

Readers will recall that a wave of protest prevented the reporting to the government of all financial transactions in excess of $600. But the broad movement to collect, record, and analyze the financial lives of all Americans is ongoing. And the Surveillance and Exchange Commission is its leader.

When we’re inside our car, we feel like we’re in our sanctuary. Only the shower is more private. Both are perfectly acceptable places to sing the Bee Gees’ “Stayin’ Alive” without fear of retribution.
And yet the inside of your car is not as private as you might think. We’ve reported on the host of surveillance technologies built into the modern car – from tracking your movement and current location, to proposed microphones and cameras to prevent drunk driving, to seats that report your weight. All this data is transmitted and can be legally sold by data brokers to commercial interests as well as a host of government agencies. This data can also be misused by individuals, as when a woman going through divorce proceedings learned that her ex was stalking her by following the movements of her Mercedes.

Now another way to track our behavior and movements is being added through a national plan announced by the U.S. Department of Transportation called “vehicle-to-everything” technology, or V2X. Kimberly Adams of marketplace.org reports that this technology, to be deployed on 50 percent of the National Highway System and 40 percent of the country’s intersections by 2031, will allow cars and trucks to “talk” to each other, coordinating to reduce the risk of collision. V2X will smooth out traffic in other ways, holding traffic lights green for emergency vehicles and sending out automatic alerts about icy roads.

V2X is also yet one more way to collect a big bucket of data about Americans that can be purchased and warrantlessly accessed by federal intelligence and law enforcement agencies. Sens. Ron Wyden (D-OR) and Cynthia Lummis (R-WY) and Rep. Ro Khanna (D-CA) have addressed what government can do with car data under proposed legislation, the “Closing the Warrantless Digital Car Search Loophole Act.” This bill would require law enforcement to obtain a warrant based on probable cause before searching data from any vehicle that does not require a commercial license. But the threat to privacy from V2X comes not just from cars that talk to each other, but also from V2X’s highway infrastructure that enables this digital conversation.

This addition to the rapid expansion of data collection on Americans is one more reason why the Senate should follow the example of the House and pass the Fourth Amendment Is Not For Sale Act, which would end the warrantless collection of Americans’ purchased data by the government. We can embrace technologies like V2X that can save lives, while at the same time making sure that the personal information it collects about us is not retained and allowed to be purchased by snoops, whether government agents or stalkers.

The phrase “national security” harks back to the George Washington administration, but it wasn’t until the National Security Act of 1947 that the term was codified into law. This new law created the National Security Council, the Central Intelligence Agency, and much of the apparatus of what we today call the intelligence community. But the term itself – “national security” – was never defined.
What is national security? More importantly, what isn’t national security?

Daniel Drezner, a Fletcher School of Law and Diplomacy professor, writes in Foreign Affairs that it was the Bush-era “war on terror” that put the expansion of the national security agenda into overdrive. Since then, he writes, the “national security bucket has grown into a trough.” The term has become a convenient catch-all for politicians to show elevated concern about the issues of the day. Drezner writes: “From climate change to ransomware to personal protective equipment to critical minerals to artificial intelligence, everything is national security now.” He adds to this list the Heritage Foundation’s Project 2025 designation of big tech as a national security threat, and the 2020 National Security Strategy document, which says the same for “global food insecurity.” We would add to that the call by politicians in both parties to treat fentanyl as a matter of national security.

While some of these issues are clearly relevant to national security, Drezner’s concern is the strategic fuzziness that comes about when everything is defined as a national security priority. He criticizes Washington’s tendency to “ratchet up” new issues like fentanyl distribution, without any old issues being removed to keep priorities few and urgent.

For our part, PPSA has a related concern – the expansion of the national security agenda has a nasty side effect on Americans’ privacy. When a threat is identified as a matter of national security, it also becomes a justification for the warrantless surveillance of Americans. It is one thing for the intelligence community to use, for example, FISA Section 702 authority for the purpose for which Congress enacted it – the surveillance of foreign threats on foreign soil. For example, if fentanyl is a national security issue, then it is appropriate to surveil the Chinese labs that manufacture the drug and the Mexican cartels that smuggle it. But Section 702 can also be used to warrantlessly inspect the communications of Americans for evidence of a crime recast as a matter of national security. Evidence might also be warrantlessly extracted from the vast database of American communications, online searches, and location histories that federal agencies purchase from data brokers. So the surveillance state can now dig up evidence against Americans for prosecution in drug crimes, without these American defendants ever knowing how this evidence was developed – surely a fact relevant to their defense.

As the concept of national security becomes fuzzier, so too do the boundaries of what “crimes” can be targeted by the government with warrantless surveillance. “Trafficking” in critical minerals? Climate change violations? Repeating alleged foreign “disinformation”? When Americans give intelligence and law enforcement agents a probable cause reason to investigate them, a warrant is appropriate. But the ever-expanding national security agenda presents a flexible pretext for the intelligence community to find ever more reason to set aside the Constitution and spy on Americans without a warrant.

Drezner writes that “if everything is defined as national security, nothing is a national security priority.” True. And when everything is national security, everyone is subject to warrantless surveillance.

What NPD’s Enormous Hack Tells Us About the Reckless Collection of Our Data by Federal Agencies
8/23/2024
How to See if Your Social Security Number Was Stolen

Were your Social Security number and other personal identifying information among the 2.9 billion records that hackers stole from National Public Data?
Hackers can seize our Social Security numbers and much more, not only from large commercial sites like National Public Data, but also from government sites and the data brokers who sell our personal information to federal agencies. Such correlated data can be used to impersonate you with the financial services industry, from credit card providers to bank loan officers. And once your Social Security number is stolen, it is stolen for life.

To find out if your Social Security number and other personal information was among those taken in the National Public Data hack, go to npd.pentester.com.

It has been obvious for more than a decade now that the Social Security number is a flawed approach to identification. It is a simple nine-digit number. A fraudster who knows the last few digits of your Social Security number, what year you were born, and where, can likely calculate your number. Because your Social Security number is so often used by dozens of institutions, it is bound to be hacked and sold on the dark web at some point in your life. Yet this insecure form of identification persists.

Is there a better way? Sophie Bushwick asked this question in a 2021 Scientific American article. She reported that one proposed solution is a cryptographic key, those long strings of numbers and symbols that we all hate to use. Or a USB security key could be plugged into your computer to authenticate you as its owner. Scans of your fingerprints or face could also authenticate your identity. The problem is that any one of these methods can also be hacked. Even biometrics is vulnerable, since this technology reduces your face to data. Once the digital template of your face or fingerprint (or, even worse, your iris, which is the most complex and unique biometric identifier of them all) is stolen, your own body can be used against you.

There are no perfect solutions, but multifactor identification comes the closest. This technique might combine a one-time passcode texted to your phone, a biometric identifier like a fingerprint, and a complex password. Finding and assembling all these elements, while possible, would be a prohibitively difficult chore for many if not most hackers.

Strengthening consumer identification, however, is only one part of the problem. Our personal information is insecure in other ways. A dozen federal agencies, including the FBI, IRS, Department of Homeland Security, and Department of Defense, routinely purchase Americans’ personal data. These purchases include not just our identifying information, but also our communications, social media posts, and our daily movements – scraped from our apps and sold by data brokers. How secure is all the data held by those third-party brokers? How secure is the government’s database of this vast trove of personal data, which contains the most intimate details of our lives? These are urgent questions for Congress to ask.

Congress should also resist the persistent requests from the Department of Justice to compel backdoors for commercial encryption, beginning with Apple’s iPhone. The National Public Data hack reveals that the forced creation of backdoors for encryption would create new pathways for even more hacks, as well as warrantless government snooping. Finally, the Senate should follow up on the House passage of the Fourth Amendment Is Not For Sale Act, which would prohibit government collection of our personal information without a warrant.
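The multifactor approach described above – something you know (a password), something you receive (a one-time passcode texted to your phone), and something you are (a fingerprint) – amounts to a gate that opens only when every factor checks out. Here is a minimal sketch of that logic, with hypothetical helpers standing in for a real SMS gateway and a device’s biometric sensor:

```python
import hashlib
import hmac
import secrets

def verify_password(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    """Check the 'something you know' factor against a salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_one_time_code(supplied: str, issued: str) -> bool:
    """Check the 'something you receive' factor (a code texted to the phone)."""
    return hmac.compare_digest(supplied, issued)

def authenticate(password_ok: bool, code_ok: bool, fingerprint_ok: bool) -> bool:
    """All factors must pass; a thief holding only one stolen element is stopped."""
    return password_ok and code_ok and fingerprint_ok

# Illustrative enrollment and login flow. The fingerprint check is assumed to
# be performed by the device's sensor and reported back as a boolean.
salt = secrets.token_bytes(16)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000)
issued_code = f"{secrets.randbelow(1_000_000):06d}"  # e.g., texted to the user

print(authenticate(
    verify_password("correct horse battery staple", stored, salt),
    verify_one_time_code(issued_code, issued_code),
    fingerprint_ok=True,
))  # True only when all three factors check out
```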
Protect your data by calling or emailing your senators: Tell them to pass the Fourth Amendment Is Not For Sale Act. Our data will only become more secure if we, as consumers and citizens, demand it.

Imagine this scenario: It’s early evening, and you and your special someone are on the couch preparing to binge-watch your favorite streaming show.
Ding-dong. You answer the door and, as you hoped, it is the dinner delivery person. He hands you your prepaid, pre-tipped meal and you start to shut the door when the delivery worker puts his foot in the doorway, blocking you. He snaps a picture over your shoulder and asks: “Why is the wall over your couch bare? It should have a picture of the Dear Leader. I now have no choice but to report you.”

This fantastical scenario of a police state enlisting food delivery workers as auxiliary police is taking place, for real, in the People’s Republic of China, according to disturbing reports from Radio Free Asia. Beijing recently posted a directive: “We will hire a group of online delivery personnel with a strong sense of responsibility to serve as part-time social supervisors and encourage them to take part in grassroots governance through snapshots and snap reports …”

Radio Free Asia reports that this program is being expanded in China’s annexed territory of Tibet, where food delivery workers are being recruited to perform “voluntary patrol and prevention work.” In addition, Chinese police are requiring Tibetans to revise their personal passwords on their social media accounts, link them to their personal cellphones and identity cards, and make it all accessible to the government. Police are also stopping Tibetans in Lhasa to check their cellphones for virtual private networks, or VPNs, that allow users to get around the “Great Firewall of China,” the government’s restrictive controls on the internet.

We can shake our heads and laugh. But the fundamental principle of co-opting private-sector industries for internal surveillance is one that is gaining purchase in our own country. The federal government isn’t so crude as to turn the Domino’s pizza delivery guy into a spy. But federal agencies can extract Americans’ personal data from FISA Section 702, even though this program was enacted by Congress not to spy on Americans, but to surveil foreign threats on foreign soil. Prosecutors in the United States can extract information about witnesses and criminal defendants from telecoms and service providers of emails, cloud computing, and online searches, then gag those same companies with a non-disclosure order, which keeps them from ever informing their customers they were surveilled.

The good news is that more and more Members of Congress are awakening to the threat of a home-grown American surveillance state. The recent reauthorization of Section 702 sets up a debate over the reach of this program in early 2026. The House passed a measure called the NDO Fairness Act, which would limit non-disclosure orders, putting the onus on the Senate to follow suit. The field of surveillance is one area in which public-private partnerships can go very wrong. Unlike China, however, America is still a democracy with a Congress that can counter expansive government threats to our privacy.

The U.S. Supreme Court will almost certainly take up and resolve two polar-opposite – some would say extreme – rulings by the Fourth and Fifth Circuit Courts of Appeals on the Fourth Amendment implications of geofence searches.
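Mechanically, a geofence search asks a provider to return every device whose stored location history falls inside a drawn boundary during a set time window – which is why so many bystanders get swept in. A minimal sketch of that kind of filter, using made-up records and field names rather than any provider’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationPing:
    device_id: str  # pseudonymous identifier held by the provider
    lat: float
    lon: float
    timestamp: datetime

def geofence_hits(pings, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return every device seen inside the bounding box during the window.

    Every device in the result - suspect or bystander alike - is identified,
    which is the core Fourth Amendment concern in these cases.
    """
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and start <= p.timestamp <= end
    }

# Illustrative use: a box drawn around a bank during the hour of a robbery.
pings = [
    LocationPing("device-a", 37.4221, -122.0841, datetime(2024, 7, 1, 10, 15)),
    LocationPing("device-b", 37.4230, -122.0850, datetime(2024, 7, 1, 10, 40)),
    LocationPing("device-c", 40.7128, -74.0060, datetime(2024, 7, 1, 10, 20)),
]
print(geofence_hits(pings, 37.41, 37.43, -122.09, -122.08,
                    datetime(2024, 7, 1, 10, 0), datetime(2024, 7, 1, 11, 0)))
# {'device-a', 'device-b'} - both devices, whoever carries them, are swept in
```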
The Fourth Circuit ruled that geofence warrants – which search the mobile devices of many people in designated areas – carry no Fourth Amendment implications. The Fifth Circuit ruled that geofence warrants are inherently unconstitutional. This is the Grand Canyon of circuit splits. At stake are not just geofence warrants, but conceivably almost every kind of automated digital search conducted by the government. At stake, too, is the very meaning and viability of the Fourth Amendment in the 21st century.

We had previously reported on the gobsmacking ruling of the Fourth Circuit in July that held that a geofence warrant to identify a bank robber within a 17.5-acre area – including thousands of innocent people living in apartments, at a nursing home, eating in restaurants, and passing by – did not implicate the privacy rights of all who were searched. In United States v. Chatrie, the court held in a split opinion that this mass geofence warrant had no Fourth Amendment implications whatsoever. In doing so, the Fourth reversed a well-reasoned opinion by federal Judge Mary Hannah Lauck, who wrote that citizens are almost all unaware that Google logs their location 240 times a day. Judge Lauck wrote: “It is difficult to overstate the breadth of this warrant.” The same overbreadth can be seen, in a very different context, in the Fourth Circuit’s jettisoning of the Fourth Amendment in its reversal.

Now the Fifth Circuit Court of Appeals has weighed in on a similar case, United States v. Jamarr Smith. The Fifth came to the opposite conclusion – that geofence warrants cannot be reconciled with the Fourth Amendment.

Orin Kerr of the UC Berkeley School of Law argues that the Fifth’s ruling conflicts with Supreme Court precedent, including Carpenter v. United States, in which the Court held that the government needs a warrant to extract cellphone location data. Kerr also asserts that a lack of particularity – in which a suspect’s identity is not known at the beginning of a search (indeed, that is the reason for these kinds of searches) – is a well-established practice recognized by the Supreme Court.

Jennifer Granick and Brett Max Kaufman of the American Civil Liberties Union push back at Kerr, arguing that the digital inspection of the data of large numbers of people to identify a needle-in-a-haystack suspect is, indeed, a “general warrant” forbidden by the Constitution. They write: “Considering the analog equivalents of this kind of dragnet helps explain why: For example, police might know that some bank customers store stolen jewelry in safe deposit boxes. If they have probable cause, police can get a warrant to look in a particular suspect’s box. But they cannot get a warrant to look in all the boxes – that would be a grossly overbroad search, implicating the privacy rights of many people as to whom there is no probable cause.”

The implications of this circuit split are staggering. If the Fourth Circuit ruling prevails, it will be anything goes in digital search. If the Fifth Circuit’s ruling prevails, almost any kind of digital search will require a probable cause warrant that has the particularity the Constitution clearly requires. There will be no way for the U.S. Supreme Court to reconcile these opposite takes on digital warrants. It will be up to the Court to set a governing doctrine, one that examines at its root what constitutes a “search” in the context of 21st century digital technology. Let us hope that when it does so, the Supreme Court will lean toward privacy and the Fourth Amendment.
Judges and District Attorneys Must Hide the Use of Stingrays, or Face the Wrath of the FBI
8/20/2024
Cell-site simulators, often known by the trade name “stingrays,” are used by law enforcement to mimic cell towers, spoofing mobile devices into giving up their owners’ location and other personal data. Thousands of stingrays have been deployed around the country, fueled by federal grants to state and local police.
PPSA has long reported that the FBI severely restricts what local police and prosecutors can reveal about the use of stingrays in trials. Now we can report that these practices are continuing and interfere with prosecutors’ duty to participate in discovery and turn over potentially exculpatory evidence.

The government’s response to a PPSA FOIA request reveals a standard non-disclosure agreement between the federal government and state and local police departments. This template includes a directive that the locals “shall not, in any civil or criminal proceeding, use or provide any information concerning the [redacted] wireless collection equipment/technology.” This includes any documents and “evidentiary results obtained through the use of the equipment.” The agreement also states that if the agency “learns that a District Attorney, prosecutor, or a court” is considering releasing such information, the customer agency must “immediately notify the FBI in order to allow sufficient time for the FBI to intervene …” Most likely the squeeze will come with a threat to end the provision of stingrays to the state or local police, but other forms of intimidation cannot be ruled out.

Got that, judges and district attorneys? Any information about how evidence was obtained, even if it would serve to clear a defendant, must be withheld from the public and defense attorneys – or the FBI will want a word with you.

“Quiet Skies” is a federal aviation security program that includes singling out flyers for close inspection by giving them an “SSSS” or “Secondary Security Screening Selection” designation on their boarding pass. In the case of Tulsi Gabbard, it is alleged she was also put on a “terror threat list” that requires she receive intense surveillance as well.
You probably know Gabbard as an outspoken and iconoclastic former U.S. Representative from Hawaii who ran for president. During a slew of domestic flights after returning from a recent trip to Rome, Gabbard and husband Abraham Williams were allegedly designated as security threats requiring enhanced observation.

A war veteran of Iraq who signed up after 9/11, Gabbard told Matt Taibbi of The Racket that she and her husband are getting third-degree inspections every time they go to the airport. Every inch of her clothes is squeezed. The lining of her rollaboard suitcase is patted down. Gabbard has to take out every personal electronic device and turn on each one, including her military-issue phone and computer. This process can take up to 45 minutes.

What may be happening in the air is far more worrisome. Sonya LaBosco, executive director of the advocacy group Air Marshals National Council, is the source who told Taibbi that Gabbard is on the TSA’s domestic terror watch list. Every time someone on that list travels, LaBosco said, that passenger gets assigned two Explosive Canine Teams, one Transportation Security Specialist in explosives, and one plainclothes TSA Supervisor. Such passengers are assigned three Federal Air Marshals to travel with them on every flight. LaBosco says that Gabbard’s recent three-flight tour would have required no fewer than nine Air Marshals to tail her and her husband. Taibbi writes that an Inspector General’s report in 2019 revealed that one-half of the Air Marshals’ budget is wasted, and that much of the $394 million in funds for air security is put to questionable use.

In our personal experience, the “SSSS” designation can be randomly assigned. Judging from publicly available sources, that designation can also be algorithmically triggered by a host of activities deemed suspicious, such as flying out of Turkey, paying cash for plane tickets, and buying one-way tickets. (We can only imagine what would happen to the brave or foolhardy person who bought a one-way ticket out of Istanbul with cash.)

To be fair, many complaints about the TSA that seem absurd have a basis in hard experience. That experience goes back to 1986, when an extra close inspection by El Al security officers of a pregnant Irish nurse flying to meet her boyfriend in Jordan revealed that he had betrayed her by secreting a bomb in her bag. TSA has to contend with the fact that anyone – a decorated war hero, a handicapped grandmother, a toddler – could be the unknowing carrier of a threat.

But the treatment of Gabbard raises the unavoidable question of whether this outspoken political figure was put on the SSSS list out of political pique. Gabbard has certainly irritated a lot of powerful people and agencies. In Congress, she advocated for dropping charges against Edward Snowden. As vice chair of the Democratic National Committee in 2016, she publicly criticized the party’s reliance on superdelegates and endorsed Bernie Sanders over Hillary Clinton. She later left the Democratic Party and was recently on the list of Donald Trump’s possible vice-presidential candidates. She has been a consistent critic of “elites” who want “nation-building wars.” Gabbard found herself on the threat list just after she left Rome, where she had called Vice President Kamala Harris “the new figurehead for the deep state.”

You might find Gabbard insightful or a flittering gadfly, but no one should be targeted for surveillance for merely expressing controversial views.
And if Gabbard did somehow inadvertently trigger a threat algorithm, one has to wonder whether anyone in charge has the ability to apply common sense – if, in fact, such vast resources are being deployed to follow her. If that is true, even the most benign explanation reveals a diversion of manpower (and dogpower) that could be used to deter real threats.

A Congressional investigation – perhaps by the Weaponization of the Federal Government subcommittee – is warranted to discover whether the facts reported by Taibbi are correct and, more importantly, whether Gabbard has been targeted for enhanced surveillance and harassment for her speech. After all, crazier things have happened, like Matt Taibbi finding himself targeted with a rare home visit from the IRS on the same day the journalist testified before Congress about federal meddling in social media curation.

Police have access to more than 71,000 surveillance cameras in New York City, and to more than 40,000 cameras in Los Angeles.
This technology is rapidly becoming ubiquitous from coast to coast. As it does, civil libertarians are shifting from outright opposition to public surveillance cameras – which increasingly seems futile – to advocating for policy guardrails that protect privacy. That American cities are going the way of London, where police cameras are on every street corner, is undeniable.

The Harvard Crimson reports that Cambridge, Massachusetts, is one of the latest cities to debate whether to allow police to deploy a network of surveillance cameras. The Cambridge Police Department was on the verge of installing highly visible cameras that would surveil the city’s major parks and even Harvard Yard when the city council suspended a vote after hearing from a prominent civil rights attorney. Even then, Emiliano Falcon-Morano of the Technology for Liberty Program at the Massachusetts ACLU seemed to bow to the inevitability of cameras. He recommended that this technology not be installed until the “police department addresses several important questions and concerns to ensure that it is deployed in a manner that conforms with civil rights and civil liberties.”

In Philadelphia, Dana Bazelon, a former criminal defense attorney and frequent critic of police intrusions into privacy, is now advocating the expansion of surveillance cameras. As an advisor to the Philadelphia district attorney, Bazelon sees police cameras as the only way to stem gun violence. This turnabout prompted Reason’s J.D. Tuccille to accuse Bazelon of discarding “concerns about government abuses to endorse a wide-reaching surveillance state.” Tuccille notes how much easier surveillance cameras may make the job of policing. He archly quotes Charlton Heston’s character in Touch of Evil, “A policeman’s job is only easy in a police state.”

The argument in favor of public surveillance cameras is that when we step into the public square, we can expect to lose a degree of privacy. After all, no law keeps an officer on patrol from glancing our way. What’s so bad about being seen by that same officer through a lens? The answer, simply, is that camera networks do more than see. They record and transform faces into data. That data, combined with facial recognition software, with cell-site simulators that record our movements by tracking our cellphone location histories, with social media posts that log our political views, religious beliefs, and personal lives, brings us to within spitting distance of a police state. It is out of this concern that the Electronic Frontier Foundation has helpfully provided Americans with the ability to see the surveillance mechanisms unfolding in their communities through its Street Level Surveillance website.

Yet, whether we like it or not (and we don’t like it), ubiquitous camera surveillance by the police in almost every city is coming. It is coming because public surveillance is useful in solving so many crimes. As city leaders temporarily shelved the police surveillance proposal in Cambridge, a man in New York City was freed after serving 16 years in prison, exonerated by evidence from old surveillance footage. Arvel Marshall was railroaded in 2016 by a Brooklyn prosecutor who sat on the exonerating tape, which clearly showed someone else committing the murder for which Marshall was convicted. There is no denying that, when the images are clear, surveillance footage can provide irrefutable identification of a criminal (or not, as in Marshall’s case).
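As noted above, camera networks do more than see: they reduce a face to a row of numbers that can be compared against other rows of numbers. A heavily simplified sketch of that matching step follows – the embeddings and threshold are made up, and real systems use learned models with hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Compare two face 'embeddings' - numeric summaries of facial features."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_against_watchlist(probe, watchlist, threshold=0.95):
    """Return names whose stored embedding is close enough to the camera frame's."""
    return [name for name, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 4-dimensional embeddings; production systems use far richer ones.
watchlist = {
    "person_A": [0.9, 0.1, 0.3, 0.4],
    "person_B": [0.1, 0.8, 0.7, 0.2],
}
frame_embedding = [0.88, 0.12, 0.31, 0.41]  # produced by a camera plus a model
print(match_against_watchlist(frame_embedding, watchlist))  # ['person_A']
```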
But the flip side is that the same technology, once it becomes networked and seamlessly integrated by AI, will give the powerful the means to track Americans with little more than a snap of the fingers or a click of the mouse – not just criminals, but protestors, political groups, journalists, and candidates.

As this new reality unfolds, questions emerge. How will police surveillance data be stored? How secure will it be from hackers? How long will it be kept? Will it be networked with other forms of tracking, such as our purchased digital data, and combined by AI into total personal surveillance? Will this data be used to follow not just potential terrorists but Americans with criminal records in a predictive effort at “precrime”? Should technology be deployed that anonymizes the faces of everyone on a tape, with deanonymization or unmasking only at the hands of an authorized person? Should a warrant be issued to watch a given crime or to unmask a face?

The terms of this new debate are changing as technology evolves at fast forward. But it is not too early to ask these questions and debate new policies, city by city, as well as in Congress.

U.S. intelligence agencies justify tens of thousands of warrantless backdoor searches of Americans’ communications by claiming an exception to the Fourth Amendment for “defensive” purposes.
In testimony to Congress, FBI Director Christopher Wray has said that such defensive searches are absolutely necessary to protect Americans in real time who may be potential victims of foreign intelligence agents or cyberattacks. On this basis, the FBI and other agencies every year conduct tens of thousands of warrantless “backdoor” searches of Americans’ communications with data extracted from programs authorized by FISA Section 702 – even though this program was enacted by Congress not to spy on Americans, but to authorize U.S. agencies to surveil foreign spies and terrorists located abroad.

Noah Chauvin, Assistant Professor of Law at Widener University School of Law, neatly removes every leg of the government’s argument in a 53-page paper. He begins with the simple observation that there is no “defensive” exception in the Fourth Amendment. Indeed, an analogous claimed exception for “community caretaking” was rejected by the U.S. Supreme Court in its 2021 decision in Caniglia v. Strom, which held that the government could not enter a home without a warrant based on the simple, non-exigent claim that the police needed to check on the homeowner’s well-being. Whether for community caretaking or for surveillance, the “we are doing this for your own good” excuse does not override the Fourth Amendment. In surveillance, the lack of constitutional validity makes the government’s position “a political argument, not a legal one.” Chauvin adds: “It would be perverse to strip crime victims of the Fourth Amendment’s privacy protections – a person should not lose rights because they have been violated.”

It is apparently on the basis of such a “defensive search,” for example, that the FBI violated the Fourth Amendment rights of Rep. Darin LaHood (R-IL). In that case, the FBI was concerned that Rep. LaHood was being unknowingly targeted by a foreign power. If the FBI can secretly violate the rights of a prominent and respected Member of Congress, imagine how blithely it violates your rights.

While making these sweeping claims of violating the Fourth Amendment to protect Americans, “the government has provided almost no public information about how these defensive backdoor searches work.” Chauvin adds: “The government has claimed it uses backdoor searches to identify victims of cyberattacks and foreign influence campaigns, but has not explained how it does so, saying only that backdoor searches have ‘contributed to’ or ‘played an important role in’ intelligence services.” Also unexplained is how the government identifies potential American victims, or why it searches for victims instead of potential perpetrators. Nor does it reveal its success rate at identifying potential victims and how that compares to traditional methods of investigation.

Finally, Chauvin asks: “Would obtaining permission before querying a victim compromise the investigation?” It is a matter of settled law that any American can give informed consent to waive his or her Fourth Amendment rights. “It seems particularly likely,” Chauvin writes, “that would-be victims will grant the government permission to perform defensive backdoor searches.” One can easily imagine a long list of companies – from hospitals to cloud providers – that would grant such blanket permission. So why not just do that?

Chauvin also appeals to Congress not just to remedy this backdoor search loophole for Section 702. He proposes closing this loophole for Americans’ digital data that U.S.
He proposes closing this loophole for Americans’ digital data that U.S. intelligence and law enforcement agencies purchase from third-party data brokers, as well as for Executive Order 12333, a non-statutory surveillance authority claimed by the executive branch. At the very least, Congress should demand answers to Chauvin’s questions about how defensive searches are used and how they work. He concludes, “the government’s policy preferences should never override Americans’ constitutional rights.”

We are about 160 days away from the next presidential inaugural.
If Donald Trump returns to the presidency, he will bring with him an innate skepticism of federal surveillance. This is because his campaign and transition (and by extension himself) were the targets of four surveillance orders issued by the secret FISA Court in 2016 and 2017 that were based on a concocted intelligence report and a forged document created by an FBI lawyer (later convicted of a felony). But Trump may not have the surveillance-skepticism lane to himself. Despite Vice President Kamala Harris having served in a very pro-surveillance administration, her background also reflects skepticism of federal surveillance. This is especially true of FISA Section 702, an authority enacted by Congress to surveil foreign threats located abroad that has come to be used as a domestic spying authority as well. As a senator in 2017, Harris co-sponsored an amendment with her fellow Californian and leading Democrat, the late Sen. Dianne Feinstein, that would have required federal agencies to obtain a probable cause warrant from the FISA Court before reviewing the contents of Americans’ emails. Did service in the Biden Administration, which opposed warrants, change Vice President Harris’ thinking, or would she revert to her Senate position? We cannot be sure what a President Harris or a President Trump would do in a political and geopolitical environment that is much different from the landscape of 2017. But one useful metric for the next administration would be to know how many “U.S. persons” – or people located inside the United States – have had their communications collected under FISA Section 702. Jonathan Mayer, a professor at Princeton University, served as a Harris staffer in the Senate. Last year, Politico’s John Sakellariadis reported that Mayer and his research assistant Anunay Kulshrestha used cutting-edge cryptographic techniques to estimate how much U.S. person information is collected under Section 702. Mayer’s math produces only a partial data set. It also doesn’t count data on people inside the United States who communicate or cooperate with foreign spies or terrorists, which would make them legitimate targets of Section 702. But if fully fleshed out, this form of analysis could give a ballpark idea of how extensively Section 702 surveillance gathers massive amounts of private information about thousands, if not millions, of average Americans. Of course, the intelligence community could simply tell us. But the intelligence community, in perhaps a too-clever-by-half response, says that separating out who is and isn’t an American in the database would be exactly the kind of privacy intrusion that groups like ours protest. PPSA holds that if the data were quarantined and used only for the explicit purpose of making such a count, it would harm no one’s privacy and would illuminate the nature of Section 702 for policymakers when it comes up for reauthorization again in the spring of 2026. “One of the best ways to understand the risk of incidental collection to U.S. persons is to have a sense of data contained through the authority,” says Travis LeBlanc, a Privacy and Civil Liberties Oversight Board member. There are, however, simpler ways to get at the real number. Congress could demand it by the end of this session. Failing that, a President Trump or a President Harris could simply release that number by executive order.
When a surveillance authority hoovers up the private data of Americans, at the very least we have a right to know how many Americans have had their privacy compromised.

As part of their responses to PPSA’s FOIA requests, the Department of Justice and Department of State recently produced their own derivative classification guides. These lengthy documents contain hundreds of different classification rules, which might explain part of the prolific growth in derivative classifications that PPSA has previously reported on.
But even among this maze of rules, one item stands out: government classification rules provide that the use of, or application for, a FISA warrant in any case is automatically classified as “secret,” a level of protection supposed to be reserved for information whose release could be “expected to cause serious damage to national security” if made public. This means the use of FISA in any case will, at a minimum, remain locked away for 25 years. Worse, these records qualify for an exception to automatic declassification, so the government can extend those blackouts indefinitely. “The use of FISA warrants issued against any American for any reason is secret,” said Gene Schaerr, PPSA general counsel. “And given previous scandals, a multitude of abuses could well be hidden in these blanket classifications.” It is easy to understand why the government would want to classify many FISA warrants. Revealing them could expose ongoing efforts to track Chinese spies, counter Russian saboteurs, and catch possible Iranian assassins. There is also something to the customary government concern about protecting “sources and methods.” But does it make sense for the government to hide every FISA warrant? After all, these guides show that federal agents already make case-by-case determinations about classifying other, potentially more important information, including government passwords, safe combinations, and attempted or successful cyberattacks on systems containing national security information. Schaerr said: “As we saw in the Crossfire Hurricane scandal, the rights of all Americans can be implicated when the FISA process is abused. At the very least, this ‘classify first, ask questions later’ approach calls for the House to follow the example of the U.S. Senate and to allow for more House staffers to receive security clearances that enable them to advise House Members on the soundness of the government’s use of FISA warrants. This knowledge gap calls out for more Congressional oversight.”

United States v. Chatrie

We reported on the bold opinion of federal district Judge Mary Hannah Lauck of Virginia, who ruled in 2022 that the government erred by seeking a warrant for the location histories of every personal digital device within a 17.5-acre area around a bank that had been robbed in Richmond, Virginia, in 2019.
To identify the suspect, Okello Chatrie, law enforcement officials obtained a geofence warrant requiring Google to turn over location data for all devices within that large area. Swept into this mass surveillance – reminiscent of the “general warrants” of the colonial era – were people in restaurants, in an apartment complex, and in an elder care facility, as well as innumerable passersby. Judge Lauck noted that these consumers were almost all unaware that Google logs their location 240 times a day. She wrote that “it is difficult to overstate the breadth of this warrant” and that every person in the vicinity had “effectively been tailed.” At times it almost seems that no good opinion goes upheld, at least where the Fourth Amendment is concerned. On July 9, the Fourth Circuit Court of Appeals reversed Judge Lauck’s decision in United States v. Chatrie. The two-judge majority held that the geofence warrant did not qualify as a Fourth Amendment search at all, because the collection of location data from such a broad geographic area, even a busy one, did not infringe upon reasonable expectations of privacy. Got that? It is a sweeping decision with serious implications for privacy rights and law enforcement practices across the country. Judge J. Harvie Wilkinson III, writing for the majority, emphasized that the geofence warrant was a valuable tool for law enforcement in solving serious crimes. He wrote that the use of such warrants is necessary in an era when traditional investigative methods may be insufficient to address modern criminal activities. In a strongly worded dissent (beginning on p. 39), Judge James Andrew Wynn Jr. criticized the majority opinion, highlighting the potential dangers of allowing such broad warrants. With solid logic and command of the relevant precedents, Judge Wynn demonstrated that the decision undermines the Fourth Amendment’s protections and opens the door to pervasive surveillance. He showed that the geofence warrant lacked the particularity required by the Fourth Amendment: by allowing the collection of data from potentially thousands of innocent people, the warrant was not sufficiently targeted to the suspect. He emphasized that individuals have a reasonable expectation of privacy in their location data, even in public places, and that the widespread collection of such data without individualized suspicion poses significant privacy concerns. And Judge Wynn warned that the majority’s decision sets a dangerous precedent, ignoring the implications of Carpenter v. United States, the U.S. Supreme Court’s landmark 2018 opinion on location data. So what, you might ask, is the harm of geofencing in this instance, which caught a suspect in a bank robbery? Answer: enabling law enforcement to use geofence warrants in such a broad way will almost certainly lead to their use in a variety of novel contexts, such as political protests, that could implicate Americans’ rights to free speech and freedom of assembly. Judge Wynn’s dissent highlights the need for a careful balance between effective law enforcement and the preservation of civil liberties. While the majority’s decision underscores the perceived necessity of geofence warrants in modern investigations, the dissent serves as a poignant reminder of the constitutional protections at stake.
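To make concrete what a geofence request sweeps in, here is a minimal, purely illustrative sketch in Python. The record layout, field names, and function are our own assumptions for illustration, not Google’s actual schema or its multi-step disclosure process; the point is simply that the selection is made by place and time, not by suspect.

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical location record: the kind of timestamped ping a phone
    # reports to a location-history service many times a day.
    @dataclass
    class Ping:
        device_id: str
        lat: float
        lon: float
        time: datetime

    def geofence_sweep(pings, lat_min, lat_max, lon_min, lon_max, start, end):
        """Return every device with at least one ping inside the box during
        the time window -- bystanders and suspect alike, with no
        individualized suspicion."""
        return {
            p.device_id
            for p in pings
            if lat_min <= p.lat <= lat_max
            and lon_min <= p.lon <= lon_max
            and start <= p.time <= end
        }

Everyone in the restaurants, the apartment complex, and the elder care facility during the window is returned along with the robber, which is precisely the breadth Judge Lauck and Judge Wynn objected to.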
The Electronic Frontier Foundation reports that Chatrie’s lawyers are petitioning for an en banc hearing before the entire Fourth Circuit to review the case. PPSA supports that move, and we hope that, if it is granted, the full court includes judges who take the same broad view as Judge Lauck and Judge Wynn.

Earlier this year, students in a high school art class were summoned to a meeting with administrators to defend the contents of their art portfolio.
This happened after Lawrence High School in Lawrence, Kansas, signed a $162,000 contract with Gaggle, a safety software company, to review all student messages and files for issues of concern. Gaggle had flagged the digital files of the students’ art portfolio for containing nudity. The students vehemently protested that there was no nudity at all in their work. But it was a hard case to make, considering that the files had already been removed from the students’ accounts, so the student artists themselves couldn’t refer to them. Max McCoy, a writer with the nonprofit news organization The Kansas Reflector, wrote that if you’re a Lawrence High student, “every homework assignment, email, photo, and chat on your school-supplied device is being monitored by artificial intelligence for indicators of drug and alcohol use, anti-social behavior, and suicidal inclinations.” The same is true of many American high schools from coast to coast. Gaggle claims to have saved an estimated 5,790 student lives from suicide between 2018 and 2023 by analyzing 28 billion student items and flagging 162 million of them for review. McCoy took a hard look at this incredibly specific number of lives saved and found it hard to validate. Simply put, Gaggle counts each incident of flagged material that meets all of its safety criteria as a saved life. Still, it is understandable that school administrators would want to use any tool they could to reduce the potential for student suicide (the second-leading cause of death among Americans aged 15-19), as well as to reduce the threat of school violence that has plagued the American psyche for decades now. But is an artificial intelligence surveillance regime like Gaggle the way to do it? McCoy likens Gaggle to the science-fictional “precrime” technology in the Philip K. Dick story and Steven Spielberg movie Minority Report. But could Gaggle technology in its actual use be more like the utterly dysfunctional totalitarian regime depicted in the classic movie Brazil? McCoy reports that a cry for help from one student to a trusted teacher was intercepted and rerouted to an administrator with whom the student had no relationship. The editors of the Lawrence student paper, The Budget, are concerned about Gaggle’s intrusion into their newsgathering, notes, and other First Amendment-protected activities. McCoy quotes RAND researchers who recently wrote, “we found that AI based monitoring, far from being a solution to the persistent and growing problem of youth suicide, might well give rise to more problems than it seeks to solve.” It is one thing to keep tabs on student attitudes and behavior. Blanketing all student messages and content with spyware looks pointlessly excessive. Worse, it trains the next generation of Americans to be inured to a total surveillance state.
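Some back-of-the-envelope arithmetic, using only the figures Gaggle itself reports above, shows why that number reads more like a count of flags than of verified outcomes. This is our own illustrative calculation, not Gaggle’s methodology:

    # Figures reported above (Gaggle's own claims for 2018-2023)
    items_analyzed = 28_000_000_000     # student items scanned
    items_flagged = 162_000_000         # items flagged for human review
    lives_saved_claim = 5_790           # flagged incidents counted as "lives saved"

    flag_rate = items_flagged / items_analyzed          # roughly 0.58% of items flagged
    saves_per_flag = lives_saved_claim / items_flagged  # roughly 0.0036% of flags

    print(f"share of items flagged: {flag_rate:.2%}")
    print(f"share of flags counted as a 'saved life': {saves_per_flag:.4%}")

Because a “saved life” is defined as any flag that meets Gaggle’s internal criteria, none of these figures can be checked against actual outcomes, which is the gap McCoy identifies.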
As the 2024 elections loom, legislative progress in Congress will likely slow to a crawl before the end of meteorological summer. But some unfinished business deserves our attention, even if it gets pushed to a lame-duck session in late fall or to the agenda of the next Congress.

One such item is a bipartisan proposal now under review that would forbid federal government agencies from strong-arming technology companies into providing encryption keys to break open the private communications of their customers. “Efforts to give the government back-door access around encryption is no different than the government pressuring every locksmith and lock maker to give it an extra key to every home and apartment,” said Erik Jaffe, President of PPSA. Protecting encryption is one of the most important pro-privacy measures Congress could take up now. Millions of consumers have enjoyed end-to-end encryption, from Apple iPhone data to communications apps like Telegram, Signal, and WhatsApp. This makes their communications relatively invulnerable to being opened by an unauthorized person. The Department of Justice has long demanded that companies, Apple especially, provide the government with an encryption key to catch wrongdoers and terrorists. The reality is that encryption protects people from harm, and any encryption backdoor is bound to get out into the wild. Encryption protects the abused spouse from the abuser. It protects children from malicious misuse of their messages. Abroad, it protects dissidents from tyrants and journalists from murderous cartels. At home, it even protects the communications of law enforcement from criminals. The case for encryption is so strong that the European Court of Human Rights rejected a Russian law that would have broken encryption, holding that it violated the human right to privacy. (Let us hope this ruling puts the brakes on recent efforts in the UK and the EU to adopt similarly intrusive measures.) Yet the federal government continues to demand that private companies provide a key to their encryption. Nevada’s attorney general went to court to try to force Meta to stop offering encrypted messages on Facebook Messenger, on the theory that this would protect users under 18, despite the evidence that breaking encryption exposes children to greater threats. PPSA urges the House to draft strong legislation protecting encryption, either as a standalone bill or as an amendment. It is time for the people’s representatives to get ahead of the government’s jawboning demands and stop it from coercing honest businesses into giving away their customers’ keys.
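For readers who want to see what is at stake in concrete terms, here is a minimal sketch of end-to-end encryption using the open-source PyNaCl library; the names and the message are our own illustration, and real messaging apps layer far more machinery on top of this. What it demonstrates is that only the intended recipient’s private key can open a message.

    from nacl.public import PrivateKey, Box
    from nacl.exceptions import CryptoError

    # Each party holds a private key; only public keys are ever shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts a message that only Bob can open.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"call me tonight")

    # Bob decrypts it with his private key and Alice's public key.
    assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"call me tonight"

    # Anyone without Bob's private key -- an eavesdropper or a server in
    # the middle -- gets nothing but an error.
    eve_key = PrivateKey.generate()
    try:
        Box(eve_key, alice_key.public_key).decrypt(ciphertext)
    except CryptoError:
        print("no key, no message")

Erik Jaffe’s locksmith analogy maps directly onto this: a mandated “extra key” means escrowing copies of private keys like Bob’s, and whoever obtains that escrow can read every message those keys protect.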
We’ve long chronicled the downward trajectory of EO 13526, President Barack Obama’s 2009 executive order that boldly sought to stem the tide of excessive government secrecy. President Obama imposed checks on the government by forbidding classification decisions made to prevent embarrassment to a person, organization, or agency, and by boosting the ability of the National Archives and Records Administration (NARA) to lead a declassification program. “My administration is committed to operating with an unprecedented level of openness,” the president declared. At the time President Obama swept his pen over this order, there were 55 million classified documents. And how has that worked out? Today, 75 million classified documents have piled up. Some of them date back to the Truman administration. A report released Tuesday by the National Coalition for History makes public NARA’s internal struggles in trying to fulfill its mission. The report states that NARA’s flatlined budget leaves its National Declassification Center (NDC) short-staffed and unable to cope with thousands of pending Freedom of Information Act requests. We filed one such FOIA request of our own, asking a slew of federal agencies, in effect: have you done anything to comply with President Obama’s executive order? Some FOIAs, the History Coalition reports, sit in 12-year queues. But the bigger problems for declassification involve perverse incentives. The History Coalition reports: “Even highly skilled and experienced NDC staffers lack the authority to reverse agency decisions that they disagree with, a dynamic that perpetuates the over-classification problem.” No one ever got fired for refusing to declassify something. No one should be surprised, then, that when you ask the agency that classified a document whether it should remain classified, the answer will almost always be “yes.” Another revelation from the History Coalition’s report is that the NDC lacks a secure electronic transmittal system to send classified records out for agency referrals. Instead, they are sent on digitized diskettes through regular U.S. mail. You would think that if a document is so sensitive that it must remain secret, sending it back with a postage stamp would be a non-starter. That laxity, more than anything, is a sure sign that what is at work isn’t the protection of vital national secrets, but bureaucratic backside covering, the only perpetual motion machine known to physics. What can be done? A good place to start is the History Coalition’s reform proposal to vest the NDC “with the authority to declassify information subject to automatic declassification without having to refer the records back to the originating agency.” Sounds like a good idea to us.