Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

CBP’s Explosive Increase in Searches of Americans’ Phones at Border

11/10/2025

 
​Customs and Border Protection (CBP) has long asserted a right to inspect the contents of the digital devices of Americans returning from abroad. Now, Wired’s Dell Cameron and Matt Burgess report that the recent increase in these invasive practices at ports of entry has caused the number of international visitors to the United States to plummet. They note that while most of these searches are basic, “where agents manually scroll a person’s phone,” deeper, tool-based sweep-searches do occur.
 
In either scenario, refusing to provide a passcode means subjecting oneself to massive delays or even the seizure of one’s device(s). And while digital inspection at the border is not a new trend, it’s a rapidly increasing one.
 
CBP’s own data shows warrantless digital inspections conducted at the border jumped from 8,503 in 2015 to more than 50,000 this year.
 
This accelerating increase of warrantless scanning of digital devices at the border is attracting attention internationally and concern here at home.
​
Four years ago we noted the need to respect the Fourth Amendment at U.S. borders and ports of entry. Sens. Ron Wyden (D-OR) and Rand Paul (R-KY) introduced the Protecting Data at the Border Act, and later renewed their push to pass it. In between, investigative journalist Jana Winter found that CBP was spying on journalists.
 
By that time, the Inspector General of the Department of Homeland Security (DHS) had issued a scathing report on the privacy violations committed by its various agencies – with agents helping themselves freely to Americans’ location histories and other personal data. This was, the IG found, partly because the DHS Privacy Office “did not follow or enforce its own privacy policies and guidance.”
 
And it appears that the agency is still not adhering to its own internal procedures in collecting and retaining Americans’ personal data. On the heels of the phone search story comes another tale of CBP overreach. Only this time, it isn’t about personal devices. Rather, the agency is looking for contractors to build a massive fleet of AI-powered surveillance trucks.
 
Wired reports: “With a fleet of such vehicles, each would act as a node in a wider surveillance mesh.” This is a technical point, but its chilling philosophical ramifications are what strike us most. 
 
Node by node, our government is building a surveillance net to cover the country. This is all the more reason for Congress to use the upcoming debate over the reauthorization of FISA Section 702 in April to subject every element of this emerging surveillance state to long-delayed scrutiny.


Humans Are Peering Through the Eyes of Robots

11/10/2025

 

“We shall describe devices which appear to move of their own accord.”

​- Hero of Alexandria, Pneumatica

Image courtesy of 1X.
​Those of a certain age might remember the Domesticon, a line of 22nd century robotic butlers from the movie Sleeper. To avoid being caught by the authoritarian state, Woody Allen’s character Miles Monroe pretends to be a Domesticon during a dinner party. The scene is equal parts slapstick and satire. Miles’ cover is blown when he tries to help the host but acts too human in the process.

The Wall Street Journal’s Joanna Stern recently found that one actual prototype of the Domesticon is not entirely dissimilar to the fictional version. 1X Technologies is beta testing NEO, the $20,000 “home humanoid” it hopes to bring to market in 2026. Recently, Stern got to see it in action for the second time and discovered a decidedly Sleeper-like connection: NEO is part human.

Not organically, like a cyborg – so far the full integration of creature and computer is limited to cockroaches. No, NEO is remotely human, as in there’s a remote human operator back at company HQ, “potentially peering through the robot’s camera eyes to get chores done.”

Now, how’d you like to have that job? But as 1X CEO Bernt Børnich told Stern: “If you buy this product, it is because you’re okay with that social contract. If we don’t have your data, we can’t make the product better.”

Such transparency is refreshing. It is also a reminder of the Faustian bargain on offer: to make artificial intelligence work, we give up some of our personal privacy. AI is unlike any software that came before in that it requires gargantuan amounts of data to learn its jobs. As Stern notes, “It needs data from us – and from our homes.” A world model, in other words, centered on us and the private things we do at home.

We expect these machines to be capable of fully human, fully competent, fully safe behaviors – all while being fully autonomous. None of that will happen without the ability to collect and learn from the data of day-to-day human lives. There are no shortcuts, either. When 1X let Stern drive NEO using one of the VR headsets its human operators wear, she nearly dislocated its arm, and the robot left for the shop in a wheelchair. A cross between “a fencing instructor and a Lululemon mannequin,” as she describes it, NEO had neither’s dexterity nor style.

And during the first meeting the reporter had with NEO earlier in the year, the robot managed to faceplant.

“No way that thing is coming near my kids or dog,” she remembers thinking. Domestic robotics remains in its infancy – literally in Stern’s view. “The next few years won’t be about owning a capable robot; they’ll be about raising one.” Like a toddler, humanoid AI can’t learn without doing, watching, and remembering.

1X says users will be able to set “no-go” zones, blur faces in the video feed, and that human operators back at HQ will not connect unless invited to do so. CEO Børnich told Stern that such “teleoperation” was a lot like having a house cleaner. “Last I checked,” Stern responded wryly, “my house cleaner doesn’t wear a camera or beam my data back to a corporation.”

A punchline of sorts seems appropriate here: We’re big fans of the ethical AI principle that says always have a human in the loop – “but this is ridiculous!” 

Stern’s forthcoming book, I Am Not a Robot: My Year Using AI to Do (Almost) Everything and Replace (Almost) Everyone, is now available for pre-order. Readers can expect more dirt on NEO.
​
Unless he learns to vacuum first.


One Nation Under Watch: How Borders Went from Being Physical to Digital

11/10/2025

 

​“If you want to keep a secret, you must also hide it from yourself.”

​- George Orwell

​Imagine a dish called Surveillance Stew. It’s served anytime multiple privacy-threatening technologies come together, rather like a witch’s brew of bad ideas. It's best served cold.

The latest Surveillance Stew recipe includes location data, social media, and facial recognition. Nicole Bennett, who studies such things, writes in The Conversation that this particular concoction represents a turning point: borders are no longer physical but digital. The government has long held that the border is a special zone where the Fourth Amendment has little traction. Now the government is expanding border rules to the rest of America.

Immigration and Customs Enforcement (ICE) has put out a call to purchase a comprehensive social media monitoring system. At first glance, Bennett notes, it seems merely an expansion of monitoring programs that already exist. But it’s the structure of what’s being proposed that she finds new, expansive, and deeply concerning. “ICE,” she writes, “is building a public-private surveillance loop that transforms everyday online activity into potential evidence.”

The base stock of Surveillance Stew came with Palantir’s development of a national database that could easily be repurposed into a federal surveillance system. Add ICE’s social media monitoring function and the already-thoroughgoing Palantir system becomes “a growing web of license plate scans, utility records, property data and biometrics,” says Bennett, “creating what is effectively a searchable portrait of a person’s life.”

Such a technology gumbo seems less a method for investigating individual criminal cases than a sweeping supposition that any person anywhere in the United States could, at any moment, be a “criminal.” It’s a dragnet, says Wired’s Andrew Couts, noting that 65 percent of ICE detainees had no criminal convictions. Dragnets are inimical to privacy and corrosive to the spirit of the Constitution.

Traditional, law-based approaches to enforcement are one thing – and enforcement, of course, is ICE’s necessary job. The problem now, warns Bennett, is that “enforcement increasingly happens through data correlations” rather than the gathering of hard evidence.

We agree with Bennett's conclusion that these sorts of “guilt by digitization” approaches fly in the face of constitutional guardrails like due process and protection from warrantless searches. To quote Wired’s Couts again, “It might be ICE using it today, but you can imagine a situation where a police officer is standing on a corner and just pointing his phone at everybody, trying to catch a criminal.”

The existence of Palantir’s hub makes it inevitable that ICE’s expanded monitoring capability will migrate to other agencies – from the FBI to the IRS. And when that happens, what ICE does to illegal immigrants can just as easily be done to American citizens – by any government entity, for any reason.
​
When our daily lives are converted into zeroes and ones, the authorities can draw “borders” wherever they want.


Can a Cop’s “Hunch” Be the Basis of a Search?

11/6/2025

 
What would detective fiction be without the hunch? We all love the scene where the world-weary gumshoe just knows – somehow – that the drug-addled vagrant isn’t the killer and that the dewy-eyed heiress and the “upstanding” banker are hiding something dark.

But the courtroom is not a detective novel, and constitutional rights don’t bend to intuition.

Hunches fascinate us because they show how the mind pieces together tiny clues to form intuition. (Veteran police officer Robin Kipling has written about the hidden mental mechanics behind intuition.) But how far can a hunch take you when the stakes are your liberty?

The Tenth Circuit Court of Appeals just answered that question – firmly.

Detective Eric Shurley of the Denver Police Department was searching for a shooting suspect described as a light-skinned Black man: muscular, bald, heavy beard, seen in a black Ford Expedition. Officers found the Expedition. Then a white Dodge Durango SUV pulled up nearby. Detective Shurley decided to order backup units to block it in – “just to be on the safe side.”

That “safety” instinct turned into a search. One occupant was said to resemble the suspect – even though he wasn’t light-skinned or bald, and didn’t have a heavy beard. Officers searched anyway, found a gun apparently connected to one of the passengers in the Durango, and arrested him.

The three-judge Tenth Circuit panel tossed the evidence and delivered the obvious conclusion: “reasonable suspicion is lacking.” In other words, a gut feeling is not a constitutional basis for a search.

The Fourth Amendment couldn’t be clearer. To target someone for a search, officers need a warrant based on probable cause, describing “the place to be searched” and “the person or things to be seized.”

There is no asterisk for hunches. No detective-story exception. No “close enough.”

Could more crimes be stopped if police searched anyone who raised a momentary suspicion? Almost certainly. But we don’t live in a country where the government gets to rummage through your life because someone’s instincts started tingling. A government that can search you on a hunch can search you for any reason – or no reason at all.
​

We can applaud constitutional guardrails while still cheering for Detective Shurley, a former Denver Police Officer of the Year, who continues to protect Denver’s streets. But there is no public-safety benefit worth trading away the bedrock principle that constitutional rights beat hunches.


Just What We Need – Hack Makes Recordings by Wearable Glasses Undetectable

11/3/2025

 

“Privacy is not just about hiding things or keeping secret, it’s about controlling who has access to your life.”

​- Roger Spitz

​Here’s a quick news update on one of the privacy stories of the year: Meta’s Ray-Ban smartglasses. Joseph Cox and Jason Koebler of 404 Media told the story of Bong Kim, a hobbyist who engineered a way to disable the LED light intended to shine conspicuously whenever Meta’s glasses are recording or taking photos.
 
Let’s be clear: Meta has nothing to do with hacks like this one. The company tried to prevent privacy violations by designing the glasses so that if someone covered up the LED light, the recording function wouldn’t work. So we'll skip the “we told you so” part where we question the wisdom of building a modern Prometheus (powered by an app and AI, of course) while clutching at pearls when it gets compromised – as it now is.
 
We’ll also refrain from asking what could possibly go wrong. But here’s one possibility out of 10,000 would-be privacy violations: Imagine a stalker no longer having to worry about an LED light giving him away. Or industrial spies. Or actual spies. Or the colleague at work tricking you into saying something that will get you fired.
 
From a privacy standpoint, wearables (including smartglasses) are a non-starter, a set of technologies primarily in search of a hack. And if you don’t believe that, you probably haven’t been on Reddit lately.
 
According to 404’s reporting, Kim’s modification is advertised on YouTube and costs just $60 (though it’s unclear whether shipping is included). That’s what your privacy is worth these days.
 
So what can you do? At the very least, familiarize yourself with the look of these new wearable glasses from a host of companies. And quietly read yourself a Miranda warning: “anything you say can and will be used against you in a court of law.” Or, maybe just in a meeting with HR.


Bay State Drivers Can Now Be Tracked by 7,000 Flock Customers

11/3/2025

 

“There is something predatory in the act of taking a picture.”

- Susan Sontag

​Search our news blog for "Flock" and you'll hit the jackpot. This company has been a consistent source of concern for privacy watchdogs.
 
Just last week, the ACLU’s Jay Stanley summarized the results of a detailed Massachusetts open-records investigation. Thanks to Flock’s contracts with more than 40 Massachusetts police departments, Bay State drivers can now be tracked by 7,000 of the company’s customers – “in real time, without a warrant, probable cause, or even reasonable suspicion of wrongdoing.” To be clear, that surveillance of Massachusetts drivers can be conducted from other parts of the country… because why wouldn’t Texas authorities want to know what Massachusetts drivers are up to?
 
This chilling state of affairs is the result of Flock’s boilerplate contract language, which only changes if a police department demands it (most have not). The company’s contracts include an “irrevocable, worldwide, royalty-free, license to use the Customer Generated Data for the purpose of providing Flock Services.”
 
Stanley’s article includes additional anecdotes about Flock’s propensity for over-sharing that suggest the issue goes far beyond Massachusetts. In Virginia, for example, reporters found that “thousands of outside law enforcement agencies searched Virginians’ driving histories over 7 million times in a 12-month period.” As we’ve written before, Virginia is already one of the most surveilled states in the country, thanks largely to vendors like Flock Safety.
 
Consider following the ACLU’s advice for pushing back against this kind of Orwellian oversight. If we don’t say anything, nothing is going to change.


Keep Lummis-Wyden in the NDAA to Secure the Pentagon – and Our Democracy – from Foreign Hackers

10/31/2025

 
Sen. Cynthia Lummis (Left) and Sen. Ron Wyden (Right)
National security wake-up calls do not get louder than the revelation that a Chinese government-linked hacking group, known as Salt Typhoon, successfully penetrated major U.S. telecommunications carriers in 2024.  AT&T and Verizon were among the companies compromised, exposing the communications of Members of Congress, senior officials, and even both major-party presidential candidates.
 
This was not an isolated breach. It followed a 2023 cyberattack in which Chinese state hackers infiltrated Microsoft’s cloud-hosted email systems, compromising accounts at multiple federal agencies, including the Departments of State and Commerce. According to the Cyber Safety Review Board, the attackers downloaded roughly 60,000 emails from the State Department alone. Pilfered correspondence included those of Cabinet-level officials.
 
These events underscore an uncomfortable truth – the Department of Defense and the intelligence community cannot defend the nation with unencrypted communications routed through a handful of vulnerable providers.
 
The good news is that we do not have to accept this status quo. As the House and Senate negotiate the National Defense Authorization Act (NDAA) for Fiscal Year 2026, conferees must retain the Lummis-Wyden amendment, which mandates secure, interoperable, end-to-end-encrypted collaboration tools for the Pentagon.
 
A Pattern of Foreign Infiltration
From defense contractors to cloud service providers, adversarial regimes have repeatedly exploited weak communication infrastructure to spy on U.S. institutions. The Salt Typhoon and Microsoft incidents illustrate how a single breach in a major service can compromise thousands of sensitive conversations. When communication systems lack end-to-end encryption, even one point of failure can expose entire networks to foreign intelligence agencies.
 
What Lummis-Wyden Would Do
This measure requires the Department of War to use only collaboration systems that meet rigorous cybersecurity standards – including true end-to-end encryption that ensures only the sender and intended recipient can read a message, even if servers in between are hacked.
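To make that guarantee concrete, here is a minimal sketch of the end-to-end principle in Python, using the open-source PyNaCl library. It illustrates the concept only – it is not the Pentagon’s tooling, nor anything the amendment itself specifies – and the keys and message below are invented for the example.

```python
# A toy sketch of end-to-end encryption using the open-source PyNaCl library.
# Illustrative only -- not the Pentagon's tooling or the amendment's mandated stack.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their own devices.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using its private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"Move the briefing to 0900.")  # hypothetical message

# Any server relaying the message sees only ciphertext, never the plaintext.
assert b"briefing" not in ciphertext

# Only the intended recipient, holding the matching private key, can decrypt.
recipient_box = Box(recipient_key, sender_key.public_key)
print(recipient_box.decrypt(ciphertext).decode())  # -> Move the briefing to 0900.
```

The design point is simple: because the private keys never leave the endpoints, a compromised server in the middle – a carrier, a cloud host – yields nothing but ciphertext.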
 
Just as importantly, Lummis-Wyden mandates interoperability. Today, the Pentagon is confined to a small set of proprietary, “walled garden” platforms that block seamless communication across systems. Interoperable standards would allow the Defense Department to adopt superior tools as they emerge, preventing the vendor lock-in that traps communications in the domains of single companies, while enhancing the long-term resilience of the Pentagon’s digital networks.
 
By promoting interoperability and strong encryption, Lummis-Wyden would open the door to competition, inviting companies to develop more secure, agile, and affordable solutions. America’s defense and intelligence agencies should never be dependent on single-point-of-failure vendors whose systems are ripe targets for global espionage.
 
A Strategic Imperative
From the theft of federal employee records to the infiltration of telecom carriers, the pattern is unmistakable: insecure communications infrastructure is a strategic liability.
 
Passing Lummis-Wyden would do more than patch vulnerabilities: it would redefine what secure collaboration means in the 21st century. It would signal that America prizes both privacy and resilience, and rewards technologies that deliver genuine end-to-end security rather than superficial compliance checkboxes.


AI Drones Sharpen the Security/Privacy Tradeoff of a Surveillance State

10/30/2025

 
​Flock Safety – the vendor installing license plate readers across the country – is now helping police departments enhance their drone fleets with artificial intelligence. With this surveillance comes improved public safety, but also new threats to privacy and personal freedom.

Police drones are not an exotic trend. From 2018 to 2024, the number of police and sheriff departments with drones rose by 150 percent, to a total of about 1,500 drone-enabled departments.

Increasingly, these drones have brains as well as eyes. Rather than requiring a human operator to direct them, a new generation of autonomous drones can work in concert with an officer at the scene. Lieutenant Ryan Sill, Patrol Watch Commander of the police department in Hayward, California, writes in Police 1 News of surveillance vendor Axon’s “One-Click” drone technology for Autonomous Aerial Vehicles (AAVs):

“The future is one where an AAV can be assigned to each officer, deploying from a patrol car, operating independently without the need for a pilot, responding to voice commands, and completing tasks as directed by the officer.”
​

The integration of AI and drone technology is undeniably a boon to public safety. One of the most dangerous police activities – both for police officers and the public – is the high-speed pursuit of criminals in cars. Increasingly, suspects in cars and on foot can run all they want, but they can be tracked wherever they go by drones.
[Embedded Instagram post shared by Skydio (@skydiohq)]

Intelligent drones can also zoom quickly to accident or crime scenes. They can record incidents and respond to situations in ways that assist police departments with too few officers.

But intelligent drones bring with them the likelihood that the information they collect will be abused. Then there is the information that won’t be collected at all – by drones operated by citizens and journalists, shut out of airspace cleared for government aircraft. Earlier this month, the Federal Aviation Administration imposed a 12-day ban on all non-governmental drone flights across much of Chicago. It coincided with the arrival of National Guard troops and federal agents to conduct immigration raids.

The ACLU reports: “This raises the sharp suspicion that it is intended not to ensure the safety of government aircraft, but (along with violence, harassment, and claims of ‘doxing’) is yet another attempt to prevent reporters and citizens from recording the activities of authorities.”

Even more concerning is the emergence of drones that can predict crime.

Malavika Madgula of Sify.com writes about “Dejaview,” a new South Korean technology that “blends AI with real-time CCTV to discern anomalies and patterns in real-life scenarios, allowing it to envisage incidents ranging from drug trafficking to pettier offenses with a sci-fi-esque accuracy rate of 82 percent.”

Knowing that a synthetic brain is watching you for any sign that you might be a criminal is hardly the vibe of a free society. Madgula writes: “It could trigger feelings of heightened self-awareness and unease for even the most innocuous of activities, such as taking a shortcut on your way home or using a cash machine.”
​

Elon Musk famously worried that in AI “we’re summoning the demon.” The demon is welcomed by law enforcement because it is enormously useful in protecting communities. But without guardrails to prevent the misuse of this immense collection of our personal movements, activities, and associations, the bargain could turn out to be a Faustian one.


How TikTok Helps the Stalkerverse Infiltrate Tinder

10/28/2025

 

“Do I not know you by your face?” - Twelfth Night, Act 1 Scene 5

​Another day, another TikTok story. Last time, reporters found that TikTok Shop was allowing ads tailored to GPS-savvy stalkers. This time, it’s ads for Cheaterbuster – which represents yet one more invasive abuse of facial recognition technology, often with images taken from the Tinder dating app.
 
Cheaterbuster’s “Facetrace” feature, which 404 Media verified, allows users to “discover someone’s online presence from a single selfie.” That’s right, you need only upload a photo of your “loved one” and Cheaterbuster’s AI scours the web in search of that person’s Tinder profile, for $18 per search.
 
Notably, Tinder itself has nothing to do with this, according to Sullivan Davis and other bloggers. “Not only do we not authorize this practice, it is squarely against our policies,” the company told 404 Media. It appears that sites like Cheaterbuster (sadly, there are others) are scraping publicly available profiles (pro tip – pay for Tinder tiers that allow private mode).
 
The Mary Sue webzine points out that any number of TikTok accounts are really just paid marketing fronts for Cheaterbuster. “Aurora” was applauded by naïve users who believed that she was literally dumping her boyfriend (by driving him to the landfill) after Cheaterbuster saved the day. According to 404, Cheaterbuster’s affiliate program pays more than YouTube does.
 
About a year ago, two Harvard students hacked Meta’s Ray-Ban smartglasses to identify strangers on the subway. As we wrote at the time, “Armed with this technology, your neighborhood creep could easily spot a woman walking down the street and be there when she arrives at her front doorstep.”
 
Now thanks to TikTok and Cheaterbuster, he could know all about her and just what to say.


Don’t Look Up: Those Satellites Are Leaking

10/27/2025

 

“To have good data, we need good satellites.”  - Jeff Goodell

Sigh. As if we didn’t have enough to worry about already. While privacy experts focus on the security of undersea fiberoptic cables, government surveillance, and corporate subterfuge, our data is being broadcast unencrypted all around the Earth by satellites.

Satellites are leaky – and it isn’t fuel they’re off-gassing; it’s our personal information. “These signals are just being broadcast to over 40 percent of the Earth at any point in time,” researchers told Wired’s Andy Greenberg and Matt Burgess.

A few years ago, those researchers (at UC San Diego and the University of Maryland) followed up on a whim: Could they eavesdrop on what satellites are broadcasting? The answer was a big fat “yes” – and it took only about $800 in equipment. Their complete findings are detailed in a newly released study. They had assumed, or at least hoped, that they would find very little – that almost every signal would be protected by encryption, the ne plus ultra of privacy protection.

Instead, among the many things they found floating in the ether were:
​
  • Miscellaneous corporate and consumer data (such as phone numbers)
  • Actual voice calls
  • Text messages
  • Industrial communications
  • Decryption keys
  • Even in-flight Wi-Fi data for systems used by 10 different airlines (including users’ in-flight browsing activities).

Researchers also “pulled down a significant collection of unprotected military and law enforcement communications,” including information about some U.S. sea vessels.

The Wired article’s authors are quick to note that the National Security Agency warned about the security of satellite communications more than three years ago.

Will the publication of such research encourage bad actors to take advantage of these weaknesses?

In the short term, perhaps, but the study’s authors are hopeful that various companies will respond like T-Mobile did and immediately get their encryption house in order (a spokesperson noted the issue was not network-wide). Another affected company, Santander Mexico, responded: “We took the report as an opportunity for improvement, implementing measures that reinforce the confidentiality of technical traffic circulating through these links.” (It should be noted that the affected organizations were notified many months prior to the study’s release.)

In the meantime, let’s hope most hackers haven’t renewed their Wired subscriptions.
​
After all, the scale of the problem is enormous. A Johns Hopkins expert told the magazine: “The implications of this aren't just that some poor guy in the desert is using his cell phone tower with an unencrypted backhaul. You could potentially turn this into an attack on anybody, anywhere in the country.”


Is the Fourth Amendment Inconvenient for the Digital World? Too Bad!

10/24/2025

 

“These protections require, at a minimum, a neutral arbiter – a magistrate –  standing between the government's endless desire for information and the citizens' desires for privacy.”

​- Elizabeth Holtzman

In September, Lynn Adelman, a federal judge in Wisconsin, helped shore up the Fourth Amendment against the Digital Age’s all-out assault on privacy. As we’ve written so often on this site, algorithms and artificial intelligence are existential threats to constitutional protections like the warrant requirement and probable cause.
 
The digital world seeks to automate the process of justice. It is amoral in the name of efficiency, with automated justice tantamount to automated sentencing.
 
The good news is that the “smartest” algorithms, the most “fine-tuned” large language models can still be stopped by a public official who sticks to a solemn oath to uphold the Constitution. This reminds us that true justice will always require a human in the loop.
 
Which is exactly what defendant Peter Braun did not have when Google and Microsoft sent automated alerts – based on “hash value” data patterns alone – to a national clearinghouse. A Wisconsin special agent, alerted by these hash values, then conducted his own investigation into the flagged files and took a peek before deciding to get a search warrant.
 
A search under a warrant obtained after that examination turned up incriminating child sex abuse material in Braun’s home. Braun moved to have that evidence suppressed on the grounds that its acquisition violated his Fourth Amendment protections. Judge Adelman agreed.
 
An investigator is paid to be suspicious. But no investigator should be able to explore hunches without a warrant. The investigator’s hunches in this case were based on an interpretation of hash values – which, unlike the hashtags that group posts for users, are the outputs of cryptographic functions used to identify files by their data. A hash match flags a file without anyone ever viewing it – and, as the court noted, its reliability as a description of a file’s actual contents cannot simply be assumed.
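For readers unfamiliar with the mechanics, here is a minimal sketch of how automated hash matching raises an alert – our own toy illustration, not Google’s or Microsoft’s actual pipeline. The file name and hash list are invented; the point is that the flag comes from a string comparison, not from anyone viewing the file.

```python
# A minimal illustration (not Google's or Microsoft's actual pipeline) of
# automated hash matching: a file is flagged because its digest matches a
# list of known digests, not because anyone has looked at its contents.
import hashlib
from pathlib import Path

# Hypothetical list of SHA-256 digests of known prohibited files.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",  # digest of the bytes "foo"
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def automated_alert(path: Path) -> bool:
    """Flag the file if its digest matches a known hash -- content unseen."""
    return sha256_of(path) in KNOWN_HASHES

# Example: a file containing the bytes "foo" triggers an alert, even though
# no human (and no algorithm) has interpreted what it depicts.
sample = Path("uploaded_file.bin")  # hypothetical upload
sample.write_bytes(b"foo")
print(automated_alert(sample))  # -> True
```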
 
Even if the authorities’ motives are well intentioned – and we believe they were in this case – the rule of law still requires that a court review the evidence and agree before an investigator can look at a file. We condemn anyone involved in the possession of material that is, and must be, inherently criminal. But that is no reason to throw out due process and the Fourth Amendment.
 
Ante omnia hoc, meaning “before all things, this”: The Constitution could not be clearer – GET A WARRANT.
 
Did the Wisconsin authorities have probable cause in Braun’s case? They might have, but in the eyes of the Fourth Amendment they forfeited any such claim because they could not be bothered to go to a magistrate and present their case. That is why Judge Adelman had no choice but to suppress the illicitly obtained evidence.
 
 “It would have been easy for [agent] Koehler to obtain a warrant before viewing the images,” wrote Adelman in his opinion, “but he decided not to do so.” Illegally obtained evidence isn’t evidence at all.
 
We all want the authorities to protect children to the maximum extent under the law. But even Peter Braun has rights – and his rights are our rights. Do we really want the authorities searching our digital lives without a warrant in hand, simply because some unthinking, blunt-force algorithms decided that something seemed suspicious?
 
Judge Adelman also observed that reliance on hash values can sweep in many images that are deeply private but perfectly legal. He wrote: “Here the government omitted any discussion as to the reliability of hash matching in the warrant affidavit, a fatal flaw which undermines the existence of probable cause.”
 
Justice Amy Coney Barrett calls the Fourth Amendment a “principle” of the rule of law. That principle was ratified in 1791 – by humans, for humans, with nary a bit, byte, algorithm, or special agent in sight. Convenience and expediency were never the point of the Fourth. In fact, they were seen as antithetical to the deliberate – and difficult – ruminations justice requires. That the Digital Age is blindly premised upon such efficiencies should give us all pause.


Is AI Evolving from Helpful Assistant to Permanent Spy?

10/23/2025

 

“Their power derives from memory, and memory is where the risks lie.” - Kevin Frazier and Joshua Joseph

​Here’s a quick news item that will come as a surprise to absolutely no one, except perhaps for hermits who have been living in caves since AI went mainstream in 2022. Two new pieces of reporting, from Stanford and Tech Policy Press, confirm the fresh dangers to privacy emerging from the AI frontier.
 
First to Palo Alto, where researchers evaluated the privacy policies of six frontier AI developers. You can check out the complete analysis, but here are the takeaways from the abstract. Spoiler alert – they’re not a win for privacy:

  • All six AI developers appear to employ their users' chat data to train and improve their models by default

  • Some retain this data indefinitely

  • Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users

  • Four of the six companies examined appear to include children’s chat data for model training, as well as customer data from other products

  • On the whole, developers' privacy policies often lack essential information about their practices, highlighting the need for greater transparency and accountability.
 
The Tech Policy Press interview with experts sheds some light on why “agentic AI” is so dependent on user information. Agentic AI refers to generative AI with the ability to act independently. Generative AI says things. Agentic AI does things. Both are built on the large language models Stanford studied.
 
It’s a logical evolution – think of asking a restaurant chef for a recipe versus having a live-in chef who plans and prepares your meals. But it’s all built on memory. The more AI is allowed to remember about us, the more effective it will be at meeting our asks. “The central tension, then, is between convenience and control,” the experts told Tech Policy Press.
 
We would add that if you think it’s the AI you’re trusting to decide what to remember about your prompts and interests – and what to forget – think again. We’re really talking about trusting companies like the ones in the Stanford study, because they’ll be the ones licensing the AI. As of now, then, the fate of your data ultimately rests in the hands of others. From the interview:
 
“Who, exactly, can access your agent’s memories – just you, or also the lab that designed it, a future employer who pays for integration, or even third-party developers who build on top of the agent’s platform?”
 
In short, these experts say, the stakes are these:

“Deciding what should be remembered is not just a question of personal preference; it’s a question of governance. Without careful design and clear rules, we risk creating agents whose memories become less like a helpful assistant and more like a permanent surveillance file.”

We close with a refrain that will be familiar to our readers – now is the time for common-sense laws that privilege personal privacy. Without them, these experts warn, AI will become a tool of enclosure rather than empowerment.


A Subpoena to Spy on Nine Members of Congress?

10/20/2025

 

Why Did Special Prosecutor Jack Smith Make a Ham Sandwich?

Special Counsel Jack Smith delivers remarks on the indictment against former President Donald Trump at the Justice Department on June 9, 2023, in Washington, DC. (pool livestream image)
Outrage, the currency of our times, is being minted at a furious rate over Special Counsel Jack Smith’s use of grand jury subpoenas to spy on the telephone metadata of eight senators and one congressman around the time of the Jan. 6, 2021, assault on the U.S. Capitol.

One statement of majestic and appropriate outrage – the gold standard, if you will – came from Sen. Rand Paul (who was not among those surveilled). He wrote in Breitbart:

“Our Founding Fathers objected to general warrants that allowed soldiers to go from house to house searching homes of American colonists, [and] I think they would be equally horrified by a government that goes from phone to phone collecting data on all Americans.”

Then there is Sen. Lindsey Graham, one of the targets of Smith’s surveillance, who shouted (rhetorically, starting at 2:35) at Attorney General Pam Bondi, “Can you tell me why my phone records, when I’m the Chairman of the Judiciary Committee, were sought by the Jack Smith agents, why did they ask to know who I called and what I was doing from January 4th to the 7th, can you tell me that?”

It's a good question.

David Corn, writing in the progressive Mother Jones, had his own angle of outrage – that President Trump “incited a violent assault on the Capitol, and for hours – as cops were being beaten and Democratic and Republican legislators were being threatened – did nothing in the hope this domestic terrorism would benefit him and allow him to stay in power …

“Should that not have been thoroughly investigated?”

Another good question.

Here’s our take. Yes, after the trashing of the U.S. Capitol, savage beatings of Capitol police, and the erection of a gallows to “hang Mike Pence,” it would have been astonishing for the government not to investigate. But when the executive branch spies on the metadata of Members of Congress – data that can yield a wealth of private information – you would expect a special prosecutor, appointed by one president to investigate his predecessor and likely future opponent, to dot all “i’s” and cross all “t’s.”

Instead of adhering to a strict constitutional standard, Jack Smith predicated his surveillance of U.S. senators and a representative on a subpoena issued by a grand jury. Such a panel, as New York Chief Judge Sol Wachtler famously said, would gladly indict a ham sandwich if that was what the prosecution wanted.

In his Breitbart piece, Sen. Paul quotes Chief Justice John Roberts when the Supreme Court held in Carpenter v. United States (2018) that geolocation from cellphone metadata was a privacy interest protected by the Fourth Amendment. Justice Roberts, for the majority, wrote, “this Court has never held that the Government may subpoena third parties for records in which the subject has a reasonable expectation of privacy.”

Senators, like everyone else, deserve a reasonable expectation that their phone records are private. Of course, senators – also, like everyone else – are not exempt from lawful investigations. But when one branch investigates another – when one political party investigates its opponents – is it too much to ask that the government respect the Fourth Amendment? If Jack Smith had a good reason to surveil nine Members of Congress, he should have made his case for probable cause before a neutral magistrate and obtained a warrant – as the Constitution requires.
​
That Smith instead chose to slather two pieces of bread with mustard and add a slice of ham indicates (mixed metaphor alert) that he was on nothing more than a fishing expedition. When politics intersect with criminal law, prosecutors must adhere to the most rigorous standards. That is in keeping with the character of an exceptional nation. We must not lose it.


Altamides – The New Spyware that Can Infiltrate Your Phone Without a Trace

10/20/2025

 
​We’ve long reported on Pegasus, the prolific spyware that allows attackers to access the calls, texts, emails, and images on a target’s smartphone. Worse, Pegasus can turn on a phone’s camera and microphone, transforming it into a 24/7 spying device that the victim helpfully takes from place to place.
  • This technology was used by Saudi intelligence to track the soon-to-be murdered journalist Jamal Khashoggi, helped cartels in Mexico to target journalists for assassination, and was implicated in political spying scandals from India to Spain.
 
  • One of the most insidious aspects of Pegasus is that it is “zero-click” malware, meaning it can be remotely installed on a phone and the user doesn’t have to fall for a phishing scam or commit some other act of poor digital hygiene.

But Pegasus has a flaw – digitally savvy victims may be tipped off by a phone’s unusually high data usage, overheating, quick battery drain, and unexpected restarts. If you’re suspicious that Pegasus has been planted in your smartphone, you can scan for it via the Mobile Verification Toolkit developed by Amnesty International’s Security Lab.

Unfortunately, evolution works on spyware as it did on dinosaurs, creating new predators with enhanced stealth and devastating lethality.

Enter First Wap’s Altamides. Based in Jakarta, Indonesia, First Wap offers technology that can do what Pegasus does, but without installing malware or leaving digital traces. By exploiting archaic telephonic networks designed without security in mind, Mother Jones reports, it can track users’ movements, listen in on their calls, and extract their text messages. Recent versions can even penetrate encrypted messaging apps.

  • Victims of such surveillance reportedly include Blackwater founder Erik Prince, Google engineers, the actor Jared Leto, and the wife of former Syrian dictator Bashar al-Assad. Mother Jones also found “hundreds of people with no public profile swept up in the dragnet; a softball coach in Hawaii, a restaurateur in Connecticut, an event planner based in Chicago.”

Who has purchased this surveillance weapon?

Lighthouse Reports, a coalition of media organizations, performed a sophisticated sting operation in which a journalist posed at a Prague sales conference as a shady buyer for an African mining concession. The journalist said he was looking for a way to identify, profile, and track environmental activists.

The salesman replied: “If you are holding an Austrian passport, like me, I am not even allowed to know about the project, because otherwise I can go to prison.”

The salesman, who (irony alert) was secretly videotaped by the journalist, added: “So that’s why such a deal, for example, we make it through Jakarta, with the signature coming from our Indian general manager.”

When the undercover journalist came back for another meeting, he captured senior First Wap executives on tape discussing workarounds – Niger-to-Indonesia bank transfers – for selling the company’s technology to individuals under international sanctions.

Click below for a short film about this undercover sting.
U.S. Sen. Ron Wyden (D-OR) told Mother Jones that this story only underscores “the glaring weaknesses in our phone system, which the government and phone companies have failed dismally to address.”


Flock Partners with Ring – “It’s a Warrantless Day in the Neighborhood!”

10/20/2025

 
Amazon’s Ring doorbell cameras are the always-on eyes of the American neighborhood. Owners are free to provide police with images related to suspected crimes, whether of a porch pirate or a prowling burglar. But they can also share images of a lawful protest, or turn over warrantless evidence against a targeted individual.
 
Ring is one link in the expanding national chain of visual surveillance. Added to closed-circuit television systems and police-monitored surveillance cameras sold by the tech company Flock Safety, all the elements of a national surveillance system are falling into place.
 
Now, one more element has just been secured with a new partnership between Ring and Flock. Elissa Welle of The Verge reports that “local U.S. law enforcement agencies that use Flock’s platforms Nova or FlockOS can request video footage from Ring users through the Neighbors app.”
 
There is some good news: Ring says that each law enforcement request must include details about an alleged crime and its time and location. Individual users still get to decide for themselves whether to respond to a police request for video. And law enforcement cannot see who does or does not respond, limiting the potential for pressure tactics. Still, the integration of Flock – which sells automated license plate readers capable of tracking cars nationwide – into the doorbells of America should be a matter of deep concern.
 
Sen. Ron Wyden (D-OR) told Flock’s management in a letter:
 
“I now believe that abuses of your product are not only likely but inevitable, and that Flock is unable and uninterested in preventing them. In my view, local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”
 
The partnership of Flock with Ring is even more troubling in light of Amazon’s reversal of reforms it made in 2024. The company had previously pulled an app feature that allowed police to remotely ask for and obtain footage from Ring users. Now, Ring is reinstating the feature, once again making it easy for police to solicit video from homeowners without a warrant. New policies will also allow police to request live-stream access.
 
Flock does not currently apply facial recognition to its images. The Electronic Frontier Foundation, however, reports that internal Ring documents show an appetite to integrate artificial intelligence – including, perhaps, video analytics and facial recognition software – into its product.
 
Step by step, corporations are working with each other and with government to link technologies to create a national surveillance system. What may be used for commendable purposes today can be used for any purpose tomorrow.


The Latest Proposal to Compromise Americans’ Privacy – Delay the Reauthorization Debate of Section 702

10/16/2025

 
Senator Tom Cotton (R-AR)
​Section 702 of the Foreign Intelligence Surveillance Act is an authority enacted by Congress to allow U.S. intelligence agencies to surveil foreign spies and terrorists. But it has been used in the past by the federal government to extract the communications of millions of Americans.
  • Among those who had their privacy violated by Section 702 data were 19,000 donors to a congressional campaign. This authority was also used to spy on a state senator, a state judge, a congressman, and a U.S. senator. If judges and Members of Congress can have their rights violated, imagine how much respect the FBI and other government agencies have for your privacy.

Concerned by this abuse of Section 702 authority, Congress put this surveillance power on a short leash – with the next reauthorization in April 2026.
 
Now Sen. Tom Cotton (R-AR) is reportedly promoting the idea of delaying the next reauthorization of this key surveillance authority for another 18 months. No matter how well-intentioned, this is a bad idea that would derail any meaningful debate on surveillance reform in this and the next Congress.  
 
Such a delay would also remove any leverage Congress has to perform meaningful oversight of an intelligence community that resists accountability at almost every turn.
 
The April 2024 Debate Produced Significant Reforms
 
The last reauthorization demonstrates that the leverage of a hard deadline at a relatively calm time in the legislative calendar yields results.
  • In the face of furious lobbying by the intelligence community, surveillance reformers on the Hill managed to leverage the April 2024 hard deadline to require the FBI to provide quarterly reports on the number of Americans targeted under Section 702.
 
  • Champions of reform proposed a warrant requirement for the extraction of an American’s communications – an amendment that came within one vote of passing the House. Congress also took the Section 702 debate as an opportunity to end “abouts” data collection, a loose practice that prompted the FISA Court to publicly excoriate the National Security Agency for an “institutional lack of candor” about a “very serious Fourth Amendment issue.”

Finally, Congress shortened the window for the next reauthorization of Section 702 – and its attendant surveillance debate – from five years to just two. This ensured that any new issues that emerged would be tracked by congressional overseers.
 
The Issues Ahead
 
With the next Section 702 reauthorization vote set for April 2026, Congress is beginning once again to treat it as an opportunity to discuss broader surveillance policy.
Emerging questions include:
​
  • Why, and under what exact authority, did the FBI surveil the communications of eight senators and one House Member in 2021?
 
  • A recent Department of Justice report portrays FBI agents as suffering from anxiety and “audit fatigue” in meeting the requirements of Section 702 reforms. If this is the case, couldn’t their anxiety be relieved by sharing responsibility with judges in the form of warrants?
 
  • The FBI, IRS, and other federal agencies purchase the digital breadcrumbs we leave online when we communicate or conduct an online search. When, if ever, will Congress get another opportunity to require a warrant for the acquisition of Americans’ personal data?
 
  • If the Section 702 debate is scrapped next April, when else will Congress get a chance to review the operations of the “make everyone a spy” provision, a last-minute addition in the 2024 debate that obliges almost all businesses to help the government spy on their customers?

If your answer to the above questions is that these issues can simply be taken up after the 18-month extension, think again.
 
The Crowded Calendar of October 2027
 
The beauty of an April reauthorization is that it falls at a fairly calm time in the legislative calendar. An 18-month delay would bump the Section 702 reauthorization vote and the next surveillance debate into the next Congress, to October 2027, amid the press of business around the end of the budgetary cycle. Such debates would have to compete with a likely continuing resolution and a host of contentious spending measures.
 
There would be no time to debate anything about surveillance. It would just be another “clean” reauthorization – which would suit the advocates of the status quo just fine.
Members should remain firm: Congress agreed to an April 2026 reauthorization debate for Section 702.
 
Let’s keep it that way.


Wi-Fi Turns Spy-Fi

10/15/2025

 

“We are profoundly bad at asking ourselves how the things we build could be misused.”

​- Brianna Wu

​In terms of surveillance tech, Wi-Fi is having its moment. This is the fourth time in 2025 we’ve covered the growth of an invasive concept that three years ago seemed remote, even arcane: Wi-Fi sensing.

Increasingly, Wi-Fi turned Spy-Fi is ready for prime time. The Karlsruhe Institute of Technology (KIT), a German research university and think tank, found that Wi-Fi networks can use their radio signals to identify people. Any Wi-Fi network can be made to do this, no fancy hardware required. The people being identified don’t have to be logged into these networks, either. In fact, they don’t even need to carry electronic devices for this subterfuge to work; it’s enough simply to be present, minding one’s own business, within range of a given Wi-Fi router.

But given the ubiquity of Wi-Fi networks, that leaves very few places to hide. “This technology turns every router into a potential means for surveillance,” warns security/privacy expert Julian Todt of KIT. “If you regularly pass by a café that operates a Wi-Fi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.” (Or hackers, autocrats, or foreign agents).

How does it work? By exploiting a standard feature and turning it into a vulnerability – leveraging weaknesses must be taught at Bad Actor 101 at Spy School. In this case, connected devices regularly send feedback signals to Wi-Fi routers. According to the researchers, these signals are frequently unencrypted – which means anyone nearby can capture them. Then, with the right know-how, that data can be converted into images.

Not photos exactly, but close enough – analogous to ultrasound, sonar, or radar. The more devices that are connected to a given Wi-Fi network, the fuller the picture provided – height, shape, gestures, gait, hats, purses, and more. With a little help from machine learning, our bodies turn out to be uniquely identifiable, not unlike a fingerprint.
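As a rough sketch of that “fingerprint” idea – using synthetic numbers only, not KIT’s data or method – the pipeline such research describes looks something like this: signal features captured while known people pass a router train a classifier, which can then re-identify those people from fresh, unlabeled captures.

```python
# A toy illustration of the re-identification idea, on synthetic data only.
# This is NOT the KIT method: a real attack would parse Wi-Fi channel feedback,
# not random numbers, but the pipeline shape (features -> classifier) is similar.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
NUM_PEOPLE, SAMPLES_PER_PERSON, NUM_FEATURES = 5, 40, 64

# Pretend each person perturbs the radio channel in a subtly characteristic way:
# a per-person "signature" plus noise stands in for captured signal features.
signatures = rng.normal(size=(NUM_PEOPLE, NUM_FEATURES))
features = np.vstack([
    signatures[p] + 0.3 * rng.normal(size=(SAMPLES_PER_PERSON, NUM_FEATURES))
    for p in range(NUM_PEOPLE)
])
labels = np.repeat(np.arange(NUM_PEOPLE), SAMPLES_PER_PERSON)

# Train on observations gathered while identities were known...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)

# ...then "recognize" the same person from a fresh, unlabeled observation.
new_observation = signatures[2] + 0.3 * rng.normal(size=NUM_FEATURES)
print(clf.predict(new_observation.reshape(1, -1)))  # -> [2]
```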

Are there easier ways to spy on us? Most certainly – CCTV, for example. But what Wi-Fi sensing lacks in ease it makes up for in reach. As technologies go, it’s practically everywhere that humans are. The vast majority of people don’t have CCTV cameras in their homes, but they (or their neighbors) are almost guaranteed to have Wi-Fi.
​
Wherever you’re reading this from, take a moment to see how many Wi-Fi networks your phone detects. If the KIT research proves correct, any one of them could be used to track your movements and determine your identity.


Your Mouse May Have Ears Now, Thanks to AI

10/13/2025

 

The Growing Threat of Side-Channel Attacks

​No, this is not about the brown field mouse you saw in the garage yesterday. We are talking about the high-end laser mouse, common in the gaming world.

Iain Thomson (The Register) reported on a study from UC Irvine, entitled “Invisible Ears at Your Fingertips,” which demonstrates how a modern optical mouse can be exploited to capture human speech. On some surfaces, our voices create vibrations that a supersensitive mouse interprets as movement. Operating systems store such movement data routinely, and it isn’t particularly secure.
​
The researchers found that bad actors could manipulate most operating systems (MacOS included) to capture such data using basic malware, run it through a few sophisticated filters (with artificial intelligence), and eventually discern spoken words. While still imperfect, the concept is sound – literally. See (and hear) for yourself in this demo video produced by the researchers:
And it isn’t just voices. Footsteps, coughs, and whatever the person in the room happens to be watching on their phone or computer can be detected. Keystrokes are especially noteworthy – each one emits a slightly different sound, which means this kind of attack could be used to detect what someone is typing. (No one set out to give keystrokes unique audio signatures; they have them anyway, as a byproduct of their mechanics.)

As Malwarebytes notes, such hacks are classic examples of side-channel attacks, which steal secrets “not by breaking into software, but by observing physical clues that devices give off during normal use.” Because such information is just a natural byproduct rather than an anomaly, no alarms are set to go off. After all, you don't prepare defenses for attacks you can't imagine in the first place.
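To make the “physical clues” point concrete, here is a toy sketch of just the signal-processing step, on synthetic data – not the UC Irvine tooling. Logged mouse displacements are treated as a crude, noisy audio track, and a band-pass filter strips out deliberate hand movement while keeping the vibration band where speech energy sits; recovering actual words would additionally require the AI models the researchers describe.

```python
# Toy illustration of the side-channel idea on synthetic data -- not the
# UC Irvine attack. Mouse displacement samples are treated as a crude,
# noisy audio signal, and a band-pass filter isolates the speech band.
import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE = 8_000  # assumed polling rate of a high-end mouse, in Hz

# Synthetic stand-in for logged mouse displacements: a faint 300 Hz "voice"
# vibration buried in sensor noise and slow, deliberate hand movement.
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
hand_motion = 50 * np.sin(2 * np.pi * 0.5 * t)        # slow cursor drift
voice_vibration = 0.2 * np.sin(2 * np.pi * 300 * t)   # surface vibration from speech
sensor_noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
displacement = hand_motion + voice_vibration + sensor_noise

# Band-pass filter (100 Hz - 1 kHz) discards the hand movement and keeps
# the vibration band where voiced speech energy sits.
b, a = butter(4, [100, 1000], btype="bandpass", fs=SAMPLE_RATE)
recovered = filtfilt(b, a, displacement)

# The filtered trace correlates strongly with the buried "voice" component.
corr = np.corrcoef(recovered, voice_vibration)[0, 1]
print(f"correlation with hidden vibration: {corr:.2f}")  # prints a high correlation
```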

The good news is that the UC Irvine researchers have informed 26 manufacturers of vulnerable mouse models about their findings. We take more comfort in that approach than Vice’s tongue-in-cheek recommendation: “To hell with those people who told you to buy a gaming mouse.”
​
But the whole thing leaves us – once again – shaking our heads and wondering aloud, “AI can do that?!” Because if it can, the sky’s the limit. We need robust policy to keep this burgeoning technology firmly grounded in the public interest. Otherwise, it becomes the Tower of Babel in reverse – making every variety of human communication all too comprehensible to anyone who cares to listen.


Eleventh Circuit Rules on Eric André’s Not-So-Funny Detention in a Jetway

10/13/2025

 

Could Decision Bring the Fourth Amendment Back to Airports?

Eric André stars in the ABC unscripted comedy “The Prank Panel” (ABC)
Eric André is a surrealist comedian whose eponymous and NSFW show on Adult Swim was beyond edgy. In one of his hidden-camera comedy bits on his recent Netflix special, Legalize Everything, he steps out of a police car in a police uniform, scattering broken beer bottles on the street, and then approaches astonished onlookers with what appears to be a bong and a bag of mushrooms.

“I stole it from the evidence room,” he says to one startled passerby, offering his purported drugs. “This stuff will knock you into next Tuesday, you gotta get high with me.” Imagine Candid Camera on drugs.

But André, as himself, was not carrying drugs or acting weird when he tried to board a flight in the Atlanta airport in 2021. He had passed through TSA screening and a boarding pass check, only to be stopped on the jet bridge seconds from entering the plane and taking his seat. The police asked André for his boarding pass, then held both his pass and ID while interrogating him about his travel plans. André claimed that with officers standing in front of him, holding documents without which he couldn’t move, a “request” to search his bag was hardly consensual.

He was just one of the 402 people the Clayton County “Airport Interdiction Unit” had similarly stopped over an eight-month span. André, along with comedian-actor Clayton English (who had the same experience earlier), brought a Fourth Amendment lawsuit, which a federal district court dismissed. When André and English appealed, the Eleventh Circuit Court of Appeals revived their lawsuit – a powerful and necessary affirmation that constitutional protections do not fade away at the airport gate.

The Eleventh Circuit’s recent opinion explains that an improper “seizure” of a person’s effects occurs when that “person’s ‘freedom of movement’ … is restrained ‘by means of physical force or a show of authority.’” The Court held that this was an objective test, resting on the determination of whether the officer’s words and actions would have conveyed to a reasonable person that he was not free to leave.

Yep, holding someone’s boarding pass and ID would tend to give you that impression. The court stressed that “blocking an individual’s path … is a consideration of great, and probably decisive, significance.”

The Eleventh Circuit also concluded that under qualified immunity the individual officers cannot be held liable at this stage of litigation because the law is not so “clearly established” in jet-bridge settings that the officers should have known their actions violated rights. Despite this limitation, André and English can still sue for the violation of their Fourth Amendment rights.

PPSA believes this may well become a landmark case.

We’ve become used to putting up with intrusive inspections at the airport, ranging from millimeter-wave imaging of our nude bodies to pat-downs of our intimate areas. These are unfortunate but arguably necessary steps to keep bombs and weapons off planes. But the Eleventh Circuit appropriately called out the practice of playing games with passengers’ Fourth Amendment rights at the jet bridge because someone’s crazy hair strikes an officer as suspicious.

A follow-up case the courts might soon consider is the widespread practice of Customs and Border Protection agents holding the laptops and digital devices of Americans returning from abroad, ushering them into side rooms while demanding their passcodes. Many Americans have been strong-armed in this way into allowing inspections of the contents of their digital devices – texts, images, messages – far more personal information than most people carry in their carry-on bags.
​

The American airport has become a gray zone for constitutional rights. If André and English win their lawsuit, this could well mark a revival of the Fourth Amendment for flyers.


A Fourth Amendment Reckoning for Supreme Court

10/7/2025

 

Case v. Montana

​The U.S. Supreme Court will soon have a chance to reverse the dangerous precedent set by the Montana Supreme Court in Case v. Montana, which held that officers may enter a home based on mere suspicion of an emergency – instead of the stricter probable cause standard. Unless this Montana decision is reversed, the “community caretaker” welfare-check doctrine will be revived, gutting the Fourth Amendment’s protection of the home from warrantless intrusion.
 
The outcome of this case is far from certain, with U.S. Solicitor General John Sauer now urging the Supreme Court to sustain Montana’s lower standard.
 
A Step Backward
Here are the facts of this case: In 2021, Montana police responded after William Trevor Case’s ex-girlfriend reported suicidal threats and a “clicking” sound on their call. Officers forcibly entered Case’s home, discovered a firearm, and used that evidence to convict him of assaulting an officer.
 
The Montana Supreme Court refused to recognize a Fourth Amendment violation that would have suppressed this evidence. The court explained that “requiring probable cause of a criminal violation would make no sense in the context of emergencies ‘wholly divorced from a criminal investigation.’”
 
This reasoning is dangerous. It subjects home entries to a stop-and-frisk standard fit for automobile or street encounters, ignoring Supreme Court precedent that has consistently afforded the home stricter protection.
The decision invites pretextual entries into homes under the guise of “help.”
 
Worse, Montana revived the “community caretaker” justification. If upheld, this would undermine the Supreme Court’s holding in Caniglia v. Strom (2021), which rejects the idea that general “caretaking” justifies warrantless home entries.
 
The Government’s Hollow Case
The Solicitor General’s brief argues that the Fourth Amendment’s “reasonableness” standard, not probable cause, should govern such entries, because the Constitution confines probable cause to warrants.
 
Under this circular reasoning, the government’s choice to skip the warrant is itself proof that probable cause need not be shown.
 
Yet the Fourth Amendment does not permit the probable cause standard to evaporate when someone invokes an “emergency.” To permit lower thresholds is to allow a backdoor into the home whenever officers claim they reasonably believe danger exists – a recipe for arbitrary and after-the-fact justification.
 
The same logic threatens to bleed into digital surveillance contexts. PPSA has long warned that if the home, the most sacred zone of privacy, can be entered on less-than-probable-cause grounds, then electronic devices (which contain privacies at least as intimate as a home) will be vulnerable to similar intrusion. Sauer’s brief would turn the Fourth Amendment into a permission slip.
 
The Court Should Hold Firm
Given the Montana court’s flawed approach and the Solicitor General’s weak argument, the Supreme Court should reverse and remand with instructions to suppress the evidence.
 
The privacy of the American home is too important to let police invade it based on nothing but speculation.


FBI Caught Red-Handed: Bureau Spied on Eight U.S. Senators and One Congressman

10/7/2025

 

Sen. Grassley: “Worse than Watergate”

“Just because you’re paranoid doesn’t mean they aren’t after you,” says Yossarian, Joseph Heller’s terrified bomber pilot in Catch-22. The same could now be said by eight U.S. Senators and one U.S. House Member – all Republicans – who were secretly spied upon by the FBI during the Biden administration.
​
For five years now, the Project for Privacy and Surveillance Accountability has filed Freedom of Information Act (FOIA) requests demanding records from the FBI and other intelligence agencies about the possible surveillance of Members of Congress. We used every legal avenue – from FOIA requests to lawsuits – to compel the FBI, the Department of Justice, the Office of the Director of National Intelligence (ODNI), the National Security Agency, and the Department of State to disclose documents about the possible surveillance of Members who hold oversight responsibility over the intelligence community.

In short, we wanted to know if the FBI and other agencies were “overseeing” their ostensible overseers in Congress.

The government’s only response was the flippant use of the “Glomar response,” a court-created doctrine in which an agency can issue a “neither confirm nor deny” answer. In one instance, a response from ODNI came back within four business days, unprecedented speed for the bureaucracy. The Glomar response was originally created to protect a super-secret CIA project to retrieve a sunken Soviet nuclear submarine. Now it is being used to hide domestic spying.

At the time, Gene Schaerr, PPSA general counsel, responded: “The government doesn’t want to even entertain our question. What do they have to hide?”

Now we know at least part of what the government has to hide.

The FBI in 2023 analyzed the phone records of Sen. Lindsey Graham (R-SC), Sen. Bill Hagerty (R-TN), Sen. Josh Hawley (R-MO), Sen. Dan Sullivan (R-AK), Sen. Tommy Tuberville (R-AL), Sen. Ron Johnson (R-WI), Sen. Cynthia Lummis (R-WY), Sen. Marsha Blackburn (R-TN), and Rep. Mike Kelly (R-PA).

Among those targeted by Bureau surveillance are three sitting members of the Senate Judiciary Committee, the very committee charged with oversight of the FBI.

What was the FBI up to? The FBI document states it “conducted preliminary toll analysis on limited toll records,” meaning it secured and analyzed records of calls made by these Members in relation to their votes on whether to certify the 2020 presidential election results. The FBI’s analysis was based on metadata – who called whom, and when. As research from Stanford University has shown, such seemingly innocuous records can yield “surprisingly sensitive personal information” about the likely contents of those calls.
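A hypothetical illustration of the Stanford point: the records below are invented, but they show how a few lines of analysis on nothing more than who called whom, and when, can sketch a sensitive story without anyone hearing a single word.

```python
# Invented toll records -- (caller, callee, timestamp) -- to show what
# "just metadata" reveals. No call content is used anywhere.
from collections import Counter
from datetime import datetime

toll_records = [
    ("subject_A", "defense attorney", datetime(2021, 1, 5, 23, 40)),
    ("subject_A", "defense attorney", datetime(2021, 1, 6, 0, 15)),
    ("subject_A", "party leadership", datetime(2021, 1, 6, 7, 5)),
    ("subject_A", "crisis hotline",   datetime(2021, 1, 6, 9, 30)),
    ("subject_A", "party leadership", datetime(2021, 1, 6, 9, 55)),
]

# Who matters to this person right now? Frequency alone answers that.
print(Counter(callee for _, callee, _ in toll_records))

# Timing tells its own story: late-night calls to a lawyer on the eve of a
# major vote are revealing even though the calls themselves were never heard.
late_night = [r for r in toll_records if r[2].hour >= 22 or r[2].hour < 5]
print(len(late_night), "late-night calls, all to:", {r[1] for r in late_night})
```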

That is one reason why Sen. Chuck Grassley, Chairman of the Senate Judiciary Committee, called this a “weaponization by federal law enforcement under Biden” that was “arguably worse than Watergate.”

We predict this is just the tip of the iceberg. The ease with which the FBI surveilled prominent Members of Congress hints at why the intelligence community has so consistently batted away PPSA’s queries. We believe that time will reveal there is more – much more – evidence of the intelligence community accessing the private communications of Congress.

Next year Congress will hold a debate over the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act. It should be clear to all Members that the FBI can’t be trusted. We need reforms across the board, from ending the abuse of Section 702 as a source of warrantless domestic surveillance, to ending government data purchases.


The Feds Have Your Number… And Your Location… And a lot More

10/6/2025

 

“A day-in-the-life profile of individuals based on mined social media data.”
​

- Ellie Quinlan Houghtaling, The New Republic

​You might think that where you go and with whom you meet is your private information. And it is. But now it’s also accessible to the government, with a federal agency purchasing software to track the location of your phone.

Joseph Cox of 404 Media reports that U.S. Immigration and Customs Enforcement (ICE) is buying an “all-in-one” surveillance tool from Penlink to “compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.”

That chilling quote is ICE’s own declaration. Apparently, acquiring Penlink’s proprietary tools is the only way to beat criminals at their own game.

ICE is not taking us down a slippery slope. It is going straight to the gully, discarding the Fourth Amendment’s prohibition against warrantless surveillance altogether. From there, monitoring the movements of the general population is simply an act of political will. As with facial recognition software, notes the Independent’s Sean O’Grady, it is one more example of the “creeping ubiquity of various types of surveillance.”

Indeed, location is but one element of commercial telemetry data (CTD), the industry term for information acquired from cellphone networks, connected vehicles, websites, and more. PPSA readers know that banning the sale of CTD to government agencies is one goal of the bipartisan Fourth Amendment Is Not For Sale Act, which passed the House in the previous Congress.
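To make concrete what “forensic and predictive analytics” can mean in practice, here is an invented example: a handful of timestamped pings for a single device, and a few lines of code that guess where that device sleeps and where it works. The coordinates, timestamps, and thresholds are all fabricated for illustration.

```python
# Fabricated (latitude, longitude, timestamp) pings for one device, plus a
# crude home/work inference -- the kind of profile raw telemetry supports.
from collections import Counter
from datetime import datetime

pings = [
    (40.7132, -74.0061, datetime(2025, 9, 1, 2, 10)),
    (40.7132, -74.0061, datetime(2025, 9, 1, 6, 45)),
    (40.7484, -73.9857, datetime(2025, 9, 1, 10, 30)),
    (40.7484, -73.9857, datetime(2025, 9, 1, 14, 5)),
    (40.7132, -74.0061, datetime(2025, 9, 1, 23, 50)),
]

def cell(lat, lon):
    """Coarse grid cell (~100 m) -- precise enough to single out a building."""
    return (round(lat, 3), round(lon, 3))

night = Counter(cell(la, lo) for la, lo, ts in pings if ts.hour >= 22 or ts.hour < 7)
day = Counter(cell(la, lo) for la, lo, ts in pings if 9 <= ts.hour <= 17)

print("Likely home (most common overnight cell):", night.most_common(1)[0][0])
print("Likely workplace (most common daytime cell):", day.most_common(1)[0][0])
```

Multiply this by “billions of daily location signals from hundreds of millions of mobile devices” and the privacy stakes become obvious.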

Collecting and selling CTD is the shady business of the data broker industry, a practice the Federal Trade Commission once tried, meekly, to rein in. Indeed, for one brief shining moment, even ICE announced it would stop buying (but continue to use) CTD after the Department of Homeland Security’s own Inspector General found that DHS agencies weren’t giving privacy protections their due.

And yet here we are. As the Electronic Frontier Foundation’s Beryl Lipton recently put it in Forbes:

“This extension and expansion of ICE’s Penlink contract underlines the federal government’s enthusiasm for indiscriminate and warrantless data collection on as many people as possible. We’re still learning about the extent of the government’s growing surveillance apparatus, but tools like Penlink can absolutely assist ICE in turning law-abiding citizens and protestors into targets of the federal government.”
​
These tools are in the hands of ICE today, but they could be in the hands of the FBI, IRS, and other federal agencies in the blink of an eye. Congress should take note of this development when it debates reauthorization of a key surveillance authority – FISA Section 702 – next spring.


Heard on the Street? Our Voices, Apparently

10/6/2025

 

“Don’t eavesdrop on others – you may hear your servant curse you.”
​

- Ecclesiastes 7:21

Image via https://www.flocksafety.com/
​Flock Safety is a frequent PPSA subject (this is our tenth article on the company). But instead of the company’s license-plate reader cameras, today’s discussion was inspired by Flock’s listening device, Raven.

According to Ben Miller of Government Technology, Raven was developed to detect gunshots and other crime-related noises, then activate nearby Flock Falcon cameras and alert authorities. Flock began marketing the Raven-Falcon combo to schools in 2023. The camera integration is meant to be Raven’s primary selling point, giving law enforcement immediate alerts about gunshots, breaking glass, screeching tires, and whatever it's programmed to listen for.

Funny thing – it can also listen for human voices.

Matthew Guariglia of the Electronic Frontier Foundation (EFF) reports that Flock has been touting Raven’s ability to detect screaming and other forms of vocal distress. The obvious implication, of course, is that the product can “listen” to and record human speech. Raven competitor ShotSpotter proved it could be done when its system recorded the words of a dying man in 2014.
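To see why an acoustic “gunshot sensor” is, mechanically, just a microphone wired to a classifier, consider this deliberately crude sketch. The audio is synthetic and the threshold arbitrary; no vendor’s actual algorithm is represented. The point is architectural: the same buffer that gets checked for gunshot-like spikes necessarily contains every nearby voice, and only policy decides whether that buffer is discarded, stored, or transcribed.

```python
# Crude sketch of an acoustic event detector: a microphone buffer plus a
# loudness test. The "audio" below is synthetic noise with one sharp impulse.
import numpy as np

fs = 8000
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(1)

audio = 0.02 * rng.normal(size=t.size)            # background street noise
audio[int(1.5 * fs):int(1.5 * fs) + 200] += 0.9   # a sharp impulse at t = 1.5 s
# (A real buffer would also contain any conversation within earshot.)

frame = int(0.05 * fs)                             # 50 ms analysis frames
for i in range(0, audio.size - frame, frame):
    chunk = audio[i:i + frame]
    rms = np.sqrt(np.mean(chunk ** 2))
    if rms > 0.1:                                  # naive "impulsive event" test
        print(f"ALERT: loud impulsive sound at t={i / fs:.2f} s (rms={rms:.2f})")
```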

Critics, meanwhile, challenge the notion that technologies like Raven and ShotSpotter are good listeners – or even solid policing strategy. ShotSpotter published its own study claiming nearly 97 percent accuracy, though that level required six well-placed (and expensive) sensors in a given area.

Public research tells a different story. Chicago’s Inspector General was highly critical of the technology, finding that “alerts rarely produce evidence of a gun-related crime.” Instead, its use increased stop-and-frisk tactics due to officers’ changed perceptions of the areas where the sensors were deployed. It was deemed not to be worth the $33 million the city had paid for the contract.

Northwestern University’s MacArthur Justice Center published the most comprehensive set of findings to date – claiming that “on an average day, ShotSpotter sends police into these communities [mostly of color] more than 61 times looking for gunfire in vain.” Meanwhile, a National Institute of Justice report last year essentially concluded the technology brought little in terms of meaningful impacts on policing and crime reduction.

And now Raven is joining the audio sensor party, which, judging by the combined testimony of multiple watchdog groups, is turning out to be a veritable Fyre Festival of public safety. In addition to those noted above, the list of audio sensor detractors includes the ACLU, Surveillance Technology Oversight Project, and Electronic Privacy Information Center. We also recommend EFF’s summary of the entire audio sensor industry.

Yet law enforcement continues to hail these too-good-to-be-true, quick-fix “solutions” to public safety challenges, potentially wasting millions of taxpayer dollars and eschewing much-needed transparency. The boosterism continues, despite concerns raised by the communities this technology purports to protect.
​
Audio-sensing tech capable of being deployed at scale nearly completes the mass surveillance infrastructure needed to destroy our privacy once and for all. After all, it is not a great leap for government to go from listening for screams to eavesdropping on private conversations.


The EU’s Plan Would Destroy Privacy Instead of Protecting Children

10/1/2025

 

“It’s About the Children Until It’s Not”

​Denmark is encouraging the EU to scan its citizens’ private messages in order to root out sexual predators. The Electronic Frontier Foundation explains that, if enacted, the Chat Control initiative would “undermine the privacy promises of end-to-end encrypted communication tools.” In other words, writes Yaël Ossowski for Euronews: “It’s about the children until it’s not.”

Seemingly aware of how dangerous their own idea is, multiple EU interior ministers sought carveouts for their own intelligence agencies, police, and military, according to a leaked 2024 report. Such exemptions “highlight the hypocrisy of lawmakers imposing surveillance they would not accept for themselves,” says Ethereum co-founder Vitalik Buterin.

Indeed, this is a suspicious exemption, given that the ostensible purpose of the legislation is to fight online child sexual abuse. By that logic, no one should be exempt. It is an unfortunate, even tragic, reality that such initiatives claim the mantle of noble causes like abuse prevention or national security while doing little to actually advance them.

What such sweeping “safety” initiatives do instead is fundamentally erode the foundations of digital privacy (and, perhaps, privacy itself).

“You cannot make society secure by making people insecure,” Buterin declared on X. “Blanket interception of digital communication,” he says, is no substitute for common-sense approaches to child abuse (such as limiting the release of repeat offenders, raising public awareness, and fostering community engagement).

Undoing encryption (as the EU’s legislation demands) is the beginning of the end of digital privacy. It’s a misguided path, and one that the UK and others have stumbled down before. No goal, no matter how noble or well-intended, justifies the extinguishing of privacy. We must, writes Steve Loynes for Element, “learn from history” and remember that encryption backdoors are frequently the basis for exploitative attacks by bad actors.
​
Because bad actors, in any arena, were never going to follow the rules anyway.


AI Reinvents Surveillance, This Time Without Limits

10/1/2025

 

“We but teach bloody instructions, which, being taught, return to plague the inventor.” - Macbeth

Closed-circuit television (CCTV) changed very little after its introduction in the 1960s – these were essentially passive systems that merely displayed whatever they were aimed at. Without a human at the other end, no real surveillance was taking place.

That was always the flaw in George Orwell’s 1984 – it would take as many watchers as there are people to watch. And the watchers would have to stay alert throughout the day as they watched people eat breakfast, brush their teeth, and wash their dishes.

Then the ability to digitally store vast amounts of surveillance footage made the task easier. But now that AI is here, it is proving to be the real game-changer.

The new generation of CCTV security cameras is capable of autonomous surveillance and action. “Watched by AI guards,” boasts ArcadianAI, whose Ranger line of products operates on its own, proactively identifying what it sees as threats and alerting authorities.

This is largely thanks to recent “advances” in computer vision and vision-language models, which speak of “objects,” a fiendishly clever euphemism for anything – bodies, body parts, events, contexts, movements, behaviors, colors, dimensions, distances, sounds, textures. In effect, anything that can be recognized and classified as its own distinct kind of pattern.
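A hypothetical skeleton, not any vendor’s actual product, shows how little sits between such a detector and automated alerts. The `detect` function below is a stub standing in for an open-vocabulary or vision-language model; the watchlist is whatever an operator chooses to type in, which is exactly the concern.

```python
# Hypothetical "AI guard" skeleton: frames go in, watchlist matches trigger
# alerts. The detector is a stub; plug in any model and the loop is unchanged.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    confidence: float

WATCHLIST = {
    "person running": 0.6,                # per-pattern threshold, set by the operator
    "person loitering after dark": 0.5,
    # Nothing in this architecture prevents far more invasive entries here.
}

def detect(frame) -> List[Detection]:
    """Stub for a real model; returns whatever patterns it was asked to find."""
    return [Detection("person loitering after dark", 0.72)]

def alert_authorities(detection: Detection, camera_id: str) -> None:
    print(f"[ALERT] camera={camera_id} pattern='{detection.label}' "
          f"confidence={detection.confidence:.2f}")

def run_camera(camera_id: str, frames) -> None:
    for frame in frames:
        for d in detect(frame):
            if d.confidence >= WATCHLIST.get(d.label, 1.1):   # 1.1 = never alert
                alert_authorities(d, camera_id)

run_camera("lobby-cam-3", frames=[object()])   # one fake frame for the demo
```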

Thus, updated surveillance video now “thinks” about what it’s seeing. Case in point: an orchestral piece powered by AI video, a bit of PR for Axis Communications meant to show that its CCTV systems can detect whatever its clients seek to find and, with that information, do previously unimaginable things.

This moment represents a threshold of sorts: defining, recognizing, and interpreting patterns without limit. Using such technology for musical composition is innocuous enough, but what about scanning a scene for skin color, hair style, facial features, gait, ethnicity, gender, age… or failing to applaud… or using a secret handshake?

Amid all the hype about AI’s possibilities, it’s important to step back and remember that there is nothing inherently moral about creativity – not in medicine, physics, management, or any human endeavor. Yet here we are, rushing headlong into a frenzied new era of possibility with no guardrails or ethical standards in sight.
