Project for Privacy and Surveillance Accountability (PPSA)
  • Issues
  • Solutions
  • SCORECARD
    • Congressional Scorecard Rubric
  • News
  • About
  • TAKE ACTION
    • Section 702 Reform
    • PRESS Act
    • DONATE

 NEWS & UPDATES

Watching the Watchers: If You Are Stopped by ICE, Your Biometric Data Will Be Held for a Generation

11/18/2025

 
Robert Frommer, a senior attorney with the Institute for Justice, tells the harrowing story of George Retes, a U.S. citizen and Army veteran of the Iraq War, who was stopped in his car during an immigration sweep.

He was on his way to work when he encountered an Immigration and Customs Enforcement (ICE) roadblock. A melee broke out between protesters and ICE agents. Retes’s car was engulfed in tear gas.

The Institute for Justice reports that agents smashed Retes’s car window, dragged him out, and forced him to the ground with knees on his neck and back – even though he was not resisting.

Despite Retes presenting proof of his citizenship, ICE agents detained him for three days without charges, strip-searched him, and forced him to provide DNA samples. He was not allowed to call a lawyer or given a hearing before a judge. Because Retes was held incommunicado, his family was left to frantically search for him.

Writing in MSN, Frommer explores what happens to the biometric data ICE collected on Retes.

“In addition to our DNA, the Department of Homeland Security (DHS) has recently and quietly authorized ICE officers to forcibly collect and retain intimate identifiers: our fingerprints and digital images of our faces. Combined with other technologies, the department is creating a general warrant for our persons, the kind of abuse that ignited the American Revolution.

“A DHS document, meant to ensure our privacy, lays out the facts. An app called Mobile Fortify allows ICE and Customs and Border Protection (CBP) officers to photograph and scan anyone they ‘encounter’ in the field, regardless of citizenship or immigration status. If there isn’t a photo match, officers can collect people’s fingerprints, which are then checked against DHS biometric records. Once DHS has that sensitive data, the app feeds it into CBP’s Automated Targeting System – an enormous watch list that merges border records, passport photos and prior ‘encounter’ images. CBP retains every nonmatch photograph for 15 years, meaning that even if you’re an American citizen mistakenly stopped on the street, the government has your biometric records for (almost) a generation.”

Congress should investigate and debate this retention of Americans’ biometric records before reauthorizing a single surveillance authority. And PPSA is hopeful that ICE will be forced to explain its unconstitutional detention of George Retes when it faces his lawsuit under the Federal Tort Claims Act.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

Humans Are Peering Through the Eyes of Robots

11/10/2025

 

“We shall describe devices which appear to move of their own accord.”

- Hero of Alexandria, Pneumatica

Image courtesy of 1X.
Those of a certain age might remember the Domesticon, a line of 22nd-century robotic butlers from the movie Sleeper. To avoid being caught by the authoritarian state, Woody Allen’s character Miles Monroe pretends to be a Domesticon during a dinner party. The scene is equal parts slapstick and satire. Miles’s cover is blown when he tries to help the host but acts too human in the process.

The Wall Street Journal’s Joanna Stern recently found that one actual prototype of the Domesticon is not entirely dissimilar to the fictional version. 1X Technologies is beta testing NEO, the $20,000 “home humanoid” it hopes to bring to market in 2026. Recently, Stern got to see it in action for the second time and discovered a decidedly Sleeper-like connection: NEO is part human.

Not organically, like a cyborg – so far the full integration of creature and computer is limited to cockroaches. No, NEO is remotely human, as in there’s a remote human operator back at company HQ, “potentially peering through the robot’s camera eyes to get chores done.”

Now, how’d you like to have that job? But as 1X CEO Bernt Børnich told Stern: “If you buy this product, it is because you’re okay with that social contract. If we don’t have your data, we can’t make the product better.”

Such transparency is refreshing. It is also a reminder of the Faustian bargain we must strike to make artificial intelligence work: it comes at the expense of our personal privacy. AI is unlike any software that came before in that it requires gargantuan amounts of data to learn its jobs. As Stern notes, “It needs data from us – and from our homes.” A world model, in other words, centered on us and the private things we do at home.

We expect these machines to be capable of fully human, fully competent, fully safe behaviors – all while being fully autonomous. None of that will happen without the ability to collect and learn from the data of day-to-day human lives. There are no shortcuts, either. When 1X let Stern drive NEO using one of the VR headsets the company’s human operators wear, she nearly dislocated its arm, and the robot left for the shop in a wheelchair. A cross between “a fencing instructor and a Lululemon mannequin,” as she describes it, NEO had neither’s dexterity nor style.

And during the first meeting the reporter had with NEO earlier in the year, the robot managed to faceplant.

“No way that thing is coming near my kids or dog,” she remembers thinking. Domestic robotics remains in its infancy – literally in Stern’s view. “The next few years won’t be about owning a capable robot; they’ll be about raising one.” Like a toddler, humanoid AI can’t learn without doing, watching, and remembering.

1X says users will be able to set “no-go” zones, blur faces in the video feed, and that human operators back at HQ will not connect unless invited to do so. CEO Børnich told Stern that such “teleoperation” was a lot like having a house cleaner. “Last I checked,” Stern responded wryly, “my house cleaner doesn’t wear a camera or beam my data back to a corporation.”

A punchline of sorts seems appropriate here: We’re big fans of the ethical AI principle that says always have a human in the loop – “but this is ridiculous!” 

Stern’s forthcoming book, I Am Not a Robot: My Year Using AI to Do (Almost) Everything and Replace (Almost) Everyone, is now available for pre-order. Readers can expect more dirt on NEO.

Unless he learns to vacuum first.


One Nation Under Watch: How Borders Went from Being Physical to Digital

11/10/2025

 

“If you want to keep a secret, you must also hide it from yourself.”

- George Orwell

Imagine a dish called Surveillance Stew. It’s served anytime multiple privacy-threatening technologies come together, rather like a witch’s brew of bad ideas. It’s best served cold.

The latest Surveillance Stew recipe includes location data, social media, and facial recognition. Nicole Bennett, who studies such things, writes in The Conversation that this particular concoction represents a turning point: borders are no longer physical but digital. The government has long held that the border is a special zone where the Fourth Amendment has little traction. Now the government is expanding border rules to the rest of America.

Immigration and Customs Enforcement (ICE) has put out a call to purchase a comprehensive social media monitoring system. At first glance, Bennett notes, it seems merely an expansion of monitoring programs that already exist. But it’s the structure of what’s being proposed that she finds new, expansive, and deeply concerning. “ICE,” she writes, “is building a public-private surveillance loop that transforms everyday online activity into potential evidence.”

The base stock of Surveillance Stew came with Palantir’s development of a national database that could easily be repurposed into a federal surveillance system. Add ICE’s social media monitoring function and the already-thoroughgoing Palantir system becomes “a growing web of license plate scans, utility records, property data and biometrics,” says Bennett, “creating what is effectively a searchable portrait of a person’s life.”

Such a technology gumbo seems less a method for investigating individual criminal cases than a sweeping supposition that any person anywhere in the United States could, at any moment, be a “criminal.” It’s a dragnet, says Wired’s Andrew Couts, noting that 65 percent of ICE detainees had no criminal convictions. Dragnets are inimical to privacy and corrosive to the spirit of the Constitution.

Traditional, law-based approaches to enforcement are one thing – and enforcement, of course, is ICE’s necessary job. The problem now, warns Bennett, is that “enforcement increasingly happens through data correlations” rather than the gathering of hard evidence.

We agree with Bennett's conclusion that these sorts of “guilt by digitization” approaches fly in the face of constitutional guardrails like due process and protection from warrantless searches. To quote Wired’s Couts again, “It might be ICE using it today, but you can imagine a situation where a police officer is standing on a corner and just pointing his phone at everybody, trying to catch a criminal.”

The existence of Palantir’s hub makes it inevitable that ICE’s expanded monitoring capability will migrate to other agencies – from the FBI to the IRS. And when that happens, what ICE does to illegal immigrants can just as easily be done to American citizens – by any government entity, for any reason.

When our daily lives are converted into zeroes and ones, the authorities can draw “borders” wherever they want.


Just What We Need – Hack Makes Recordings by Wearable Glasses Undetectable

11/3/2025

 

“Privacy is not just about hiding things or keeping secret, it’s about controlling who has access to your life.”

- Roger Spitz

Here’s a quick news update on one of the privacy stories of the year: Meta’s Ray-Ban smartglasses. Joseph Cox and Jason Koebler of 404 Media told the story of Bong Kim, a hobbyist who engineered a way to disable the LED light intended to shine conspicuously whenever Meta’s glasses are recording or taking photos.
 
Let’s be clear: Meta has nothing to do with hacks like this one. The company tried to prevent privacy violations by designing the glasses so that if someone covered up the LED light, the recording function wouldn’t work. So we'll skip the “we told you so” part where we question the wisdom of building a modern Prometheus (powered by an app and AI, of course) while clutching at pearls when it gets compromised – as it now is.
 
We’ll also refrain from asking what could possibly go wrong. But here’s just one of 10,000 possible privacy violations: Imagine a stalker no longer having to worry about an LED light giving him away. Or industrial spies. Or actual spies. Or a colleague at work tricking you into saying something that will get you fired.
 
From a privacy standpoint, wearables (including smartglasses) are a non-starter, a set of technologies primarily in search of a hack. And if you don’t believe that, you probably haven’t been on Reddit lately.
 
According to 404’s reporting, Kim’s modification is advertised on YouTube and costs just $60 (though it’s unclear whether shipping is included). That’s what your privacy is worth these days.
 
So what can you do? At the very least, familiarize yourself with the look of these new wearable glasses from a host of companies. And quietly read yourself a Miranda warning: “anything you say can and will be used against you in a court of law.” Or maybe just in a meeting with HR.


Bay State Drivers Can Now Be Tracked by 7,000 Flock Customers

11/3/2025

 

“There is something predatory in the act of taking a picture.”

- Susan Sontag

Search our news blog for "Flock" and you'll hit the jackpot. This company has been a consistent source of concern for privacy watchdogs.
 
Just last week, the ACLU’s Jay Stanley summarized the results of a detailed Massachusetts open-records investigation. Thanks to Flock’s contracts with more than 40 Massachusetts police departments, Bay State drivers can now be tracked by 7,000 of the company’s customers – “in real time, without a warrant, probable cause, or even reasonable suspicion of wrongdoing.” To be clear, that surveillance of Massachusetts drivers can be conducted from other parts of the country… because why wouldn’t Texas authorities want to know what Massachusetts drivers are up to?
 
This chilling state of affairs is the result of Flock’s boilerplate contract language, which only changes if a police department demands it (most have not). The company’s contracts include an “irrevocable, worldwide, royalty-free, license to use the Customer Generated Data for the purpose of providing Flock Services.”
 
Stanley’s article includes additional anecdotes about Flock’s propensity for over-sharing that suggest the issue goes far beyond Massachusetts. In Virginia, for example, reporters found that “thousands of outside law enforcement agencies searched Virginians’ driving histories over 7 million times in a 12-month period.” As we’ve written before, Virginia is already one of the most surveilled states in the country, thanks largely to vendors like Flock Safety.
 
Consider following the ACLU’s advice for pushing back against this kind of Orwellian oversight. If we don’t say anything, nothing is going to change.


Keep Lummis-Wyden in the NDAA to Secure the Pentagon – and Our Democracy – from Foreign Hackers

10/31/2025

 
Sen. Cynthia Lummis (left) and Sen. Ron Wyden (right)
National security wake-up calls do not get louder than the revelation that a Chinese government-linked hacking group, known as Salt Typhoon, successfully penetrated major U.S. telecommunications carriers in 2024.  AT&T and Verizon were among the companies compromised, exposing the communications of Members of Congress, senior officials, and even both major-party presidential candidates.
 
This was not an isolated breach. It followed a 2023 cyberattack in which Chinese state hackers infiltrated Microsoft’s cloud-hosted email systems, compromising accounts at multiple federal agencies, including the Departments of State and Commerce. According to the Cyber Safety Review Board, the attackers downloaded roughly 60,000 emails from the State Department alone. Pilfered correspondence included those of Cabinet-level officials.
 
These events underscore an uncomfortable truth – the Department of Defense and the intelligence community cannot defend the nation with unencrypted communications routed through a handful of vulnerable providers.
 
The good news is that we do not have to accept this status quo. As the House and Senate negotiate the National Defense Authorization Act (NDAA) for Fiscal Year 2026, conferees must retain the Lummis-Wyden amendment, which mandates secure, interoperable, end-to-end-encrypted collaboration tools for the Pentagon.
 
A Pattern of Foreign Infiltration
From defense contractors to cloud service providers, adversarial regimes have repeatedly exploited weak communication infrastructure to spy on U.S. institutions. The Salt Typhoon and Microsoft incidents illustrate how a single breach in a major service can compromise thousands of sensitive conversations. When communication systems lack end-to-end encryption, even one point of failure can expose entire networks to foreign intelligence agencies.
 
What Lummis-Wyden Would Do
This measure requires the Department of War to use only collaboration systems that meet rigorous cybersecurity standards – including true end-to-end encryption that ensures only the sender and intended recipient can read a message, even if servers in between are hacked.
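
To make that guarantee concrete: with true end-to-end encryption, a compromised server in the middle sees only ciphertext. The sketch below illustrates the property with a toy one-time pad in Python (real deployments use authenticated key exchange and modern ciphers; nothing here reflects the specific tools the amendment would mandate):

```python
import secrets

def xor_otp(key: bytes, data: bytes) -> bytes:
    """One-time-pad XOR; encryption and decryption are the same
    operation. Secure only if the key is truly random, as long as
    the message, and never reused."""
    assert len(key) == len(data)
    return bytes(k ^ b for k, b in zip(key, data))

# The key lives only on the two endpoints, never on the relay.
message = b"conference schedule attached"
key = secrets.token_bytes(len(message))

ciphertext = xor_otp(key, message)    # all a hacked relay ever sees
plaintext = xor_otp(key, ciphertext)  # only a key-holding endpoint recovers it

assert ciphertext != message
assert plaintext == message
```

A relay that captures the ciphertext learns nothing about the message; end-to-end systems keep the key material exclusively on the endpoints, which is why a server breach like Salt Typhoon’s would yield only gibberish.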
 
Just as importantly, Lummis-Wyden mandates interoperability. Today, the Pentagon is confined to using a small set of proprietary, “walled garden” platforms that block seamless communication across systems. Interoperable standards would allow the Defense Department to adopt superior tools as they emerge, preventing vendor lock-in that traps communications in the domains of single companies, while enhancing long-term resilience of the Pentagon’s digital networks.
 
By promoting interoperability and strong encryption, Lummis-Wyden would open the door to competition, inviting companies to develop more secure, agile, and affordable solutions. America’s defense and intelligence agencies should never be dependent on single-point-of-failure vendors whose systems are ripe targets for global espionage.
 
A Strategic Imperative
From the theft of federal employee records to the infiltration of telecom carriers, the pattern is unmistakable: insecure communications infrastructure is a strategic liability.
 
Passing Lummis-Wyden would do more than patch vulnerabilities: it would redefine what secure collaboration means in the 21st century. It would signal that America prizes both privacy and resilience, and rewards technologies that deliver genuine end-to-end security rather than superficial compliance checkboxes.


How TikTok Helps the Stalkerverse Infiltrate Tinder

10/28/2025

 

“Do I not know you by your face?” - Twelfth Night, Act 1 Scene 5

Another day, another TikTok story. Last time, reporters found that TikTok Shop was allowing ads tailored to GPS-savvy stalkers. This time, it’s ads for Cheaterbuster – which represents yet one more invasive abuse of facial recognition technology, often with images taken from the Tinder dating app.
 
Cheaterbuster’s “Facetrace” feature, which 404 Media verified, allows users to “discover someone’s online presence from a single selfie.” That’s right, you need only upload a photo of your “loved one” and Cheaterbuster’s AI scours the web in search of that person’s Tinder profile, for $18 per search.
 
Notably, Tinder itself has nothing to do with this, according to Sullivan Davis and other bloggers. “Not only do we not authorize this practice, it is squarely against our policies,” the company told 404 Media. It appears that sites like Cheaterbuster (sadly, there are others) are scraping publicly available profiles (pro tip – pay for Tinder tiers that allow private mode).
 
The Mary Sue webzine points out that any number of TikTok accounts are really just paid marketing fronts for Cheaterbuster. “Aurora” was applauded by naïve users who believed that she was literally dumping her boyfriend (by driving him to the landfill) after Cheaterbuster saved the day. According to 404, Cheaterbuster’s affiliate program pays more than YouTube does.
 
About a year ago, two Harvard students hacked Meta’s Ray-Ban smartglasses to identify strangers on the subway. As we wrote at the time, “Armed with this technology, your neighborhood creep could easily spot a woman walking down the street and be there when she arrives at her front doorstep.”
 
Now thanks to TikTok and Cheaterbuster, he could know all about her and just what to say.


Don’t Look Up: Those Satellites Are Leaking

10/27/2025

 

“To have good data, we need good satellites.”  - Jeff Goodell

Sigh. As if we didn’t have enough to worry about already. While privacy experts were focusing on the security of undersea fiberoptic cables, government surveillance, and corporate subterfuge, our data is being broadcast unencrypted all around the Earth by satellites.

Satellites are leaky – and it isn’t fuel they’re off-gassing; it’s our personal information. “These signals are just being broadcast to over 40 percent of the Earth at any point in time,” researchers told Wired’s Andy Greenberg and Matt Burgess.

A few years ago, those researchers (at UC San Diego and the University of Maryland) followed up on a whim: Could they eavesdrop on what satellites are broadcasting? The answer was a big fat “yes” – and it took only about $800 in equipment. Their complete findings are detailed in a newly released study. They had assumed, or at least hoped, that they would find very little – that almost every signal would be protected by encryption, the ne plus ultra of privacy protection.

Instead, among the many things they found floating in the ether were:
  • Miscellaneous corporate and consumer data (such as phone numbers)
  • Actual voice calls
  • Text messages
  • Industrial communications
  • Decryption keys
  • Even in-flight Wi-Fi data for systems used by 10 different airlines (including users’ in-flight browsing activities).

Researchers also “pulled down a significant collection of unprotected military and law enforcement communications,” including information about some U.S. sea vessels.

The Wired article’s authors are quick to note that the National Security Agency warned about the security of satellite communications more than three years ago.

Will the publication of such research encourage bad actors to take advantage of these weaknesses?

In the short term, perhaps, but the study’s authors are hopeful that various companies will respond like T-Mobile did and immediately get their encryption house in order (a spokesperson noted the issue was not network-wide). Another affected company, Santander Mexico, responded: “We took the report as an opportunity for improvement, implementing measures that reinforce the confidentiality of technical traffic circulating through these links.” (It should be noted that the affected organizations were notified many months prior to the study’s release.)

In the meantime, let’s hope most hackers haven’t renewed their Wired subscriptions.

After all, the scale of the problem is enormous. A Johns Hopkins expert told the magazine: “The implications of this aren't just that some poor guy in the desert is using his cell phone tower with an unencrypted backhaul. You could potentially turn this into an attack on anybody, anywhere in the country.”


Wi-Fi Turns Spy-Fi

10/15/2025

 

“We are profoundly bad at asking ourselves how the things we build could be misused.”

- Brianna Wu

In terms of surveillance tech, Wi-Fi is having its moment. This is the fourth time in 2025 we’ve covered the growth of an invasive concept that three years ago seemed remote, even arcane: Wi-Fi sensing.

Increasingly, Wi-Fi turned Spy-Fi is ready for prime time. The Karlsruhe Institute of Technology (KIT), a German research university and think tank, found that Wi-Fi networks can use their radio signals to identify people. Any Wi-Fi network can be made to do this, no fancy hardware required. The people being identified don’t have to be logged into these networks, either. In fact, they don’t even need to carry electronic devices for this subterfuge to work; it’s enough simply to be present, minding one’s own business, within range of a given Wi-Fi router.

And given the ubiquity of Wi-Fi networks, that leaves very few places to hide. “This technology turns every router into a potential means for surveillance,” warns security/privacy expert Julian Todt of KIT. “If you regularly pass by a café that operates a Wi-Fi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.” (Or hackers, autocrats, or foreign agents.)

How does it work? By exploiting a standard feature and turning it into a vulnerability – leveraging weaknesses must be taught at Bad Actor 101 at Spy School. In this case, connected devices regularly send feedback signals to Wi-Fi routers. According to the researchers, these signals are frequently unencrypted – which means anyone nearby can capture them. Then, with the right know-how, that data can be converted into images.

Not photos exactly, but close enough – analogous to ultrasound, sonar, or radar. The more devices that are connected to a given Wi-Fi network, the fuller the picture provided – height, shape, gestures, gait, hats, purses, and more. With a little help from machine learning, our bodies turn out to be uniquely identifiable, not unlike a fingerprint.
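
The recognition step can be pictured as nearest-neighbor matching on a “body fingerprint” vector. The sketch below uses entirely simulated numbers – the names, feature count, and noise level are illustrative assumptions, not figures from the KIT study:

```python
import math
import random

N_FEATURES = 64  # stand-in for per-subcarrier channel measurements

def body_signature(person_seed: int) -> list[float]:
    """Simulate the stable way one person's body perturbs Wi-Fi
    signals (an illustrative stand-in for real channel data)."""
    r = random.Random(person_seed)
    return [r.gauss(0, 1) for _ in range(N_FEATURES)]

def observe(signature: list[float], noise: float = 0.3) -> list[float]:
    """A noisy sniffed capture of the unencrypted feedback signal."""
    return [x + random.gauss(0, noise) for x in signature]

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "Fingerprints" previously captured near the router
people = {name: body_signature(seed)
          for name, seed in [("passerby_a", 1), ("passerby_b", 2), ("passerby_c", 3)]}

def identify(capture: list[float]) -> str:
    """Nearest-neighbor match: whose stored signature is closest?"""
    return min(people, key=lambda name: distance(capture, people[name]))

# The same person walking past again is re-identified despite the noise.
assert identify(observe(people["passerby_b"])) == "passerby_b"
```

Real systems would derive the feature vector from captured channel state information and use a trained model rather than raw Euclidean distance, but the re-identification logic is the same – and unlike a password, this “fingerprint” cannot be rotated.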

Are there easier ways to spy on us? Most certainly – CCTV, for example. But what Wi-Fi sensing lacks in ease it makes up for in reach. As technologies go, it’s practically everywhere that humans are. The vast majority of people don’t have CCTV cameras in their homes, but they (or their neighbors) are almost guaranteed to have Wi-Fi.

Wherever you’re reading this from, take a moment to see how many Wi-Fi networks your phone detects. If the KIT research proves correct, any one of them could be used to track your movements and determine your identity.


The Feds Have Your Number… And Your Location… And a Lot More

10/6/2025

 

“A day-in-the-life profile of individuals based on mined social media data.”

- Ellie Quinlan Houghtaling, The New Republic

You might think that where you go and with whom you meet is your private information. And it is. But now it’s also accessible to the government, with a federal agency purchasing software to track the location of your phone.

Joseph Cox of 404 Media reports that U.S. Immigration and Customs Enforcement (ICE) is buying an “all-in-one” surveillance tool from Penlink to “compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.”

That chilling quote is ICE’s own declaration. Apparently, acquiring Penlink’s proprietary tools is the only way to beat criminals at their own game.

ICE is not taking us down a slippery slope. It is going straight to the gully, discarding the Fourth Amendment’s prohibition against warrantless surveillance. From there, monitoring the movements of the general population is simply an act of political will. As with facial recognition software, notes the Independent’s Sean O’Grady, it is one more example of the “creeping ubiquity of various types of surveillance.”

Indeed, location is but one element of commercial telemetry data (CTD), the industry term for information acquired from cellphone networks, connected vehicles, websites, and more. PPSA readers know that banning the sale of CTD to government agencies is one goal of the bipartisan Fourth Amendment Is Not For Sale Act, which passed the House in the previous Congress.

Collecting and selling CTD is the shady business of the data broker industry, a practice the Federal Trade Commission once tried, meekly, to rein in. Indeed, for one brief shining moment, even ICE announced it would stop buying (but continue to use) CTD after the Department of Homeland Security’s own Inspector General found that DHS agencies weren’t giving privacy protections their due.

And yet here we are. As the Electronic Frontier Foundation’s Beryl Lipton recently put it in Forbes:

“This extension and expansion of ICE’s Penlink contract underlines the federal government’s enthusiasm for indiscriminate and warrantless data collection on as many people as possible. We’re still learning about the extent of the government’s growing surveillance apparatus, but tools like Penlink can absolutely assist ICE in turning law-abiding citizens and protestors into targets of the federal government.”

These tools are in the hands of ICE today, but they could be in the hands of the FBI, IRS, and other federal agencies in the blink of an eye. Congress should take note of this development when it debates reauthorization of a key surveillance authority – FISA Section 702 – next spring.


Heard on the Street? Our Voices, Apparently

10/6/2025

 

“Don’t eavesdrop on others – you may hear your servant curse you.”

- Ecclesiastes 7:21

Image via https://www.flocksafety.com/
Flock Safety is a frequent PPSA subject (this is our tenth article on the company). But instead of the company’s license-plate reader cameras, today’s discussion was inspired by Flock’s listening device, Raven.

According to Ben Miller of Government Technology, Raven was developed to detect gunshots and other crime-related noises, then activate nearby Flock Falcon cameras and alert authorities. Flock began marketing the Raven-Falcon combo to schools in 2023. The camera integration is meant to be Raven’s primary selling point, giving law enforcement immediate alerts about gunshots, breaking glass, screeching tires, and whatever it's programmed to listen for.

Funny thing – it can also listen for human voices.

Matthew Guariglia of the Electronic Frontier Foundation (EFF) reports that Flock has been touting Raven’s ability to detect screaming and other forms of vocal distress. The obvious implication, of course, is the product’s ability to “listen” and record human speech. Raven competitor ShotSpotter proved it could be done when its system recorded the words of a dying man in 2014.

Critics, meanwhile, challenge the notion that technologies like Raven and ShotSpotter are good listeners – or even solid policing strategy. ShotSpotter published its own study claiming nearly 97 percent accuracy, though that level required six well-placed (and expensive) sensors in a given area.

Public research tells a different story. Chicago’s Inspector General was highly critical of the technology, finding that “alerts rarely produce evidence of a gun-related crime.” Instead, its use increased stop-and-frisk tactics due to officers’ changed perceptions of the areas where the sensors were deployed. It was deemed not to be worth the $33 million the city had paid for the contract.

Northwestern University’s MacArthur Justice Center published the most comprehensive set of findings to date – claiming that “on an average day, ShotSpotter sends police into these communities [mostly of color] more than 61 times looking for gunfire in vain.” Meanwhile, a National Institute of Justice report last year essentially concluded the technology brought little in terms of meaningful impacts on policing and crime reduction.

And now Raven is joining the audio sensor party, which, as parties go, is turning out to be a veritable Fyre Festival of public safety based on the combined testimony of multiple watchdog groups. In addition to those noted above, the list of audio sensor detractors includes the ACLU, Surveillance Technology Oversight Project, and Electronic Privacy Information Center. We also recommend EFF’s summary of the entire audio sensor industry.

Yet law enforcement continues to hail these too-good-to-be-true, quick-fix “solutions” to public safety challenges, potentially wasting millions of taxpayer dollars and eschewing much-needed transparency. The boosterism continues, despite concerns raised by the communities this technology purports to protect.

Audio-sensing tech capable of being deployed at scale nearly completes the mass surveillance infrastructure needed to destroy our privacy once and for all. After all, it is not a great leap for government to go from listening for screams to eavesdropping on private conversations.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US PROTECT YOUR PRIVACY RIGHTS

The EU’s Plan Would Destroy Privacy Instead of Protecting Children

10/1/2025

 

“It’s About the Children Until It’s Not”

​Denmark is encouraging the EU to scan its citizens’ private messages in order to root out sexual predators. The Electronic Frontier Foundation explains that, if enacted, the Chat Control initiative would “undermine the privacy promises of end-to-end encrypted communication tools.” In other words, writes Yaël Ossowski for Euronews: “It’s about the children until it’s not.”

Lawmakers seem aware of how dangerous their own idea is: a leaked 2024 report revealed that multiple EU interior ministers sought carveouts for their own intelligence agencies, police, and military. Such exemptions “highlight the hypocrisy of lawmakers imposing surveillance they would not accept for themselves,” says Ethereum co-founder Vitalik Buterin.

Indeed, this is a curious exemption given that the ostensible purpose of the legislation is to fight online child sexual abuse. By that logic, no one should be exempt. It is an unfortunate, even tragic, reality that such initiatives claim the mantle of noble causes like abuse prevention or national security, but do little to actually advance them.

What such sweeping “safety” initiatives do instead is fundamentally erode the foundations of digital privacy (and, perhaps, privacy itself).

“You cannot make society secure by making people insecure,” Buterin declared on X. “Blanket interception of digital communication,” he says, is no substitute for common-sense approaches to child abuse, such as limiting the release of repeat offenders, raising public awareness, and fostering community engagement.

Undoing encryption (as the EU’s legislation demands) is the beginning of the end of digital privacy. It’s a misguided path, and one that the UK and others have stumbled down before. No goal, no matter how noble or well-intended, justifies the extinguishing of privacy. We must, writes Steve Loynes for Element, “learn from history” and remember that encryption backdoors are frequently the basis for exploitative attacks by bad actors.
​
Because bad actors, in any arena, were never going to follow the rules anyway.


The More We Fly, The More They Spy

9/23/2025

 

How Airlines Sell Our Travel Itineraries to the Government

​We previously wrote about the Airlines Reporting Corporation (ARC), which began as a humble transaction clearinghouse in the analog days of the 1980s but has since become a full-fledged data broker.

Among the ARC’s best customers is the U.S. government, whose appetite for its citizens’ personal data is matched only by its desire to avoid acquiring that data constitutionally. More specifically, government agencies use third-party data brokers like ARC to dodge obtaining search warrants based on probable cause – in stark defiance of the Fourth Amendment. 

New reporting from Joseph Cox at 404 Media sheds more light on the scale of ARC’s partnership with the federal government. FOIA requests paint a picture of near-total reach when it comes to tracking where and when we fly:

  • 270 airlines participate
  • 12,800 travel agencies provide data
  • Data includes passenger names, itineraries, and financial details

Cox’s ongoing coverage of this subject also reveals that the sale of traveler data isn’t a one-off or even occasional transaction. On a daily basis, ARC supplies passenger information to power TIP, the Traveler Intelligence Program. Despite the name, passengers’ IQs are probably the only piece of data not being sold.

We now know that buyers of that data include Customs and Border Protection. 404 Media also found that other customers include the ATF, the SEC, the TSA, the State Department, the U.S. Marshals Service, and the IRS.

Are the skies really overflowing with so much rampant criminality that the government is justified in spying on all passengers? Should the IRS have warrantless access to your travel itinerary?

“ARC's sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole,” Sen. Ron Wyden (D-OR) told 404.

When you last bought airline tickets, do you remember giving permission to have your itineraries and credit card information sold, either to the government or anyone else? Neither do we, nor any of the other five billion passengers whose records ARC has collected and made searchable.

“Governments,” wrote Jefferson, derive “their just powers from the consent of the governed.” Consent is inconvenient to authority, so it’s little wonder we were never asked. There’s nothing just, consensual, or constitutional about mass surveillance.
​
For the record, the Traveler Intelligence Program was ARC’s own idea, back in 2001. And of course, they knew exactly which doors to knock on.


Clearview AI: Giving the US Government A Clear View of Its Citizens

9/18/2025

 
​Clearview AI is raking in the cash with its facial recognition software, signing lucrative contracts that make all Americans easier targets for government surveillance. The latest award is a $10 million deal with the Department of Homeland Security (DHS) to support Immigration and Customs Enforcement (ICE) operations.

Clearview was previously fined more than $30 million by Dutch regulators for privacy violations related to data collection. It also settled privacy violation charges in the U.S. for tens of millions more. But none of that has stopped the company from becoming a favorite of law enforcement and government intelligence agencies in the United States. In fact, we’ve written about the dangers of facial recognition more times than we can count. Its continued popularity only proves that the federal government cares more about purchasing facial recognition software than regulating its use. As a result, states have had to step in and fill the regulatory gap.

The new ICE contract means that Clearview will be used to help identify individuals accused of assaulting its officers – a commendable goal. But the accumulation of Americans’ faces into a single database is an immense temptation for abuse in many other domains, including surveillance for political reasons.

You may applaud or deplore ICE’s new aggressiveness. The larger issue is what the government, or Clearview itself, will do down the road with the mass collection of Americans’ facial data. Our faces, along with the rest of our biometric data – and our privacy in general – remain for sale. Of course, we’re assuming that the software will actually recognize us rather than mistake us for someone else.

As spy tech goes, facial recognition can’t seem to win for losing.
​
It’s enough to make one yearn for the quaint times of Oscar Wilde, who once said, “I never forget a face, but in your case I will make an exception.”


The Wearable Revolution Will Be A Boon For Data Harvesters

9/15/2025

 

“There’s no federal law that is going to protect against these companies weaponizing this data.”

Prof. Alicia Jessop
​We recently reported that the popularity of wearables is eroding confidence in the idea that private, candid conversations will always remain private. Now Charlie McGill and The American Prospect report that HHS Secretary Robert F. Kennedy Jr. “wants a wearable on every American body.” They described this announcement as “curious” given that five years ago the Secretary himself blasted wearables and other smart devices as being about “surveillance, and harvesting data.”

That was then. A massive, government-funded pro-wearables ad campaign will soon promote Secretary Kennedy’s long-held view that eating right and exercising is superior to pharmaceutical remedies. He also wants HHS to popularize wearables: “You know the [sic] Ozempic is costing $1,300 a month, if you can achieve the same thing with an $80 wearable, it's a lot better for the American people.”

Persuading people to take better care of themselves is certainly a commendable goal for an HHS Secretary. But the security and privacy risks inherent to wearables are also a veritable bonanza for data brokers. On the Dark Web in 2021, healthcare data records were worth $250 each, compared to $5.40 for a payment card record. Just imagine what they’ll be worth in four years’ time if the HHS plan comes to fruition. Meanwhile, companies are lining up to cash in on the wearables boom that the department is promoting.

Companies that buy our data usually just want to target customers with ads and appeals. On a more sinister level, our health data derived from wearables – about as personal as information can be – will be sold by data brokers to about a dozen federal agencies, ranging from the FBI and the IRS to the Department of Homeland Security.

Health data from wearables will surely become part of a single, federal database of Americans’ information. “Techno-utopianism,” observes Natalia Mehlman Petrzela, “assumes more sophisticated technology always yields a better future.” Without constructing the requisite privacy guardrails for the data new technologies generate, quantifying ourselves on such an extreme scale may invite unwanted scrutiny.
​
Do we really want the FBI or the IRS to be able to warrantlessly access our deeply personal health issues? The wearables revolution, and the data it generates, is just another privacy violation that should prompt Congress to enforce the Fourth Amendment by forbidding the government from warrantlessly purchasing our most personal data.


When Police Profit From Protection

9/8/2025

 

“Ethics is knowing the difference between what you have a right to do and what is right to do.”

- Justice Potter Stewart
​Local police departments are spending billions of dollars on surveillance technology, from cameras, to cell-site simulators, to drones. Customers in blue range from the New York Police Department, which has invested $3 billion in surveillance in recent years, to small-town departments willing to fork out tens of thousands.

With so much money sloshing around, it is reasonable to wonder how careful local officials are in maintaining clear boundaries between customer and vendor. Events in Atlanta suggest that sometimes these boundaries are, at best, blurry.

Marshall Freeman is the Chief Administrative Officer of the Atlanta Police Department (APD) and a former leader at the non-profit Atlanta Police Foundation. Together, the Foundation and the APD devised Connect Atlanta, a camera network that makes Atlanta one of the most surveilled cities per capita in the United States.

The Atlanta Community Press Collective (ACPC) was combing through public records when they noticed Freeman’s name on a Conflict of Interest Disclosure Report. Citing “financial interest” in Axon, a law enforcement tech company, he recused himself from contract-related “matters and dealings” that could impact Axon financially. “I have interest in a company that is currently in talks with Axon around acquisition and investment,” he wrote, without specifics.

ACPC discerned that Freeman’s unnamed stake was in a company called Fusus, whose software fuels the Connect Atlanta surveillance system. Axon acquired it for $240 million barely a week after Freeman filed his disclosure. More red flags followed. Freeman was the only public official quoted in Axon’s press release announcing the acquisition: “I wholeheartedly encourage all agencies to embrace this cutting-edge technology and experience its transformative impact firsthand.”

Using open records requests, ACPC reports it also found emails indicating that Freeman “boosted Fusus and Axon products to other agencies in Georgia and around the U.S.” on multiple occasions post-disclosure. When the reporting first surfaced, APD responded tersely: “The appropriate ethics filings were submitted.”

A few weeks later, though, the City of Atlanta Ethics Office begged to differ, announcing an investigation into Freeman’s post-recusal behavior. Fifteen months later, the body released an official report totaling 313 pages. The findings suggest that Freeman’s relationship with the camera-pushing Fusus dated back to his days at the Atlanta Police Foundation, a relationship he brought with him to APD and continued to nurture. According to The Guardian, he consulted for Fusus for at least a year after joining APD, “crisscrossing the country in person and by email while repping the company, including conversations with police departments in Florida, Hawaii, California, Arizona and Ohio.”

All told, the Ethics Office found 15 separate matters in which Freeman used his official position as an influencer for Axon and Fusus. For at least part of this time, he served on the board of two Fusus subsidiaries in Virginia and Florida – a fact he did not disclose to ethics investigators. 

Writing in The Intercept, Timothy Pratt and Andrew Free detail how Freeman’s impropriety (the “appearance” of which is the only thing he’s admitted to) is making all of us less free – taking the Great Atlanta Mass Surveillance Experiment and replicating it from sea to monitored sea: Seattle, Sacramento, New York City, Omaha, Birmingham, Springfield, Savannah, and counting.

Freeman may be an exception. But he might be the rule. It doesn’t matter, given the outsized influence even one public official can have when it comes to the proliferation of the police surveillance dragnet in the United States. Then again, by the time robust surveillance systems get to smaller, heartland cities like Lawrence, Kansas, it may already be too late.
​
At the very least, police procurement processes would benefit from tighter rules, like those that govern Pentagon officials when they assess contracts.


Home Security and the Rise of Surveillance Art

9/4/2025

 
PHOTO: www.swiftcreatives.com
We often speak of surveillance technology. Now we have surveillance art, modernist sculptures that watch you back whenever you admire them.

We’re a bit forgiving when the technology is used as a form of home security, since it is defensive in nature rather than invasive (and mass in scale). But the melding of art and surveillance is a trend that ought to give anyone pause.

Alyn Griffiths of Dezeen reports on Sculptural Surveillance by Danish studio Swift Creatives. Marketed to homeowners, the designs are bendable silly straws that can be customized into landscape art. Slender, looping, and brightly colored, they are meant to be noticed.

In the words of Swift Creatives co-founder Carsten Eriksen, “Our concept for this collection aims to challenge the conventional notions of home surveillance, transforming functional devices into objects of beauty that homeowners can proudly display.” They are, he says, “aimed to stand out.”

Driven by residents/owners themselves, such approaches to home security are respectful and a far cry from the techniques high-tech burglars have been using. They also represent a far safer choice than Chinese-made junk products masquerading as security devices – which can be used to watch their owners instead of the other way around.
​
This has the feeling of a beginning of a trend. Perhaps the next time you get the creepy feeling that the eyes in a painting or a Rodin sculpture are following you, you might be right.


Note to Protestors: Turn Off Your Wi-Fi

9/4/2025

 
Philip K. Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now comes something simpler – and just as insidious, from Australia. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This was after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral.

Despite the students’ misbehavior, the state of Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the 2014 Privacy and Data Protection Act. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports:

“Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience.

To its credit, the university cooperated with the investigation and promised reforms.

But let’s be fair: the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go.

Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof.

What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home.
​

With the possible exception of pacemakers, of course.


Watching the Watchers: On Its Own, AI Isn’t Watching, Or Thinking

9/2/2025

 
Image: Citizen website.
Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:

  1. AI isn’t a substitute for people.
  2. AI isn’t a substitute for people.
  3. AI isn’t… well, you get the picture.

As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong?

Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:

  • Mistranslating “motor vehicle accident” as “murder vehicle accident.”
 
  • Misinterpreting addresses.
 
  • Publishing incorrect locations.
 
  • Adding gory or sensitive details that violate Citizen’s guidelines.
 
  • Sending notifications about police officers spotting a stolen vehicle or homicide suspect, potentially putting operations at risk.
 
  • Writing alerts as if officers had already arrived on the scene, when in fact the dispatcher was only providing supplemental information while officers were en route.
 
  • Duplicating incidents, failing to recognize that two pieces of dispatch audio are related to the same event. This was especially common with police chases, where dispatch continually provided new addresses. The “AI would just go nuts and enter something at every address it would get and we would sometimes have 5-10 incidents clustered on the app that all pertain to the same thing,” one source said.
 
  • Omitting important details, such as whether a person was armed with a weapon.
​
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated.
 
By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.


Data Privacy Laws Sweeping the States

9/2/2025

 

Will Congress Follow Montana by Closing the Data Broker Loophole?

Twenty states have enacted major consumer data privacy laws. When will Washington, D.C., wake up and restrict the open season on Americans’ personal information at the federal level?

California lit the fuse in 2018, passing laws that set limits on how businesses collect and sell consumers’ data. This year, new privacy laws have taken effect, or soon will, in New Hampshire, Delaware, Iowa, Nebraska, New Jersey, Tennessee, Minnesota, and Maryland.

Montana may offer the best model for federal action. The Montana Consumer Data Privacy Act, which went into effect late last year, mirrors many other state laws, while giving strong, clear rights. In Montana, consumers have the right:

  • To opt out of data sales, targeted ads, or profiling that drives automated legal decisions.
 
  • To know if a data “controller” is processing their personal information and to access that data.
 
  • To correct errors.
 
  • To demand deletion of personal data.
 
  • To exercise these rights without retaliation.

Like many other states, the Montana law also adds special protections for minors, requiring consent for data sales and targeted ads to children aged 13 to 16.  

But where Montana truly shines is by closing the notorious “data broker loophole.” That loophole lets government agencies dodge the Fourth Amendment’s warrant requirement by simply buying consumers’ data.

Montana now flatly bars law enforcement from purchasing sensitive electronic data – such as electronic communications metadata and precise geolocation information – without a warrant.

The federal government has no such restraint. Agencies from the FBI and IRS to the Department of Homeland Security, and the Department of Defense, routinely buy and access Americans’ sensitive personal data. Government lawyers insist this is fine because we all “agree” through terms of service – though almost no one reads them, and they never warn consumers that third-party data brokers might be selling their data to the FBI.
​
As more states pioneer privacy laws, the pressure builds. An intense debate on the data-broker loophole in Congress is inevitable. Lawmakers would do well to take a cue from one of Montana’s favorite sons, Gary Cooper, who said: “One nice thing about silence is that it can’t be repeated.”


“Wearables” – A Euphemism for “Spy Tech”

8/26/2025

 
“I don’t think you can make it off the record once you’ve said it – you can’t call dibs after the fact.”

​- Journalist Philip Corbett
Wearables are defined by their comfort. But there is a lot about wearable technology that is distinctly uncomfortable, if not Orwellian.

Wearable computers hit the mainstream with the introduction of Fitbits and smartwatches in the 2010s. Now, says The San Francisco Standard, the rise of artificial intelligence is adding spy tech to the wearable computing family tree. The newest devices are akin to smartglasses but take that technology’s most invasive feature – recording the environment – and turn the creep factor up to 11. The new wearables are stylish and somewhat stealthy and designed to do two things very well: listen and remember.

They come in the form of pendants, necklaces, lapel pins – or, in a twist, might even look like a Fitbit or smartwatch. But they are all recording devices capable of capturing the wearer’s every conversation and meeting, then transcribing them, and – the pièce de résistance – using AI to organize, analyze, and mine them for insights (think personal assistant on steroids, or maybe your very own opposition researcher). In some cases, the devices may only transcribe conversations rather than record them, but they’re still listening and processing conversations, so such distinctions are hardly comforting.

The San Francisco Standard suggests that everyone in Silicon Valley should assume that everything they say, especially at work, is being recorded. Which means the rest of America – and its kitchen tables, coffee houses, and classrooms – won’t be far behind.

One venture capital partner told the Standard’s writers that she knew a fellow VC who records all in-person meetings “without telling the other meeting participants. It’s an invasion of privacy and I seriously disapprove of it.” Then, presumably referring to herself and the rest of us would-be audience members, she added, “Of course, this is a horrible way to live your life.”

In terms of the privacy concerns raised by this new generation of wearables, Julian Chokkattu of Wired cracked the code. Earlier generations of recording devices and software “at least required active engagement like a tap or a wake word to activate their ability to eavesdrop.” For the most part, the new devices are passive and always on, which places responsibility for gaining consent on the instigator. In other words, “Fox, meet henhouse.”

In the research, there are lots of names for the chilling effects that even consensual recording has on conversations, but one of the keenest is “spiral of silence.” People will varnish the truth, if they bother to speak it at all. They will hold back, self-censor, even shut down. As for the possible effects on creativity that this sort of tech might have – as in a brainstorming session, for example – we invite you to judge for yourself.

If you think all of this seems like a claim just waiting for a plaintiff, we agree: It’s a one-way express ticket to litigation city. But as with most things AI, the laws governing them are in their infancy and court rulings sparse. One corner of Silicon Valley is already fighting back though: Confident Security is developing Don’t Record Me, a browser plugin that could potentially detect illicit recordings and disrupt them.

What about audible cues or flashing lights to indicate that one of these devices is collecting data? Don’t count on it. One entrepreneur told Wired, in effect, “That would drain too much battery life.” Another claims that all you have to do is think about recording to activate his product. Thankfully, for that mode to work, the wearable has to be affixed to the side of your temple with medical tape.
​
But don’t expect other forms of personal surveillance to be so obvious. All the more reason for requiring disclosure for private recording and warrants when government agents listen in on what we say.


TikTok’s Stalkerware Ads

8/25/2025

 
“I think the very word stalking implies that you're not supposed to like it. Otherwise, it would be called 'fluffy harmless observation time'.”

Author Molly Harper
​TikTok was already a privacy nightmare:

  • The EU fined it $600 million for breaching data privacy rules.
  • An FCC commissioner asked Apple and Google to remove the app from their stores because of mounting evidence that China had access to all user data.
  • The FBI opened an investigation into alleged use of the app to track American journalists.

To this troubling list we can now add the following: In violation of the platform’s own policies, sellers are using TikTok to market GPS trackers to stalkers, reports Rosie Thomas of 404 Media.

“Unlike AirTags,” one vendor boasts, “this thing doesn’t make a sound, doesn’t send alerts, she will never know it’s there.” In the comments section of a similar ad, one user bragged, “I bought some and put it on cars of girls I find attractive at the gym.”

Lest there be any doubt, Thomas’ report quotes Eva Galperin at the Electronic Frontier Foundation: “This is absolutely being framed as a tool of abuse.” Galperin, co-founder of a non-profit that keeps tabs on such products, categorizes these products broadly as “stalkerware.”

The central legal and moral issue underlying stalking, as with all violations of privacy, is consent. Expert Market’s page summarizing GPS tracking laws by state underscores the point: The word “consent” appears in these laws 115 times.

When asked about the viral proliferation of ads for these tracker tools, TikTok told 404 Media that they “prohibit the sale of concealed video or audio recording devices on our platform.” And yet, Thomas and her colleagues continued to find such ads every time they looked.
​
Which, of course, should come as a surprise to absolutely no one. This is just one more good reason why President Trump should cease suspending the law requiring TikTok to be sold or shuttered.


Stop Letting Hackers Win: Pass the Lummis-Wyden Cybersecurity Amendment

8/25/2025

 
America’s enemies aren’t storming our shores with tanks and planes – they’re breaking into our email, phone, and data systems. And right now, we’re making their job too easy.
 
The U.S. Senate can toughen up America’s defenses by passing the Lummis-Wyden amendment (S. Amdt. 3186) to the 2026 National Defense Authorization Act. This bipartisan fix would finally force the Pentagon to use secure, encrypted communications – and end its costly dependence on a handful of Big Tech vendors.
 
The Scale of Attacks
 
In 2023, Chinese hackers broke into Microsoft-hosted government email accounts, stealing 60,000 messages from the State Department alone. A year later, another Beijing-backed group hacked into AT&T and Verizon, tapping phones of Americans that included presidential candidate Donald Trump and then-Sen. J.D. Vance.
 
But Vance’s conversations were kept safe. How? He relied on Signal, the end-to-end encrypted app that even the hackers couldn’t crack.
 
The obvious takeaway is that without end-to-end encryption, our most sensitive communications are one hack away from the front page of Beijing’s intelligence briefings.
 
The Lummis-Wyden Fixes
 
  • Mandates encryption. The Pentagon would be required to use secure, end-to-end encrypted systems whenever possible.
 
  • Ends vendor lock-in. No more being trapped inside Microsoft Teams or Google Docs. Interoperability will be the law, so new and better tools can compete.
 
  • Saves money and boosts innovation. Opening the market to smaller, nimbler companies means lower costs and stronger security.
 
Why It Matters

Our military today is stuck in walled gardens built by giant tech firms that all too often proved eminently hackable. That’s bad for taxpayers and disastrous for national security. Hackers don’t need to break into every office at the Pentagon – they just need to knock down the door of one weak provider. The Lummis-Wyden amendment puts a lock on those doors.
 
Congress Must Choose Security
 
Congress can keep letting foreign spies read Cabinet-level emails and tap presidential phone calls, or it can finally demand that the Pentagon use the best tools available. This amendment is a wake-up call that we can’t defend the country with outdated software. Encryption and competition would at least give our country a fighting chance to keep China and other bad actors out of our business.
 
PPSA calls on the Senate to pass the Lummis-Wyden Amendment to stop giving hackers the upper hand. This measure will better protect our service members, the American homeland, and the private deliberations of our leaders.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US PROTECT YOUR PRIVACY RIGHTS

Flock Appears to be Combining Driver Surveillance with Personal Data

8/19/2025

 
​Where you drive is personal. So is what you click on and who you communicate with. Combine the two, and suddenly a revealing picture emerges of your political, romantic, financial, and religious beliefs and activities – in short, a comprehensive dossier of your private life.
 
That appears to be what is happening with Flock, which is mashing up its camera surveillance of millions of drivers in 5,000 communities across the United States with digital information gathered on us by data brokers.
 
According to 404 Media, the good news is that after internal deliberations, Flock told its employees in May it would not merge stolen dark web data with information from its network of license plate readers (LPRs). Joseph Cox of 404 Media reported that in a meeting, a Flock supervisor told employees that after a “policy review process,” the company’s new search tool Nova would not incorporate hacked data from the dark web.
 
So far, so good. Dealing in stolen merchandise is never a good look for a company. Flock, however, announced that it will combine “public records data, Open Source intelligence, and license plate reader data” for law enforcement and other customers.
 
This marks a policy shift. Flock has long insisted that its license plate readers do not collect personally identifiable information, claiming they merely provide law enforcement with a way to track cars tied to crimes. But Jay Stanley of the ACLU reports that the company now plans to plug its systems into commercial data brokers offering “people lookup” services.
 
ACLU’s Stanley writes:
 
“In the 1970s, after some government agencies were found to be building dossiers on people who aren’t suspected of involvement in crime like the East German Stasi, Congress enacted the Privacy Act banning agencies from such recordkeeping. Yet the ethically shady and frequently inaccurate data broker industry does basically the same thing, and when law enforcement becomes a customer of those data brokers, it represents an end-run around the law. By tying its LPR data together with data brokers, Flock is effectively automating and scaling the end run around our checks and balances that law enforcement data broker purchases represent …
 
“Imagine that a police officer stood on your street writing detailed notes about you every time you drove or walked by them. All the details about what your car looks like (make, model, color, distinguishing characteristics, bumper stickers, etc.), as well as details about visible occupants and pedestrians – how many, at what time, their activities, demographic data, what they are wearing, attributes they may have such as a beard, hat, tattoo, or T-shirt, and what that hat, T-shirt, or tattoo might say. Now imagine that there is an army of police officers doing this on every block.”
 
Thus, algorithms can now seek patterns in vehicle movements to identify and alert law enforcement to drivers who are “suspect.” Stanley pinpoints why this approach clashes with both the letter and the spirit of the Fourth Amendment: there is, he writes, a big difference between “providing tools for officials to use in investigating suspicion” and “generating suspicion.”
 
The fusion of your purchased data with your movements could do exactly that. One day, something as ordinary as making a right on red or a casual U-turn could transform you from a routine driver into a suspect.
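The mechanics of "generating suspicion" are not mysterious. A minimal sketch, with entirely invented data and labels, shows how a few plate reads cross-referenced against analyst-defined "sensitive" locations could flag an ordinary driver (no real Flock API or data is depicted here):

```python
from collections import defaultdict

# Hypothetical analyst-defined watch categories - invented for illustration.
SENSITIVE = {"clinic", "mosque", "gun_range"}

def flag_plates(reads, threshold=2):
    """Flag any plate seen at `threshold` or more distinct sensitive spots.

    reads: list of (plate, location_label) tuples from camera hits.
    """
    hits = defaultdict(set)
    for plate, location in reads:
        if location in SENSITIVE:
            hits[plate].add(location)
    return {plate for plate, spots in hits.items() if len(spots) >= threshold}

reads = [("ABC123", "mall"), ("ABC123", "clinic"),
         ("ABC123", "mosque"), ("XYZ789", "mall")]
flagged = flag_plates(reads)  # → {"ABC123"}
```

A dozen lines suffice to turn location history into a suspect list, which is precisely why scale and data-broker fusion, not technical sophistication, are the real concern.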


Mind Reading Is No Longer Sci-Fi

8/18/2025

 
​Larry Niven, the acclaimed science-fiction writer, once drolly observed, “I do suspect that privacy was a passing fad.”

It certainly seems so today, with networked Ring cameras on every door linked to public and private CCTV, license plate readers, and government agencies buying up our digital lives from data brokers… all of it potentially connected to AI and facial recognition software.

Even inside our home, drones can look through our windows. Thermal imaging cameras in the hands of police can penetrate walls to watch us move around in our living rooms and bedrooms.

But at least there is one place where surveillance cannot penetrate, one last refuge of absolute, inviolable privacy – the inside of our skulls. We are free to think any thought, sacred or profane, sublime or silly, without fear of detection or punishment by any human authority. 

But maybe not for much longer.

The science journal Cell reports that a computer system has been trained to decode brain waves from people who silently move their mouths while mentally sounding the words to themselves. The signal from the brain is then translated into speech in real time on a computer screen with an error rate of 26 percent to 54 percent.
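Error rates like these are typically measured as word error rate: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded text into what the subject actually intended, divided by the intended word count. A standard sketch of the metric (the Cell study's exact scoring method is not specified here):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # Levenshtein edit distance computed over words, normalized by
    # reference length - the standard WER definition.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus one deletion across four intended words = 50% WER.
wer = word_error_rate("the quick brown fox", "the quack brown")  # → 0.5
```

By this yardstick, a 26 to 54 percent error rate means the decoder already gets roughly half to three-quarters of the intended words right.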

Annika Inampudi in Science reports that this technology, as it is refined, will be a godsend to speech-impaired people paralyzed by strokes or neurological conditions such as amyotrophic lateral sclerosis (ALS). To protect test subjects from blurting out private, inner speech, users can be given unique, nonsense phrases like “chitty chitty bang bang” to cue the device to read their thoughts only when they want it to.

It is this latter development that gives us pause: a safeword is needed precisely because unwanted exposure of thought is possible. More troubling still, scientists have had significant success decoding thought even when the subject is not silently mouthing the words he or she is thinking about. The system at times can read mere inner thoughts.

At a time when digital technology evolves on fast-forward, it is not too early to be concerned about how this technology might be abused. After all, a few years ago AI couldn’t pass the Turing test. Now ChatGPT regularly writes entertaining short stories, poems with striking imagery, and student papers that earn A’s from naïve professors. The same rapid progression could soon allow authorities to dip into people’s skulls against their will.

Imagine, for example, how this technology might be used in interrogations.

In this country, at least, the Fifth Amendment prohibition against self-incrimination should make results from such mind-readings inadmissible. But in professions in which polygraphs are routine, from law enforcement to intelligence and some retail positions, it is easy to imagine how such technology could be abused.

Overall, this speech-decoding technology is a boon for people with disabilities who are desperate to communicate. It is heartening and praiseworthy that scientists – often caricatured as amoral agents of progress – are diligently designing procedures that confine the reading of thoughts to moments when subjects permit it.
​
Still, this story should give us pause. Something to think about…

