Project for Privacy and Surveillance Accountability (PPSA)
  • Issues
  • Solutions
  • SCORECARD
    • Congressional Scorecard Rubric
  • News
  • About
  • TAKE ACTION
    • Section 702 Reform
    • PRESS Act
    • DONATE

 NEWS & UPDATES

Watching the Watchers: AI & Cybercrime Are a Match Made In Hell

12/8/2025

 
Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations.
But not anymore. Now, they write:

“Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.”

Here's what else their snapshot revealed:

  • Financial systems seem especially vulnerable, but the threat isn’t limited to banks. It potentially affects any entity with customer accounts, from hospitals to water plants to retailers.

  • “Crimes can now hit millions at once with voice clones and account takeovers, while local agencies are trained and funded to chase one case at a time.”

  • AI can commit crimes humans aren’t capable of: “AI can create automations to ‘lock pick’ into a system millions of times per second, something humans can't do.”

  • Almost anything can be disabled in such attacks: a Port of Seattle attack “disabled airport kiosks, baggage systems and Wi-Fi, while exposing data for roughly 90,000 people.” Speaking of Seattle, the Seattle Public Library “suffered a ransomware attack that wiped out its catalog, computers, Wi-Fi and e-books.” Full recovery cost Seattle a million dollars and took three months.

  • The Chinese government is all-in: “State-backed hackers used AI tools from Anthropic to automate breaches of major companies and foreign governments during a September cyber campaign.” That attack marks a particularly dark turn, since the level of human involvement required was minimal thanks to AI’s assistance.

  • More crimes are happening: “Generative AI has increased the speed and scale of synthetic-identity fraud,” especially where real-time payment systems are involved.
  • And they are happening faster: “A deepfake attack occurred every five minutes globally in 2024, while digital-document forgeries jumped 244% year-over-year.”

When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

ShadyPanda: How China’s Browser Extensions Can Browse Your Data

12/8/2025

 
​Security consulting firm Koi recently published an exposé about a new online privacy threat, one with the unforgettable name of “ShadyPanda.” The scheme allowed browser extensions to infect 4.3 million Chrome and Edge users. In this case, “infect” means sit there quietly, take control whenever it wants, then pretty much do whatever it pleases, including:
  • Stealing anything stored in browsers (search history, passwords, credit card numbers)
 
  • Intercepting keystrokes and capturing screenshots
 
  • Hijacking web traffic, redirecting you to fake logins
 
  • Injecting malicious software/scripts (to add even more capabilities to spy on you and rip you off)
 
  • Impersonating users (using stolen cookies and passwords to commit theft)

ShadyPanda’s extensions often worked legitimately for years before being activated and turned into full-blown spyware – making it an especially effective tool for keeping tabs on businesses.

Some of the extensions were simple wallpaper galleries or productivity tools, and many had been marked as “trusted” or “verified” by the marketplaces that hosted them.

One of the key vulnerabilities this research exposed was the whole “trust and verify” approach. Once approved by various marketplaces, extensions were never re-verified. And because most users opt for “auto-updating,” the extensions could continue to build up a large user base and then be activated as spy tools when needed. Koi reports:

“Chrome and Edge's trusted update pipeline silently delivered malware to users. No phishing. No social engineering. Just trusted extensions with quiet version bumps that turned productivity tools into surveillance platforms.”

And where is all that collected data going? To surveillance-obsessed China, of course.

Worried that you might be infected? Check out The Hacker News’ partial list of the culprits. Infosecurity Magazine recommends you also check your browser extensions and remove anything you don’t recognize or no longer use.

And turn off auto-updating while you’re at it.
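For readers who want to act on that advice, here is a short Python sketch of a do-it-yourself extension audit. It walks a Chromium profile’s Extensions folder and prints each extension’s declared name and version from its manifest.json, so you can eyeball the list for anything unfamiliar. The `list_extensions` helper is our own illustration, and the profile path shown is an assumption (Chrome’s Linux default); it will differ on Windows, macOS, or Edge.

```python
# Hypothetical helper for auditing installed extensions: walk a Chromium
# profile's Extensions directory and report each extension's declared name
# and version from its manifest.json.
import json
from pathlib import Path


def list_extensions(extensions_dir: Path) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every manifest found under extensions_dir."""
    found = []
    # Chromium lays extensions out as <extension-id>/<version>/manifest.json.
    for manifest in sorted(extensions_dir.glob("*/*/manifest.json")):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        name = data.get("name", "(unnamed)")
        # Localized names look like "__MSG_appName__"; flag them for manual review.
        if name.startswith("__MSG_"):
            name = f"(localized: {name})"
        found.append((name, data.get("version", "?")))
    return found


if __name__ == "__main__":
    # Assumed path: Chrome's default profile on Linux. Adjust for your system.
    profile = Path.home() / ".config/google-chrome/Default/Extensions"
    if profile.is_dir():
        for name, version in list_extensions(profile):
            print(f"{name}  v{version}")
```

Anything you don’t recognize in the output is a candidate for removal – and remember that a clean name today says nothing about what a future “quiet version bump” might deliver.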

It is a dispiriting truth of modern life that we are – and likely always will be – in a footrace against hackers and thieves, whose tools will grow even more dangerous as AI evolves. But we don’t have to be helpless. We can take satisfaction in knowing that by embracing best practices, we can stay a step ahead and leave the ShadyPandas of the world empty-handed.


Sex Talk from Children’s AI-Enabled Teddy Bears

12/5/2025

 
​If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say. How long such data is kept – and how it might be used when children become adults – is anyone’s guess.

Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots.) Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year.

Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:

  • Offered advice on where to find matches, knives, and pills
 
  • Provided tips on how to be a good kisser
 
  • Asked follow-up questions about sexual preferences
 
  • Seemed dismayed when users said they had to leave
 
  • Found ways to actively discourage users from leaving
 
  • Listened continuously and joined a nearby conversation

And as that last bullet suggests, don’t even think about privacy:

“These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” warn the researchers. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable.

Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko).

Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more.

Of course, it’s now a given that every AI toy tested framed itself as one’s buddy or even best friend. The stakes could hardly be higher: For the youngest children, the presence of AI-based toys introduces a massive unknown into a critical window for development.

For now at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality.
Being a parent or caregiver was already hard enough. Now thanks to generative AI and the mad rush to reinvigorate a market (children’s toys) that had long been stagnant, gift-giving is turning out to be almost as fraught as parenting itself.


You May Already Be A Porn Star (And Not Even Know It)

12/2/2025

 
​Sometimes the best defense against privacy violations is as simple as choosing a good password.

Such was the case in South Korea, where officials recently arrested multiple suspects accused of hacking into private surveillance cameras and capturing footage as pornography for voyeurs. The 120,000 cameras were inherently hackable because they are, after all, internet devices. But users made it all the easier by choosing exceptionally weak passwords.

It's uncertain just how explicit the footage was (sourced from homes, Pilates studios, and even a women’s health clinic). Some of it was sold on overseas platforms that appear to cater to sexually exploitative content.

Pro tip: “11111” and “12345” are terrible passwords, as are any other repeating or sequential numbers. And this maxim is especially relevant when dealing with devices that are internet-connected. Yet from Zoomers to octogenarians, the password problem remains, as The Register’s Connor Jones reports, “prevalent and dangerous as ever.”

Case in point: the recent news that the password for the ransacked Louvre’s CCTV system was “Louvre.”
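To put the pro tip in concrete terms, here’s a minimal Python sketch of the red flags above – repeated digits, sequential digits, and passwords that simply reuse a venue’s name, à la “Louvre.” The `reject_reasons` helper and its 12-character floor are our own illustrative choices, not a complete strength checker.

```python
# Illustrative password red-flag checker covering the patterns discussed
# above. A sketch only -- not a substitute for a real strength library.

def reject_reasons(password: str, site_name: str = "") -> list[str]:
    """Return the reasons a password should be rejected (empty list = none found)."""
    reasons = []
    if len(password) < 12:
        reasons.append("too short (under 12 characters)")
    if password and len(set(password)) == 1:
        reasons.append("single repeated character")  # e.g., "11111"
    if password.isdigit() and len(password) >= 2:
        digits = [int(c) for c in password]
        steps = {b - a for a, b in zip(digits, digits[1:])}
        if steps == {1} or steps == {-1}:
            reasons.append("sequential digits")  # e.g., "12345" or "54321"
    if site_name and site_name.lower() in password.lower():
        reasons.append("contains the site or venue name")  # e.g., "Louvre"
    return reasons


print(reject_reasons("12345"))
print(reject_reasons("Louvre", site_name="Louvre"))
print(reject_reasons("x9!Tq#rLm2&Vw"))  # a random passphrase passes: []
```

Real attackers, of course, run far more sophisticated dictionaries than this – which is exactly why unique, generated passwords (and a password manager to hold them) beat anything you’d invent for a camera login.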

So clearly the vulnerability of camera systems is a problem that goes beyond South Korea and this particular (ab)use case. In June, security researchers found that they could access tens of thousands of internet-connected cameras worldwide (35 percent of which were in the United States). Vulnerable systems were everywhere, not just in homes: retail sites, construction zones, hotels – you name it. By studying the feeds, researchers noted, bad actors can find a treasure trove of useful information – from poorly lit spots to unguarded doors to times when no one’s around.

Somewhere out there is a black market for anything a “security” camera might capture. So think twice about even having Internet-connected cameras (CCTVs that record directly to local devices are a better alternative). If you must be connected, however, then at least up your password game.
Finally, if you’ve installed connected cameras, try not to forget where they are five years hence on some enchanted evening.


Utah’s Respect for Personal Identity Puts ICE Arrogance to Shame

12/1/2025

 
​When your identity is confirmed by a string of numbers in a computer, are you still yourself if the algorithm determines you (the person) are not you (the digital ID)?

One state, Utah, is leading the nation in answering this question with policies that safeguard humans, while Washington, D.C. is heading down the path of reducing humans to algorithms.
Consider the ACLU’s Jay Stanley, who praised Utah for its “State-Endorsed Digital Identity” (SEDI), the state’s new framework for digital ID systems. In an approach that should be the norm rather than the notable exception, the Beehive State puts privacy first.

Utah begins with the conviction that identity “is not something bestowed by the state, but that inherently belongs to the individual; the state merely ‘endorses’ a person’s ID.” In other words, our identities belong to us. We are born with them. We own them. With that realization comes new-found respect for privacy and other forms of personal freedom. 

This view of identity stands in sharp contrast to the definition Stanley found in the data-driven world of federal law enforcement. With the feds, identity is becoming something only the state can grant, defaulting to incomplete or faulty digital verification of citizenship.

To be clear, both Utah’s SEDI platform and the federal approach utilize digital ID systems, but one is a case study in digital due diligence while the other illustrates the dangers of slapdash digital recklessness. The federal system is based on incomplete databases, poorly designed architecture, evolving (meaning, far from perfect) technology, and an utter disregard for the constitutional rights of individuals.

Utah’s approach differs from the federal approach in very important ways:
  1. Being “user-centric” to ensure that government identification systems are used to empower individuals, not control them.
  2. Being free from surveillance, visibility, tracking, or monitoring by any entity – including private companies and unauthorized government agencies and staff – other than the party that is solely authorized to check the ID.  
  3. Making factors like security, completeness, and accuracy a top priority, in contrast to the unreliability of the facial recognition technology that underlies many of today’s digital verification systems.
  4. Enforcing a user’s “right to paper” (or plastic), including continued and unfettered access to essential government services, even when using only non-digital, physical ID methods.  
  5. Adhering strictly to constitutional rights, particularly Fourth Amendment protections against warrantless searches and dragnet-style fishing expeditions conducted without probable cause.

Stanley goes on to quote the Ranking Member of the House Homeland Security Committee, who reports that an app (called Mobile Fortify) used by Immigration and Customs Enforcement (ICE) now constitutes “definitive” determination of a person’s status “and that an ICE officer may ignore evidence of American citizenship – including a birth certificate.”

That’s bad enough on its own of course, but along the way, the government now sweeps up Americans’ biometric identifiers en masse. The databases Mobile Fortify accesses contain not only our photographs but enough records to constitute a permanent digital dossier.

Congress did not get to review, much less approve, any of this. The American people never voted on it. In fact, the whole thing leaves us wondering what happened to the Privacy Act, signed into law by President Ford in 1974. It has been described as “the American Bill of Rights on data.”  

By declaring that identity is solely digital, determined by stealthy algorithms and policies, and deniable to those whose data is non-existent, incomplete or inaccurate, the federal standard – in sharp contrast to Utah’s – subverts 250 years of traditional, constitutional practice. Remember: Our founders built the world’s most vibrant democracy on pieces of parchment copied by hand.

In any truly free society, identities are personal possessions – they secure individual rights and let people participate voluntarily in society. Identities bestowed by the state ultimately serve only the state.

That we even need to ponder the nature of identity reveals the absurdity of these abuses of our personhood and privacy. Nevertheless, here we are. Without transparent conversations and healthy debate, we face a future in which we are whomever the state says we are, made of malleable 0s and 1s, with nothing grounded in the physical world.
It's a discussion that, as of now, Utah alone seems committed to having.


The Double-Edged Sword Wrapped in Eric Swalwell’s Privacy Lawsuit Against Housing Chief Bill Pulte

12/1/2025

 
U.S. Congressman Eric Swalwell speaking with attendees at the 2019 California Democratic Party State Convention at the George R. Moscone Convention Center in San Francisco, California. PHOTO CREDIT: Gage Skidmore
​Those who live by surveillance cry by surveillance.
 
We wonder how many times politicians on both sides of the aisle will have to get slammed by the very government spying practices they’ve supported before this lesson sinks in.
 
Case in point: Rep. Eric Swalwell (D-CA). Last week, he filed a lawsuit against Bill Pulte, President Trump’s director of the Federal Housing Finance Agency, for accessing and leaking private mortgage records in retaliation for political speech.
 
Pulte has issued criminal referrals to the Department of Justice (DOJ) against Swalwell, New York Attorney General Letitia James, Sen. Adam Schiff (D-CA), and Federal Reserve Governor Lisa Cook on the basis of alleged mortgage fraud. A federal judge dismissed the charges against James, while President Trump used the allegation against Cook to fire her from the Federal Reserve Board (she remains in her job while the Supreme Court reviews the case).
 
Rep. Swalwell’s lawsuit makes an important point:
 
“Pulte’s brazen practice of obtaining confidential mortgage records from Fannie Mae and/or Freddie Mac and then using them as a basis for referring individual homeowners to DOJ for prosecution is unprecedented and unlawful.”
 
We cannot think of any prior use of private mortgage applications to harass political opponents (at least one of them, James, is arguably guilty of using lawfare herself to harass Donald Trump).
 
Pulte’s actions appear to be a flagrant violation of the Privacy Act of 1974, which governs how the government can and cannot handle Americans’ private information. The law, as Swalwell notes, “explicitly forbids federal agencies from disclosing – or even transmitting to other agencies – sensitive information about any individual for any purpose not explicitly authorized by law.”
 
Congress passed the Privacy Act to prevent the creation of a federal database that would compile comprehensive dossiers on every American, something we’ve warned is now being attempted. The law specifically forbids agencies from freely sharing Americans’ confidential data gathered for one purpose (such as IRS tax collection) for another purpose (an FBI investigation). Agencies must issue a written request justifying any such information sharing.
 
Pulte is anything but transparent.
 
“I’m not going to explain our sources and methods, where we get tips from, who are whistleblowers,” Pulte told the media. This mindset is in keeping with the corrupting spread of the intelligence-surveillance state’s playbook. Today, it is the federal housing agency. We shouldn’t be surprised if tomorrow such “sources and methods” thinking trickles down to federal poultry inspections.
 
Meanwhile, we remain dry-eyed over Rep. Swalwell’s plight.
 
As a member of the House Judiciary Committee, Swalwell argued against – and voted against – the Protect Liberty and End Warrantless Surveillance Act. This bill would have reformed Section 702 of the Foreign Intelligence Surveillance Act by requiring a warrant before the government could access U.S. citizens’ data collected through programs enacted to surveil foreign threats on foreign soil.
 
The Protect Liberty Act would have ended the government practice of using a foreign database to conduct “backdoor searches” on Americans… not unlike, say, a regulatory agency pulling a political opponent’s private mortgage application. The principle of mutually assured payback is something to keep in mind when lawmakers again debate the provisions of Section 702 in April.


California Court Slams Sacramento’s Racialized Surveillance Dragnet

11/25/2025

 
Imagine being targeted for surveillance because of your race – not with facial recognition or government inspection of your personal digital data, but through your electric meter. If you lived in parts of Sacramento, this is exactly what happened, as a decade-long scheme quietly bled Americans’ privacy one kilowatt hour at a time.

Sacramento’s Municipal Utility District (SMUD) and local police zeroed in on Asian-American customers, flagging those deemed to be using “too much” electricity. Many were assumed to be growing marijuana illegally – and police eagerly requested bulk data on entire ZIP codes to feed their suspicions.

The Electronic Frontier Foundation in July joined the Asian American Liberation Network to ask the Sacramento County Superior Court to end the local utility district’s illegal dragnet surveillance program. Last week, the court agreed, finding that routine, ZIP-code-wide data dumps had nothing to do with “an ongoing investigation.”

The court wrote:

“The process of making regular requests for all customer information in numerous city ZIP codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation.”

The response from EFF was even sharper:

“Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.”

The court recognized the obvious danger – dragnets turn vast numbers of innocent citizens and entire communities into suspects.

Still, it wasn’t a clean sweep. The court stopped short of ruling that SMUD’s practice violated the “seizure and search” clause in California’s Constitution.
But even a qualified victory is still a victory. We are reminded that privacy wins do happen – one dragged-into-the-sunlight surveillance program at a time. This win is something to be thankful for as we count our blessings this week.


PPSA Applauds the House Judiciary Committee for Passing the NDO Fairness Act

11/18/2025

 
​Today, the House Judiciary Committee did something too rare in Washington – it unanimously passed a meaningful privacy reform. By voice vote, Republicans and Democrats joined together to approve the Non-Disclosure Order (NDO) Fairness Act, a bill that reins in one of the most abused secrecy powers in federal law.

Credit for this privacy victory goes to Rep. Scott Fitzgerald (R-WI) and Rep. Jerry Nadler (D-NY), as well as Chairman Jim Jordan (R-OH) and Ranking Member Jamie Raskin (D-MD). Their leadership moved this bill out of committee. It is now up to the full House to pass this measure and send it to the Senate.

The bill’s reform is sorely needed. Under current law, prosecutors can secretly dig through your phone records, emails, and other data – and then slap your telecom provider with a gag order forbidding it from ever telling you that your privacy has been violated. These nondisclosure orders can last indefinitely, leaving Americans in the dark that someone has sifted through their personal communications.

The NDO Fairness Act changes that.

It puts reasonable limits on gag orders, and forces prosecutors to justify any extension. It also requires courts to explain in writing why continued secrecy is necessary – whether to protect an investigation, safeguard a vulnerable person, or address a real national security concern. The NDO Fairness Act makes sunlight the default, not the exception.

The House has, of course, passed the NDO Fairness Act before, only to watch it stall in the Senate. But the politics are shifting.

Senators are furious after learning that Special Counsel Jack Smith secretly subpoenaed the communications of eight senators. They were justifiably upset, but their response was misguided. The Senate quietly added a provision to the recent short-term funding bill giving senators the exclusive right to sue the federal government for up to $500,000 for privacy violations.

Americans don’t need a special carveout for elected officials. They need a law that protects everyone.

The NDO Fairness Act does exactly that.
It closes a major privacy loophole without hindering legitimate investigations, striking a balance between public safety and the Fourth Amendment rights of all Americans. The House and Senate now have a chance to fix this problem the right way – by advancing a bill that protects the people who sent them to Washington, not just themselves.


Google Picked a Side – And It’s Not Ours

11/18/2025

 
Once upon a time, in its 2004 IPO filing, Google aspired to “Don’t Be Evil,” imagining itself a company “that does good things for the world.”

Dateline, November 2025: Various outlets have reported that Google’s app store now includes a version of its Mobile Identify app for Customs and Border Protection. This version is tailored to state and local law enforcement officers who are deputized to work with Immigration and Customs Enforcement (ICE), using facial recognition algorithms to scan people. If a match is found in federal databases, officials at ICE are notified. And those databases (at least the ones we know of) contain records on more than 270 million people.

Odds are you and your loved ones are in those databases.

The fact that the law enforcement officers who use Mobile Identify are deputized to work alongside ICE is beside the point, as is the fact that ICE has its own, presumably more powerful version of the same app, called Mobile Fortify.

Of far greater concern is that any government agency possesses this ability. It’s easily shared across jurisdictions and Google seems to have no qualms about enabling a tool that could be deployed as a weapon to surveil American citizens at will.

After all, Google’s leaders could’ve just said “no.” But they didn’t, and now an insidious new public-private partnership is afoot. Today, it’s Google and ICE and the issue is immigration enforcement, but don’t expect it to stay that way for long. These kinds of surveillance technologies never stay contained, and limits on whom they target never hold. Soon it will be Google and the government – federal, state, county, and local – and the reasons for spying on us could be our religion, political party, ethnicity, affiliation, or – well, you name it.
Mobile Identify is just one more reason why Congress must debate how federal agencies are accessing our private information without a warrant. This is something to keep in mind when FISA Section 702, a federal surveillance policy, comes up for reauthorization in April.


Watching the Watchers: If You Are Stopped by ICE, Your Biometric Data Will Be Held for a Generation

11/18/2025

 
​Robert Frommer, a senior attorney with the Institute for Justice, tells the harrowing story of George Retes, a U.S. citizen and Army veteran of the Iraq War, who was stopped in his car during an immigration sweep.

He was on his way to work when he encountered an Immigration and Customs Enforcement (ICE) roadblock. A melee broke out between protesters and ICE agents. Retes’s car was engulfed in tear gas.

The Institute for Justice reports that agents smashed Retes’s car window, dragged him out, and forced him to the ground with knees on his neck and back – even though he was not resisting.

Despite Retes presenting proof of his citizenship, ICE agents detained him for three days without charges, strip-searched him, and forced him to provide DNA samples. He was not allowed to call a lawyer or given a hearing before a judge. Because Retes was held incommunicado, his family was left to frantically search for him.

Writing in MSN, Frommer explores what happens to the biometric data ICE collected on Retes.

“In addition to our DNA, the Department of Homeland Security (DHS) has recently and quietly authorized ICE officers to forcibly collect and retain intimate identifiers: our fingerprints and digital images of our faces. Combined with other technologies, the department is creating a general warrant for our persons, the kind of abuse that ignited the American Revolution.

“A DHS document, meant to ensure our privacy, lays out the facts. An app called Mobile Fortify allows ICE and Customs and Border Protection (CBP) officers to photograph and scan anyone they ‘encounter’ in the field, regardless of citizenship or immigration status. If there isn’t a photo match, officers can collect people’s fingerprints, which are then checked against DHS biometric records. Once DHS has that sensitive data, the app feeds it into CBP’s Automated Targeting System – an enormous watch list that merges border records, passport photos and prior ‘encounter’ images. CBP retains every nonmatch photograph for 15 years, meaning that even if you’re an American citizen mistakenly stopped on the street, the government has your biometric records for (almost) a generation.”

Congress should investigate and debate this retention of Americans’ biometric records before reauthorizing a single surveillance authority. And PPSA is hopeful that ICE will be forced to explain its unconstitutional detention of George Retes when it faces his lawsuit under the Federal Tort Claims Act.


One Nation Under Watch: How Borders Went from Being Physical to Digital

11/10/2025

 

​“If you want to keep a secret, you must also hide it from yourself.”

​- George Orwell

​Imagine a dish called Surveillance Stew. It’s served anytime multiple privacy-threatening technologies come together, rather like a witch’s brew of bad ideas. It's best served cold.

The latest Surveillance Stew recipe includes location data, social media, and facial recognition. Nicole Bennett, who studies such things, writes in The Conversation that this particular concoction represents a turning point: borders are no longer physical but digital. The government has long held that the border is a special zone where the Fourth Amendment has little traction. Now the government is expanding border rules to the rest of America.

Immigration and Customs Enforcement (ICE) has put out a call to purchase a comprehensive social media monitoring system. At first glance, Bennett notes, it seems merely an expansion of monitoring programs that already exist. But it’s the structure of what’s being proposed that she finds new, expansive, and deeply concerning. “ICE,” she writes, “is building a public-private surveillance loop that transforms everyday online activity into potential evidence.”

The base stock of Surveillance Stew came with Palantir’s development of a national database that could easily be repurposed into a federal surveillance system. Add ICE’s social media monitoring function and the already-thoroughgoing Palantir system becomes “a growing web of license plate scans, utility records, property data and biometrics,” says Bennett, “creating what is effectively a searchable portrait of a person’s life.”

Such a technology gumbo seems less a method for investigating individual criminal cases than a sweeping supposition that any person anywhere in the United States could, at any moment, be a “criminal.” It’s a dragnet, says Wired’s Andrew Couts, noting that 65 percent of ICE detainees had no criminal convictions. Dragnets are inimical to privacy and corrosive to the spirit of the Constitution.

Traditional, law-based approaches to enforcement are one thing – and enforcement, of course, is ICE’s necessary job. The problem now, warns Bennett, is that “enforcement increasingly happens through data correlations” rather than the gathering of hard evidence.

We agree with Bennett's conclusion that these sorts of “guilt by digitization” approaches fly in the face of constitutional guardrails like due process and protection from warrantless searches. To quote Wired’s Couts again, “It might be ICE using it today, but you can imagine a situation where a police officer is standing on a corner and just pointing his phone at everybody, trying to catch a criminal.”

The existence of Palantir’s hub makes it inevitable that ICE’s expanded monitoring capability will migrate to other agencies – from the FBI to the IRS. And when that happens, what ICE does to illegal immigrants can just as easily be done to American citizens – by any government entity, for any reason.
​
When our daily lives are converted into zeroes and ones, the authorities can draw “borders” wherever they want.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

Just What We Need – Hack Makes Recordings by Wearable Glasses Undetectable

11/3/2025

 

“Privacy is not just about hiding things or keeping secret, it’s about controlling who has access to your life.”

- Roger Spitz

Here’s a quick news update on one of the privacy stories of the year: Meta’s Ray-Ban smartglasses. Joseph Cox and Jason Koebler of 404 Media told the story of Bong Kim, a hobbyist who engineered a way to disable the LED light intended to shine conspicuously whenever Meta’s glasses are recording or taking photos.
 
Let’s be clear: Meta has nothing to do with hacks like this one. The company tried to prevent privacy violations by designing the glasses so that the recording function wouldn’t work if someone covered up the LED light. So we'll skip the “we told you so” part, where we’d question the wisdom of building a modern Prometheus (powered by an app and AI, of course) and then clutching pearls when it gets compromised – as it now has been.
 
We’ll also refrain from asking what could possibly go wrong. But here’s one possibility out of 10,000 would-be privacy violations: Imagine a stalker no longer having to worry about an LED light giving him away. Or industrial spies. Or actual spies. Or the colleague at work tricking you into saying something that will get you fired.
 
From a privacy standpoint, wearables (including smartglasses) are a non-starter, a set of technologies primarily in search of a hack. And if you don’t believe that, you probably haven’t been on Reddit lately.
 
According to 404’s reporting, Kim’s modification is advertised on YouTube and costs just $60 (though it’s unclear whether shipping is included). That’s what your privacy is worth these days.
 
So what can you do? At the very least, familiarize yourself with the look of these new wearable glasses from a host of companies. And quietly read yourself a Miranda warning: “anything you say can and will be used against you in a court of law.” Or, maybe just in a meeting with HR.


Bay State Drivers Can Now Be Tracked by 7,000 Flock Customers

11/3/2025

 

“There is something predatory in the act of taking a picture.”

- Susan Sontag

Search our news blog for "Flock" and you'll hit the jackpot. This company has been a consistent source of concern for privacy watchdogs.
 
Just last week, the ACLU’s Jay Stanley summarized the results of a detailed Massachusetts open-records investigation. Thanks to Flock’s contracts with more than 40 Massachusetts police departments, Bay State drivers can now be tracked by 7,000 of the company’s customers – “in real time, without a warrant, probable cause, or even reasonable suspicion of wrongdoing.” To be clear, that surveillance of Massachusetts drivers can be conducted from other parts of the country… because why wouldn’t Texas authorities want to know what Massachusetts drivers are up to?
 
This chilling state of affairs is the result of Flock’s boilerplate contract language, which only changes if a police department demands it (most have not). The company’s contracts include an “irrevocable, worldwide, royalty-free, license to use the Customer Generated Data for the purpose of providing Flock Services.”
 
Stanley’s article includes additional anecdotes about Flock’s propensity for over-sharing that suggest the issue goes far beyond Massachusetts. In Virginia, for example, reporters found that “thousands of outside law enforcement agencies searched Virginians’ driving histories over 7 million times in a 12-month period.” As we’ve written before, Virginia is already one of the most surveilled states in the country, thanks largely to vendors like Flock Safety.
 
Consider following the ACLU’s advice for pushing back against this kind of Orwellian oversight. If we don’t say anything, nothing is going to change.


Keep Lummis-Wyden in the NDAA to Secure the Pentagon – and Our Democracy – from Foreign Hackers

10/31/2025

 
Sen. Cynthia Lummis (Left) and Sen. Ron Wyden (Right)
National security wake-up calls do not get louder than the revelation that a Chinese government-linked hacking group, known as Salt Typhoon, successfully penetrated major U.S. telecommunications carriers in 2024. AT&T and Verizon were among the companies compromised, exposing the communications of Members of Congress, senior officials, and even both major-party presidential candidates.
 
This was not an isolated breach. It followed a 2023 cyberattack in which Chinese state hackers infiltrated Microsoft’s cloud-hosted email systems, compromising accounts at multiple federal agencies, including the Departments of State and Commerce. According to the Cyber Safety Review Board, the attackers downloaded roughly 60,000 emails from the State Department alone. Pilfered correspondence included those of Cabinet-level officials.
 
These events underscore an uncomfortable truth – the Department of Defense and the intelligence community cannot defend the nation with unencrypted communications routed through a handful of vulnerable providers.
 
The good news is that we do not have to accept this status quo. As the House and Senate negotiate the National Defense Authorization Act (NDAA) for Fiscal Year 2026, conferees must retain the Lummis-Wyden amendment, which mandates secure, interoperable, end-to-end-encrypted collaboration tools for the Pentagon.
 
A Pattern of Foreign Infiltration
From defense contractors to cloud service providers, adversarial regimes have repeatedly exploited weak communication infrastructure to spy on U.S. institutions. The Salt Typhoon and Microsoft incidents illustrate how a single breach in a major service can compromise thousands of sensitive conversations. When communication systems lack end-to-end encryption, even one point of failure can expose entire networks to foreign intelligence agencies.
 
What Lummis-Wyden Would Do
This measure requires the Department of War to use only collaboration systems that meet rigorous cybersecurity standards – including true end-to-end encryption that ensures only the sender and intended recipient can read a message, even if servers in between are hacked.
 
Just as importantly, Lummis-Wyden mandates interoperability. Today, the Pentagon is confined to using a small set of proprietary, “walled garden” platforms that block seamless communication across systems. Interoperable standards would allow the Defense Department to adopt superior tools as they emerge, preventing vendor lock-in that traps communications in the domains of single companies, while enhancing long-term resilience of the Pentagon’s digital networks.
 
By promoting interoperability and strong encryption, Lummis-Wyden would open the door to competition, inviting companies to develop more secure, agile, and affordable solutions. America’s defense and intelligence agencies should never be dependent on single-point-of-failure vendors whose systems are ripe targets for global espionage.
 
A Strategic Imperative
From the theft of federal employee records to the infiltration of telecom carriers, the pattern is unmistakable: insecure communications infrastructure is a strategic liability.
 
Passing Lummis-Wyden would do more than patch vulnerabilities: it would redefine what secure collaboration means in the 21st century. It would signal that America prizes both privacy and resilience, and rewards technologies that deliver genuine end-to-end security rather than superficial compliance checkboxes.


How TikTok Helps the Stalkerverse Infiltrate Tinder

10/28/2025

 

“Do I not know you by your face?” - Twelfth Night, Act 1 Scene 5

Another day, another TikTok story. Last time, reporters found that TikTok Shop was allowing ads tailored to GPS-savvy stalkers. This time, it’s ads for Cheaterbuster – which represents yet one more invasive abuse of facial recognition technology, often with images taken from the Tinder dating app.
 
Cheaterbuster’s “Facetrace” feature, which 404 Media verified, allows users to “discover someone’s online presence from a single selfie.” That’s right, you need only upload a photo of your “loved one” and Cheaterbuster’s AI scours the web in search of that person’s Tinder profile, for $18 per search.
 
Notably, Tinder itself has nothing to do with this, according to Sullivan Davis and other bloggers. “Not only do we not authorize this practice, it is squarely against our policies,” the company told 404 Media. It appears that sites like Cheaterbuster (sadly, there are others) are scraping publicly available profiles (pro tip – pay for Tinder tiers that allow private mode).
 
The Mary Sue webzine points out that any number of TikTok accounts are really just paid marketing fronts for Cheaterbuster. “Aurora” was applauded by naïve users who believed that she was literally dumping her boyfriend (by driving him to the landfill) after Cheaterbuster saved the day. According to 404, Cheaterbuster’s affiliate program pays more than YouTube does.
 
About a year ago, two Harvard students hacked Meta’s Ray-Ban smartglasses to identify strangers on the subway. As we wrote at the time, “Armed with this technology, your neighborhood creep could easily spot a woman walking down the street and be there when she arrives at her front doorstep.”
 
Now thanks to TikTok and Cheaterbuster, he could know all about her and just what to say.


Don’t Look Up: Those Satellites Are Leaking

10/27/2025

 

“To have good data, we need good satellites.”  - Jeff Goodell

Sigh. As if we didn’t have enough to worry about already. While privacy experts were focusing on the security of undersea fiberoptic cables, government surveillance, and corporate subterfuge, our data is being broadcast unencrypted all around the Earth by satellites.

Satellites are leaky – and it isn’t fuel they’re off-gassing; it’s our personal information. “These signals are just being broadcast to over 40 percent of the Earth at any point in time,” researchers told Wired’s Andy Greenberg and Matt Burgess.

A few years ago, those researchers (at UC San Diego and the University of Maryland) followed up on a whim: Could they eavesdrop on what satellites are broadcasting? The answer was a big fat “yes” – and it took only about $800 in equipment. Their complete findings are detailed in a newly released study. They had assumed, or at least hoped, that they would find very little – that almost every signal would be protected by encryption, the ne plus ultra of privacy protection.

Instead, among the many things they found floating in the ether were:
​
  • Miscellaneous corporate and consumer data (such as phone numbers)
  • Actual voice calls
  • Text messages
  • Industrial communications
  • Decryption keys
  • Even in-flight Wi-Fi data for systems used by 10 different airlines (including users’ in-flight browsing activities)

Researchers also “pulled down a significant collection of unprotected military and law enforcement communications,” including information about some U.S. sea vessels.

The Wired article’s authors are quick to note that the National Security Agency warned about the security of satellite communications more than three years ago.

Will the publication of such research encourage bad actors to take advantage of these weaknesses?

In the short term, perhaps, but the study’s authors are hopeful that various companies will respond like T-Mobile did and immediately get their encryption house in order (a spokesperson noted the issue was not network-wide). Another affected company, Santander Mexico, responded: “We took the report as an opportunity for improvement, implementing measures that reinforce the confidentiality of technical traffic circulating through these links.” (It should be noted that the affected organizations were notified many months prior to the study’s release.)

In the meantime, let’s hope most hackers haven’t renewed their Wired subscriptions.
​
After all, the scale of the problem is enormous. A Johns Hopkins expert told the magazine: “The implications of this aren't just that some poor guy in the desert is using his cell phone tower with an unencrypted backhaul. You could potentially turn this into an attack on anybody, anywhere in the country.”


Wi-Fi Turns Spy-Fi

10/15/2025

 

“We are profoundly bad at asking ourselves how the things we build could be misused.”

​- Brianna Wu

In terms of surveillance tech, Wi-Fi is having its moment. This is the fourth time in 2025 we’ve covered the growth of an invasive concept that three years ago seemed remote, even arcane: Wi-Fi sensing.

Increasingly, Wi-Fi turned Spy-Fi is ready for prime time. The Karlsruhe Institute of Technology (KIT), a German research university and think tank, found that Wi-Fi networks can use their radio signals to identify people. Any Wi-Fi network can be made to do this, no fancy hardware required. The people being identified don’t have to be logged into these networks, either. In fact, they don’t even need to carry electronic devices for this subterfuge to work; it’s enough simply to be present, minding one’s own business, within range of a given Wi-Fi router.

But given the ubiquity of Wi-Fi networks, that leaves very few places to hide. “This technology turns every router into a potential means for surveillance,” warns security/privacy expert Julian Todt of KIT. “If you regularly pass by a café that operates a Wi-Fi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.” (Or hackers, autocrats, or foreign agents).

How does it work? By exploiting a standard feature and turning it into a vulnerability – leveraging weaknesses must be taught in Bad Actor 101 at Spy School. In this case, connected devices regularly send feedback signals to Wi-Fi routers. According to the researchers, these signals are frequently unencrypted – which means anyone nearby can capture them. Then, with the right know-how, that data can be converted into images.

Not photos exactly, but close enough – analogous to ultrasound, sonar, or radar. The more devices that are connected to a given Wi-Fi network, the fuller the picture provided – height, shape, gestures, gait, hats, purses, and more. With a little help from machine learning, our bodies turn out to be uniquely identifiable, not unlike a fingerprint.

Are there easier ways to spy on us? Most certainly – CCTV, for example. But what Wi-Fi sensing lacks in ease it makes up for in reach. As technologies go, it’s practically everywhere that humans are. The vast majority of people don’t have CCTV cameras in their homes, but they (or their neighbors) are almost guaranteed to have Wi-Fi.
​
Wherever you’re reading this from, take a moment to see how many Wi-Fi networks your phone detects. If the KIT research proves correct, any one of them could be used to track your movements and determine your identity.


The Feds Have Your Number… And Your Location… And a lot More

10/6/2025

 

“A day-in-the-life profile of individuals based on mined social media data.”
​

- Ellie Quinlan Houghtaling, The New Republic

You might think that where you go and with whom you meet is your private information. And it is. But now it’s also accessible to the government, with a federal agency purchasing software to track the location of your phone.

Joseph Cox of 404 Media reports that the U.S. Immigration and Customs Enforcement (ICE) is buying an “all-in-one” surveillance tool from Penlink to “compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.”

That chilling quote is ICE’s own declaration. Apparently, acquiring Penlink’s proprietary tools is the only way to beat criminals at their own game.

ICE is not taking us down a slippery slope. It is going straight to the gully, discarding any concept of the Fourth Amendment’s prohibition against warrantless surveillance. From there, monitoring the movements of the general population is simply an act of political will. As with facial recognition software, notes the Independent’s Sean O’Grady, it is one more example of the “creeping ubiquity of various types of surveillance.”

Indeed, location is but one element of commercial telemetry data (CTD), the industry term for information acquired from cellphone networks, connected vehicles, websites, and more. PPSA readers know that banning the sale of CTD to government agencies is one goal of the bipartisan Fourth Amendment Is Not For Sale Act, which passed the House in the previous Congress.

Collecting and selling CTD is the shady business of the data broker industry, a practice the Federal Trade Commission once tried, meekly, to rein in. Indeed, for one brief shining moment, even ICE previously announced it would stop buying (but continue to use) CTD after the Department of Homeland Security’s own Inspector General found that DHS agencies weren’t giving privacy protections their due.

And yet here we are. As the Electronic Frontier Foundation’s Beryl Lipton recently put it in Forbes:

“This extension and expansion of ICE’s Penlink contract underlines the federal government’s enthusiasm for indiscriminate and warrantless data collection on as many people as possible. We’re still learning about the extent of the government’s growing surveillance apparatus, but tools like Penlink can absolutely assist ICE in turning law-abiding citizens and protestors into targets of the federal government.”
​
These tools are in the hands of ICE today, but they could be in the hands of the FBI, IRS, and other federal agencies in the blink of an eye. Congress should take note of this development when it debates reauthorization of a key surveillance authority – FISA Section 702 – next spring.


Heard on the Street? Our Voices, Apparently

10/6/2025

 

“Don’t eavesdrop on others – you may hear your servant curse you.”
​

- Ecclesiastes 7:21

Image via https://www.flocksafety.com/
Flock Safety is a frequent PPSA subject (this is our tenth article on the company). But instead of the company’s license-plate reader cameras, today’s discussion was inspired by Flock’s listening device, Raven.

According to Ben Miller of Government Technology, Raven was developed to detect gunshots and other crime-related noises, then activate nearby Flock Falcon cameras and alert authorities. Flock began marketing the Raven-Falcon combo to schools in 2023. The camera integration is meant to be Raven’s primary selling point, giving law enforcement immediate alerts about gunshots, breaking glass, screeching tires, and whatever it's programmed to listen for.

Funny thing – it can also listen for human voices.

Matthew Guariglia of the Electronic Frontier Foundation (EFF) reports that Flock has been touting Raven’s ability to detect screaming and other forms of vocal distress. The obvious implication, of course, is the product’s ability to “listen” to and record human speech. Raven competitor ShotSpotter proved it could be done when its system recorded the words of a dying man in 2014.

Critics, meanwhile, challenge the notion that technologies like Raven and ShotSpotter are good listeners – or even solid policing strategy. ShotSpotter published its own study claiming nearly 97 percent accuracy, though that level required six well-placed (and expensive) sensors in a given area.

Public research tells a different story. Chicago’s Inspector General was highly critical of the technology, finding that “alerts rarely produce evidence of a gun-related crime.” Instead, its use increased stop-and-frisk tactics due to officers’ changed perceptions of the areas where the sensors were deployed. It was deemed not to be worth the $33 million the city had paid for the contract.

Northwestern University’s MacArthur Justice Center published the most comprehensive set of findings to date – claiming that “on an average day, ShotSpotter sends police into these communities [mostly of color] more than 61 times looking for gunfire in vain.” Meanwhile, a National Institute of Justice report last year essentially concluded the technology brought little in terms of meaningful impacts on policing and crime reduction.

And now Raven is joining the audio sensor party, which, as parties go, is turning out to be a veritable Fyre Festival of public safety based on the combined testimony of multiple watchdog groups. In addition to those noted above, the list of audio sensor detractors includes the ACLU, Surveillance Technology Oversight Project, and Electronic Privacy Information Center. We also recommend EFF’s summary of the entire audio sensor industry.

Yet law enforcement continues to hail these too-good-to-be-true, quick-fix “solutions” to public safety challenges, potentially wasting millions of taxpayer dollars and eschewing much-needed transparency. The boosterism continues, despite concerns raised by the communities this technology purports to protect.
​
Audio-sensing tech capable of being deployed at scale nearly completes the mass surveillance infrastructure needed to destroy our privacy once and for all. After all, it is not a great leap for government to go from listening for screams to eavesdropping on private conversations.


The More We Fly, The More They Spy

9/23/2025

 

How Airlines Sell Our Travel Itineraries to the Government

We previously wrote about the Airlines Reporting Corporation (ARC), which began as a humble transaction clearinghouse in the analog days of the 1980s but has since become a full-fledged data broker.

Among the ARC’s best customers is the U.S. government, whose appetite for its citizens’ personal data is matched only by its desire to avoid acquiring that data constitutionally. More specifically, government agencies use third-party data brokers like ARC to dodge obtaining search warrants based on probable cause – in stark defiance of the Fourth Amendment. 

New reporting from Joseph Cox at 404 Media sheds more light on the scale of ARC’s partnership with the federal government. FOIA requests paint a picture of near-total reach when it comes to tracking where and when we fly:

  • 270 airlines participate
  • 12,800 travel agencies provide data
  • Data includes passenger names, itineraries, and financial details

Cox’s ongoing coverage of this subject also reveals that the sale of traveler data isn’t a one-off or even occasional transaction. On a daily basis, ARC supplies passenger information to power TIP, the Traveler Intelligence Program. Despite the name, passengers’ IQs are probably the only piece of data not being sold.

We now know that buyers of that data include Customs and Border Protection. 404 Media also found that other customers include ATF, the SEC, TSA, the State Department, the U.S. Marshals Service, and the IRS.

Are the skies really overflowing with so much rampant criminality that the government is justified in spying on all passengers? Should the IRS have warrantless access to your travel itinerary?

“ARC's sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole,” Sen. Ron Wyden (D-OR) told 404.

When you last bought airline tickets, do you remember giving permission to have your itineraries and credit card information sold, either to the government or anyone else? Neither do we, nor any of the other five billion passengers whose records ARC has collected and made searchable.

“Governments,” wrote Jefferson, derive “their just powers from the consent of the governed.” Consent is inconvenient to authority, so it’s little wonder we were never asked. There’s nothing just, consensual, or constitutional about mass surveillance.
​
For the record, the Traveler Intelligence Program was ARC’s own idea, back in 2001. And of course, they knew exactly which doors to knock on.


Clearview AI: Giving the US Government A Clear View of Its Citizens

9/18/2025

 
Clearview AI is raking in the cash with its facial recognition software, signing lucrative contracts that make all Americans easier targets for government surveillance. The latest award is a $10 million deal with the Department of Homeland Security (DHS) to support Immigration and Customs Enforcement (ICE) operations.

Clearview was previously fined more than $30 million by Dutch regulators for privacy violations related to data collection. It also settled privacy violation charges in the U.S. for tens of millions more. But none of that has stopped the company from becoming a favorite of law enforcement and government intelligence agencies in the United States. In fact, we’ve written about the dangers of facial recognition more times than we can count. Its continued popularity only proves that the federal government cares more about purchasing facial recognition software than regulating its use. As a result, states have had to step in and fill the regulatory gap.

The new ICE contract means that Clearview will be used to help identify individuals accused of assaulting its officers – a commendable goal. But the accumulation of Americans’ faces into a single database is an immense temptation for abuse in many other domains, including surveillance for political reasons.

You may applaud or deplore ICE’s new aggressiveness. The larger issue is what the government, or Clearview itself, will do down the road with the mass collection of America’s facial data. Our faces, along with the rest of our biometric data – and our privacy in general – remain for sale. Of course, we’re assuming that the software will actually recognize us rather than mistake us for someone else.

As spy tech goes, facial recognition can’t seem to win for losing.
​
It’s enough to make one yearn for the quaint times of Oscar Wilde, who once said, “I never forget a face, but in your case I will make an exception.”


The Wearable Revolution Will Be A Boon For Data Harvesters

9/15/2025

 

“There’s no federal law that is going to protect against these companies weaponizing this data.”

- Prof. Alicia Jessop
We recently reported that the popularity of wearables is eroding confidence in the idea that private, candid conversations will always remain private. Now Charlie McGill and The American Prospect report that HHS Secretary Robert F. Kennedy Jr. “wants a wearable on every American body.” They described this announcement as “curious” given that five years ago the Secretary himself blasted wearables and other smart devices as being about “surveillance, and harvesting data.”

That was then. A massive, government-funded pro-wearables ad campaign will soon promote Secretary Kennedy’s long-held view that eating right and exercising is superior to pharmaceutical remedies. He also wants HHS to popularize wearables: “You know the [sic] Ozempic is costing $1,300 a month, if you can achieve the same thing with an $80 wearable, it's a lot better for the American people.”

Persuading people to take better care of themselves is certainly a commendable goal for an HHS Secretary. But the security and privacy risks inherent to wearables are also a veritable bonanza for data brokers. On the Dark Web in 2021, healthcare data records were worth $250 each, compared to $5.40 for a payment card record. Just imagine what they’ll be worth in four years’ time if the HHS plan comes to fruition. Meanwhile, companies are lining up to cash in on the wearables boom that the department is promoting.

Companies that buy our data usually just want to target customers with ads and appeals. On a more sinister level, our health data derived from wearables – about as personal as information can be – will be sold by data brokers to about a dozen federal agencies, ranging from the FBI and the IRS to the Department of Homeland Security.

Health data from wearables will surely become part of a single, federal database of Americans’ information. “Techno-utopianism,” observes Natalia Mehlman Petrzela, “assumes more sophisticated technology always yields a better future.” Without constructing the requisite privacy guardrails for the data new technologies generate, quantifying ourselves on such an extreme scale may invite unwanted scrutiny.
​
Do we really want the FBI or the IRS to be able to warrantlessly access our deeply personal health issues? The wearables revolution, and the data it generates, is just another privacy violation that should prompt Congress to enforce the Fourth Amendment by forbidding the government from warrantlessly purchasing our most personal data.


When Police Profit From Protection

9/8/2025

 

“Ethics is knowing the difference between what you have a right to do and what is right to do.”

- Justice Potter Stewart
Local police departments are spending billions of dollars on surveillance technology, from cameras to cell-site simulators to drones. Customers in blue range from the New York Police Department, which has invested $3 billion in surveillance in recent years, to small-town departments willing to fork out tens of thousands.

With so much money sloshing around, it is reasonable to wonder how careful local officials are in maintaining clear boundaries between customer and vendor. Events in Atlanta suggest that sometimes these boundaries are, at best, blurry.

Marshall Freeman is the Chief Administrative Officer of the Atlanta Police Department (APD) and a former leader at the non-profit Atlanta Police Foundation. Together, the Foundation and the APD devised Connect Atlanta, a camera network that makes Atlanta one of the most surveilled cities per capita in the United States.

The Atlanta Community Press Collective (ACPC) was combing through public records when they noticed Freeman’s name on a Conflict of Interest Disclosure Report. Citing “financial interest” in Axon, a law enforcement tech company, he recused himself from contract-related “matters and dealings” that could impact Axon financially. “I have interest in a company that is currently in talks with Axon around acquisition and investment,” he wrote, without specifics.

ACPC discerned that Freeman’s unnamed stake was in a company called Fusus, whose software fuels the Connect Atlanta surveillance system. Axon acquired it for $240 million barely a week after Freeman filed his disclosure. More red flags followed. Freeman was the only public official quoted in Axon’s press release announcing the acquisition: “I wholeheartedly encourage all agencies to embrace this cutting-edge technology and experience its transformative impact firsthand.”

Using open records requests, ACPC reports it also found emails indicating that Freeman “boosted Fusus and Axon products to other agencies in Georgia and around the U.S.” on multiple occasions post-disclosure. When the reporting first surfaced, APD responded tersely: “The appropriate ethics filings were submitted.”

A few weeks later, though, the City of Atlanta Ethics Office begged to differ, announcing an investigation into Freeman’s post-recusal behavior. Fifteen months later, the body released an official report totaling 313 pages. The findings suggest that Freeman’s relationship with the camera-pushing Fusus dated back to his days at the Atlanta Police Foundation, a relationship he brought with him to APD and continued to nurture. According to The Guardian, he consulted for Fusus for at least a year after joining APD, “crisscrossing the country in person and by email while repping the company, including conversations with police departments in Florida, Hawaii, California, Arizona and Ohio.”

All told, the Ethics Office found 15 separate matters in which Freeman used his official position as an influencer for Axon and Fusus. For at least part of this time, he served on the board of two Fusus subsidiaries in Virginia and Florida – a fact he did not disclose to ethics investigators. 

Writing in The Intercept, Timothy Pratt and Andrew Free detail how Freeman’s impropriety (the “appearance” of which is the only thing he’s admitted to) is making all of us less free – taking the Great Atlanta Mass Surveillance Experiment and replicating it from sea to monitored sea: Seattle, Sacramento, New York City, Omaha, Birmingham, Springfield, Savannah, and counting.

Freeman may be an exception, or he may be the rule. Either way, it hardly matters, given the outsized influence even one public official can have on the proliferation of the police surveillance dragnet in the United States. Then again, by the time robust surveillance systems reach smaller, heartland cities like Lawrence, Kansas, it may already be too late.
At the very least, police procurement processes would benefit from tighter rules, like those that govern Pentagon officials when they assess contracts.


Note to Protestors: Turn Off Your Wi-Fi

9/4/2025

 
Philip K. Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now, from Australia, comes something simpler – and just as insidious. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This came after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral.

Despite the students’ misbehavior, Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the state’s Privacy and Data Protection Act 2014. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports:

“Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience.

To its credit, the university cooperated with the investigation and promised reforms.

But let’s be fair: the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go.

Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof.

What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home.

With the possible exception of pacemakers, of course.


Watching the Watchers: On Its Own, AI Isn’t Watching, Or Thinking

9/2/2025

 
Image: Citizen website.
Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:

  1. AI isn’t a substitute for people.
  2. AI isn’t a substitute for people.
  3. AI isn’t… well, you get the picture.

As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong?

Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:

  • Mistranslating “motor vehicle accident” as “murder vehicle accident.”
 
  • Misinterpreting addresses.
 
  • Publishing incorrect locations.
 
  • Adding gory or sensitive details that violate Citizen’s guidelines.
 
  • Sending notifications about police officers spotting a stolen vehicle or homicide suspect, potentially putting operations at risk.
 
  • Writing alerts as if officers had already arrived on the scene, when in fact the dispatcher was only providing supplemental information while officers were en route.
 
  • Duplicating incidents, failing to recognize that two pieces of dispatch audio are related to the same singular event. This was especially common with police chases, where dispatch continually provided new addresses. The “AI would just go nuts and enter something at every address it would get and we would sometimes have 5-10 incidents clustered on the app that all pertain to the same thing,” one source said.
 
  • Omitting important details, such as whether a person was armed with a weapon.
​
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated.
 
By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.


