Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

Note to Protestors: Turn Off Your Wi-Fi

9/4/2025

 
Philip K. Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now comes something simpler – and just as insidious – from Australia. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This came after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral.

Despite the students’ misbehavior, the state of Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the Privacy and Data Protection Act 2014. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports:

“Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience.

To its credit, the university cooperated with the investigation and promised reforms.

But let’s be fair, the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go.

Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof.
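How does Wi-Fi give you away? Here is a minimal, hypothetical sketch (the MAC addresses, access point names, and log format are all invented): every enterprise Wi-Fi controller already logs which device associated with which access point and when, so all a tracker needs is a map from access points to physical places.

```python
# Hypothetical sketch: turning routine Wi-Fi association logs into a location
# trail. MAC addresses, AP names, and the log format are invented.

from datetime import datetime

# Controllers keep logs like this as a matter of course:
# (device MAC, access point, timestamp)
association_log = [
    ("aa:bb:cc:11:22:33", "AP-Library-2F", "2024-07-03 13:02"),
    ("aa:bb:cc:11:22:33", "AP-SouthLawn-1", "2024-07-03 13:40"),
    ("aa:bb:cc:11:22:33", "AP-SouthLawn-1", "2024-07-03 17:55"),
]

# The only extra ingredient: where each access point physically sits.
ap_locations = {
    "AP-Library-2F": "Library, 2nd floor",
    "AP-SouthLawn-1": "South Lawn (site of the sit-in)",
}

def location_trail(mac):
    """Reconstruct one device's movement history from the logs."""
    return [
        (datetime.strptime(ts, "%Y-%m-%d %H:%M"), ap_locations[ap])
        for device, ap, ts in association_log
        if device == mac
    ]

for when, where in location_trail("aa:bb:cc:11:22:33"):
    print(when, "->", where)
```

Tie a MAC address to a student login, as a campus network does every time someone authenticates, and the trail has a name attached.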

What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home.

With the possible exception of pacemakers, of course.


Watching the Watchers: On Its Own, AI Isn’t Watching, Or Thinking

9/2/2025

 
Image: Citizen website.
Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:

  1. AI isn’t a substitute for people.
  2. AI isn’t a substitute for people.
  3. AI isn’t… well, you get the picture.

As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong?

Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:

  • Mistranslating “motor vehicle accident” as “murder vehicle accident.”
  • Misinterpreting addresses.
  • Publishing incorrect locations.
  • Adding gory or sensitive details that violate Citizen’s guidelines.
  • Sending notifications about police officers spotting a stolen vehicle or homicide suspect, potentially putting operations at risk.
  • Writing alerts as if officers had already arrived on the scene, when in fact the dispatcher was only providing supplemental information while officers were en route.
  • Duplicating incidents, failing to recognize that two pieces of dispatch audio are related to the same singular event. This was especially common with police chases, where dispatch continually provided new addresses. The “AI would just go nuts and enter something at every address it would get and we would sometimes have 5-10 incidents clustered on the app that all pertain to the same thing,” one source said. (A sketch of the missing deduplication step follows this list.)
  • Omitting important details, such as whether a person was armed with a weapon.
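That clustering failure has a standard remedy. Here is a minimal, hypothetical sketch (thresholds and sample data invented) of the deduplication step apparently missing: treat a new dispatch event as part of an existing incident when it is close in time and space to that incident's last update.

```python
# Hypothetical sketch of the missing deduplication step: merge dispatch events
# that are close in time and space into one incident rather than creating a
# new map pin for each. Thresholds and sample data are invented.

import math
from datetime import datetime, timedelta

def distance_km(a, b):
    """Rough planar distance between (lat, lon) pairs; fine at city scale."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def assign_incidents(events, max_km=2.0, max_gap=timedelta(minutes=30)):
    """Greedy clustering: an event joins an open incident if the incident's
    last update was recent and nearby (a moving police chase still counts);
    otherwise it opens a new incident."""
    incidents = []
    for when, pos, text in sorted(events):
        for inc in incidents:
            if (when - inc["last_time"] <= max_gap
                    and distance_km(pos, inc["last_pos"]) <= max_km):
                inc["events"].append(text)
                inc["last_time"], inc["last_pos"] = when, pos
                break
        else:
            incidents.append({"last_time": when, "last_pos": pos, "events": [text]})
    return incidents

events = [
    (datetime(2025, 9, 1, 22, 0), (34.050, -118.240), "vehicle pursuit begins"),
    (datetime(2025, 9, 1, 22, 6), (34.055, -118.235), "pursuit update, new address"),
    (datetime(2025, 9, 1, 22, 9), (34.060, -118.230), "pursuit update, new address"),
]
print(len(assign_incidents(events)), "incident(s)")  # 1, not 3
```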
​
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated.
 
By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.


Data Privacy Laws Sweeping the States

9/2/2025

 

Will Congress Follow Montana by Closing the Data Broker Loophole?

Twenty states have enacted major consumer data privacy laws. When will Washington, D.C., wake up and restrict the open season on Americans’ personal information at the federal level?

California lit the fuse in 2018, passing laws that set limits on how businesses collect and sell consumers’ data. This year, new privacy laws have taken effect, or soon will, in New Hampshire, Delaware, Iowa, Nebraska, New Jersey, Tennessee, Minnesota, and Maryland.

Montana may offer the best model for federal action. The Montana Consumer Data Privacy Act, which went into effect late last year, mirrors many other state laws while granting consumers strong, clear rights. In Montana, consumers have the right:

  • To opt out of data sales, targeted ads, or profiling that drives automated legal decisions.
  • To know if a data “controller” is processing their personal information and to access that data.
  • To correct errors.
  • To demand deletion of personal data.
  • To exercise these rights without retaliation.

Like many other states, the Montana law also adds special protections for minors, requiring consent for data sales and targeted ads to children aged 13 to 16.  

But where Montana truly shines is by closing the notorious “data broker loophole.” That loophole lets government agencies dodge the Fourth Amendment’s warrant requirement by simply buying consumers’ data.

Montana now flatly bars law enforcement from purchasing sensitive electronic data – such as electronic communications metadata and precise geolocation information – without a warrant.

The federal government has no such restraint. Agencies from the FBI and IRS to the Department of Homeland Security and the Department of Defense routinely buy and access Americans’ sensitive personal data. Government lawyers insist this is fine because we all “agree” through terms of service – though almost no one reads them, and they never warn consumers that third-party data brokers might be selling their data to the FBI.
​
As more states pioneer privacy laws, the pressure builds. An intense debate on the data-broker loophole in Congress is inevitable. Lawmakers would do well to take a cue from one of Montana’s favorite sons, Gary Cooper, who said: “One nice thing about silence is that it can’t be repeated.”


“Wearables” – A Euphemism for “Spy Tech”

8/26/2025

 
“I don’t think you can make it off the record once you’ve said it – you can’t call dibs after the fact.”

– Journalist Philip Corbett
Wearables are defined by their comfort. But there is a lot about wearable technology that is distinctly uncomfortable, if not Orwellian.

Wearable computers hit the mainstream with the introduction of Fitbits and smartwatches in the 2010s. Now, says The San Francisco Standard, the rise of artificial intelligence is adding spy tech to the wearable computing family tree. The newest devices are akin to smartglasses but take that technology’s most invasive feature – recording the environment – and turn the creep factor up to 11. The new wearables are stylish and somewhat stealthy and designed to do two things very well: listen and remember.

They come in the form of pendants, necklaces, lapel pins – or, in a twist, might even look like a Fitbit or smartwatch. But they are all recording devices capable of capturing the wearer’s every conversation and meeting, then transcribing them, and – the pièce de résistance – using AI to organize, analyze, and mine them for insights (think personal assistant on steroids, or maybe your very own opposition researcher). In some cases, the devices may only transcribe conversations rather than record them, but they’re still listening and processing conversations, so such distinctions are hardly comforting.

The San Francisco Standard suggests that everyone in Silicon Valley should assume that everything they say, especially at work, is being recorded. Which means the rest of America – and its kitchen tables, coffee houses, and classrooms – won’t be far behind.

One venture capital partner told the Standard’s writers that she knew a fellow VC who records all in-person meetings “without telling the other meeting participants. It's an invasion of privacy and I seriously disapprove of it." Then, presumably referring to herself and the rest of us would-be audience members, she added, “Of course, this is a horrible way to live your life.”

In terms of the privacy concerns raised by this new generation of wearables, Julian Chokkattu of Wired cracked the code. Earlier generations of recording devices and software “at least required active engagement like a tap or a wake word to activate their ability to eavesdrop.” For the most part, the new devices are passive and always on, which places responsibility for gaining consent on the instigator. In other words, “Fox, meet henhouse.”

In the research, there are lots of names for the chilling effects that even consensual recording has on conversations, but one of the keenest is “spiral of silence.” People will varnish the truth, if they bother to speak it at all. They will hold back, self-censor, even shut down. As for the possible effects on creativity that this sort of tech might have – as in a brainstorming session, for example – we invite you to judge for yourself.

If you think all of this seems like a claim just waiting for a plaintiff, we agree: It’s a one-way express ticket to litigation city. But as with most things AI, the laws governing these devices are in their infancy and court rulings are sparse. One corner of Silicon Valley is already fighting back, though: Confident Security is developing Don’t Record Me, a browser plugin that could potentially detect illicit recordings and disrupt them.

What about audible cues or flashing lights to indicate that one of these devices is collecting data? Don’t count on it. One entrepreneur told Wired, in effect, “That would drain too much battery life.” Another claims that all you have to do is think about recording to activate his product. Thankfully, for that mode to work, the wearable has to be affixed to the side of your temple with medical tape.
​
But don’t expect other forms of personal surveillance to be so obvious. All the more reason for requiring disclosure for private recording and warrants when government agents listen in on what we say.


TikTok’s Stalkerware Ads

8/25/2025

 
“I think the very word stalking implies that you're not supposed to like it. Otherwise, it would be called 'fluffy harmless observation time'.”

– Author Molly Harper
TikTok was already a privacy nightmare:

  • The EU fined it $600 million for breaching data privacy rules.
  • An FCC commissioner asked Apple and Google to remove the app from their stores because of mounting evidence that China had access to all user data.
  • The FBI opened an investigation into alleged use of the app to track American journalists.

To this troubling list we can now add the following: In violation of the platform’s own policies, sellers are using TikTok to market GPS trackers to stalkers, reports Rosie Thomas of 404 Media.

“Unlike AirTags,” one vendor boasts, “this thing doesn’t make a sound, doesn’t send alerts, she will never know it’s there.” In the comments section of a similar ad, one user bragged, “I bought some and put it on cars of girls I find attractive at the gym.”

Lest there be any doubt, Thomas’ report quotes Eva Galperin of the Electronic Frontier Foundation: “This is absolutely being framed as a tool of abuse.” Galperin, co-founder of a non-profit that keeps tabs on such products, categorizes them broadly as “stalkerware.”

The central legal and moral issue underlying stalking, as with all violations of privacy, is consent. Expert Market’s page summarizing GPS tracking laws by state underscores the point: The word “consent” appears in these laws 115 times.

When asked about the viral proliferation of ads for these tracker tools, TikTok told 404 Media that they “prohibit the sale of concealed video or audio recording devices on our platform.” And yet, Thomas and her colleagues continued to find such ads every time they looked.
​
Which, of course, should come as a surprise to absolutely no one. This is just one more good reason why President Trump should cease suspending the law requiring TikTok to be sold or shuttered.


Stop Letting Hackers Win: Pass the Lummis-Wyden Cybersecurity Amendment

8/25/2025

 
America’s enemies aren’t storming our shores with tanks and planes – they’re breaking into our email, phone, and data systems. And right now, we’re making their job too easy.
 
The U.S. Senate can toughen up America’s defenses by passing the Lummis-Wyden amendment (S. Amdt. 3186) to the 2026 National Defense Authorization Act. This bipartisan fix would finally force the Pentagon to use secure, encrypted communications – and end its costly dependence on a handful of Big Tech vendors.
 
The Scale of Attacks
 
In 2023, Chinese hackers broke into Microsoft-hosted government email accounts, stealing 60,000 messages from the State Department alone. A year later, another Beijing-backed group hacked into AT&T and Verizon, tapping the phones of Americans including presidential candidate Donald Trump and then-Sen. J.D. Vance.
 
But Vance’s conversations were kept safe. How? He relied on Signal, the end-to-end encrypted app that even the hackers couldn’t crack.
 
The obvious takeaway is that without end-to-end encryption, our most sensitive communications are one hack away from the front page of Beijing’s intelligence briefings.
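What makes end-to-end different? Here is a minimal sketch using the PyNaCl library (our illustrative choice; Signal's actual protocol is far more sophisticated, adding forward secrecy among other properties). The relay server only ever handles ciphertext, so a server breach yields nothing readable.

```python
# A minimal sketch of end-to-end encryption with the PyNaCl library.
# The core idea: private keys never leave the endpoints.

from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts directly to the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"Movement orders: hold position.")

# This is all a relay server -- or whoever hacks it -- ever sees.
print(ciphertext.hex()[:48], "...")

# Only the recipient's private key can open the message.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext).decode())
```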
 
The Lummis-Wyden Fixes
 
  • Mandates encryption. The Pentagon would be required to use secure, end-to-end encrypted systems whenever possible.
 
  • Ends vendor lock-in. No more being trapped inside Microsoft Teams or Google Docs. Interoperability will be the law, so new and better tools can compete.
 
  • Saves money and boosts innovation. Opening the market to smaller, nimbler companies means lower costs and stronger security.
 
Why It Matters

Our military today is stuck in walled gardens built by giant tech firms that all too often proved eminently hackable. That’s bad for taxpayers and disastrous for national security. Hackers don’t need to break into every office at the Pentagon – they just need to knock down the door of one weak provider. The Lummis-Wyden amendment puts a lock on those doors.
 
Congress Must Choose Security
 
Congress can keep letting foreign spies read Cabinet-level emails and tap presidential phone calls, or it can finally demand that the Pentagon use the best tools available. This amendment is a wake-up call that we can’t defend the country with outdated software. Encryption and competition would at least give our country a fighting chance to keep China and other bad actors out of our business.
 
PPSA calls on the Senate to pass the Lummis-Wyden Amendment to stop giving hackers the upper hand. This measure will better protect our service members, the American homeland, and the private deliberations of our leaders.


Flock Appears to Be Combining Driver Surveillance with Personal Data

8/19/2025

 
Where you drive is personal. So is what you click on and who you communicate with. Combine the two, and suddenly a revealing picture emerges of your political, romantic, financial, and religious beliefs and activities – in short, a comprehensive dossier of your private life.
 
That appears to be what is happening with Flock, which is mashing up its camera surveillance of millions of drivers in 5,000 communities across the United States with digital information gathered on us by data brokers.
 
According to 404 Media, the good news is that after internal deliberations, Flock told its employees in May it would not merge stolen dark web data with information from its network of license plate readers (LPRs). Joseph Cox of 404 Media reported that in a meeting, a Flock supervisor told employees that after a “policy review process,” the company’s new search tool Nova would not incorporate hacked data from the dark web.
 
So far, so good. Dealing in stolen merchandise is never a good look for a company. Flock, however, announced that it will combine “public records data, Open Source intelligence, and license plate reader data” for law enforcement and other customers.
 
This marks a policy shift. Flock has long insisted that its license plate readers do not collect personally identifiable information, claiming they merely provide law enforcement with a way to track cars tied to crimes. But Jay Stanley of the ACLU reports that the company now plans to plug its systems into commercial data brokers offering “people lookup” services.
 
ACLU’s Stanley writes:
 
“In the 1970s, after some government agencies were found to be building dossiers on people who aren’t suspected of involvement in crime like the East German Stasi, Congress enacted the Privacy Act banning agencies from such recordkeeping. Yet the ethically shady and frequently inaccurate data broker industry does basically the same thing, and when law enforcement becomes a customer of those data brokers, it represents an end-run around the law. By tying its LPR data together with data brokers, Flock is effectively automating and scaling the end run around our checks and balances that law enforcement data broker purchases represent …
 
“Imagine that a police officer stood on your street writing detailed notes about you every time you drove or walked by them. All the details about what your car looks like (make, model, color, distinguishing characteristics, bumper stickers, etc.), as well as details about visible occupants and pedestrians – how many, at what time, their activities, demographic data, what they are wearing, attributes they may have such as a beard, hat, tattoo, or T-shirt, and what that hat, T-shirt, or tattoo might say. Now imagine that there is an army of police officers doing this on every block.”
 
Thus, algorithms can now seek patterns in vehicle movements to identify and alert law enforcement to drivers who are “suspect.” Stanley pinpoints why this approach clashes with both the letter and the spirit of the Fourth Amendment: there is a big difference, he writes, between “providing tools for officials to use in investigating suspicion” and “generating suspicion.”
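To make the distinction concrete, here is a hypothetical sketch of what suspicion-generation looks like in code. The rule, plates, and sightings are invented; the point is that an arbitrary pattern heuristic, not an investigation, produces the alert.

```python
# Hypothetical sketch of "generating suspicion" from license plate reader
# data. The rule, plates, and sightings are invented.

from datetime import datetime

sightings = [  # (plate, camera zone, timestamp)
    ("ABC1234", "industrial-district", datetime(2025, 8, 1, 2, 10)),
    ("ABC1234", "industrial-district", datetime(2025, 8, 4, 1, 55)),
    ("ABC1234", "industrial-district", datetime(2025, 8, 9, 3, 20)),
    ("XYZ9876", "main-street", datetime(2025, 8, 2, 14, 5)),
]

def flag_suspects(sightings, zone="industrial-district",
                  night_hours=range(0, 5), min_hits=3):
    """Flag any plate seen in a watched zone during night hours min_hits
    times. No crime, no complaint, no investigation is consulted."""
    counts = {}
    for plate, z, ts in sightings:
        if z == zone and ts.hour in night_hours:
            counts[plate] = counts.get(plate, 0) + 1
    return [plate for plate, n in counts.items() if n >= min_hits]

print(flag_suspects(sightings))  # ['ABC1234']: now a "suspect"
```

A night-shift worker commuting through the wrong neighborhood satisfies this rule perfectly.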
 
The fusion of your purchased data with your movements could do exactly that. One day, something as ordinary as making a right on red or a casual U-turn could transform you from a routine driver into a suspect.


Mind Reading Is No Longer Sci-Fi

8/18/2025

 
Larry Niven, the acclaimed science-fiction writer, once drolly observed, “I do suspect that privacy was a passing fad.”

It certainly seems so today, with networked Ring cameras on every door linked to public and private CCTV, license plate readers, and government agencies buying up our digital lives from data brokers… all of it potentially connected to AI and facial recognition software.

Even inside our homes, drones can look through our windows. Thermal imaging cameras in the hands of police can penetrate walls to watch us move around in our living rooms and bedrooms.

But at least there is one place where surveillance cannot penetrate, one last refuge of absolute, inviolable privacy – the inside of our skulls. We are free to think any thought, sacred or profane, sublime or silly, without fear of detection or punishment by any human authority. 

But maybe not for much longer.

The science journal Cell reports that a computer system has been trained to decode brain waves from people who silently move their mouths while mentally sounding the words to themselves. The signal from the brain is then translated into words in real time on a computer screen, with an error rate of 26 percent to 54 percent.

Annika Inampudi in Science reports that this technology, as it is refined, will be a godsend to speech-impaired people paralyzed by strokes or neurological conditions such as amyotrophic lateral sclerosis (ALS). To protect test subjects from blurting out private, inner speech, users can be given unique, nonsense phrases like “chitty chitty bang bang” to cue the device to read their thoughts only when they want it to.
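In software terms, that safeguard is a gate on the decoder's output. A hypothetical sketch (the decoder itself is stubbed out; the interface is invented):

```python
# Hypothetical sketch of the safeword gate described above: decoded inner
# speech is discarded unless the user has first "thought" the unlock phrase.
# The neural decoder itself is stubbed out; this interface is invented.

UNLOCK_PHRASE = "chitty chitty bang bang"

class GatedDecoder:
    def __init__(self):
        self.armed = False

    def on_decoded(self, phrase):
        """Called with each phrase the brain-signal decoder produces."""
        if phrase == UNLOCK_PHRASE:
            self.armed = True   # the user opted in; start passing output through
            return None
        if self.armed:
            return phrase       # intentional speech, send to the screen
        return None             # private inner speech, discard

decoder = GatedDecoder()
for thought in ["do not say this out loud", "chitty chitty bang bang", "hello world"]:
    output = decoder.on_decoded(thought)
    if output:
        print(output)  # only "hello world" reaches the screen
```

Note where the protection lives: in software, on the listening side. Whoever controls that code controls the gate.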

It is this latter development that gives us pause. The fact that a safeword is needed to defend against unwanted exposure of thought is concerning. Also concerning is that scientists have had significant success decoding thought even when the subject is not silently mouthing the words he or she is thinking about. The system at times can read mere inner thoughts.

At a time when digital technology evolves on fast-forward, it is not too early to be concerned about how this technology might be abused. After all, a few years ago AI couldn’t pass the Turing test. Now ChatGPT is regularly writing entertaining short stories, poems with striking imagery, and student papers that get A’s from naïve professors. The same progression could enable mind-reading technology to rapidly allow authorities to dip into people’s skulls against their will.

Imagine, for example, how this technology might be used in interrogations.

In this country, at least, the Fifth Amendment prohibition against self-incrimination should make results from such mind-readings inadmissible. But in professions in which polygraphs are routine, from law enforcement to intelligence and some retail positions, it is easy to imagine how such technology could be abused.

Overall, this speech-decoding technology is a boon for people with disabilities who are desperate to communicate. It is a heartening and praiseworthy development that scientists – often caricatured as amoral agents of progress – are diligently devising procedures to confine the reading of thoughts to moments when subjects permit it.
​
Still, this story should give us pause. Something to think about…


Meet America’s Latest Mass Surveillance Tool: GeoSpy

8/18/2025

 
“Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.”

– Ian Malcolm, Jurassic Park
Just a quick update about the ever-expanding toolkit of the technocratic mass surveillance state: The new kid on the block is GeoSpy, which can examine a photograph and extrapolate your location in seconds. It claims to accomplish this by using only visual data in the image rather than metadata. From a purely technical perspective, that’s a big achievement.

From a privacy standpoint, it’s a nightmare.

According to an account first reported by 404 Media and summarized by Alex Hively of SlashGear, the original open-source version of GeoSpy was quickly removed when it became clear that it could be used to stalk people. Company founder Daniel Heinen later admonished Joe Rogan and guests in a tweet reminding them that GeoSpy is “only for Law Enforcement and Government” use (which, at the time of the tweet, had only recently become true).

That GeoSpy is now “only for” law enforcement and government use is cold comfort. It seems an all-too-familiar narrative, reminding us of Clearview AI’s similarly reckless approach to the ethics of identification technology. By the end of 2021, the facial recognition startup had scraped ten billion images from the web and social media, providing agencies with a powerful new tool to instantly identify us and aid in the quick construction of dossiers of our beliefs, activities, and relationships.

And now, thanks to breakneck developments in technology, the government can both identify us and locate us. Consider this statement from GeoSpy founder Heinen:

“My job as a leader in my space is to build the best technology that customers are asking for. It's not my job to play the ethics game because our elected officials will eventually figure that out. I have full faith in the American people to decide who to elect and what to vote on.”

(If this were a video, here is where we’d cut away to a dark screen and the sound of crickets.)

We won’t belabor the point as our readers know full well where all of this is likely to lead. But we will quote ourselves from a related article decrying the surveillance capabilities of drones and satellites: “What is cutting-edge technology today will be standard tomorrow. This is just one more way in which the velocity of technology is outpacing our ability to adjust.”

With the rise of GeoSpy, we now have one more reason for Congress and the states to hit pause and reassert the privacy guarantees inherent in the Fourth Amendment.

One last thing: Don’t assume you’re safe just because GeoSpy found a picture that you took indoors. It appears they’ve cracked that nut too, having discovered that their visual model can learn “regional architectural cues.” Silly us, we thought all apartment kitchens looked the same.
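GeoSpy's model is proprietary, but the general technique it claims, locating by visual content alone, can be sketched. Assume an image encoder and a reference library of geotagged photo embeddings (everything below is invented for illustration):

```python
# A sketch of visual geolocation in general (GeoSpy's actual system is
# proprietary): embed the query photo, then return the coordinates of the
# most similar geotagged reference photo. The encoder is stubbed out and
# the reference data is invented.

import math

def embed(image_path):
    """Stand-in for a neural image encoder trained on geotagged photos."""
    raise NotImplementedError("plug a real encoder in here")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Reference library: embeddings of photos with known coordinates.
reference = [
    ([0.1, 0.9, 0.3], (48.8584, 2.2945)),    # e.g., a Paris street scene
    ([0.8, 0.2, 0.5], (40.7484, -73.9857)),  # e.g., a Manhattan corner
]

def geolocate(query_embedding):
    """Nearest neighbor by visual similarity; no metadata consulted."""
    best = max(reference, key=lambda r: cosine(query_embedding, r[0]))
    return best[1]

print(geolocate([0.15, 0.85, 0.25]))  # -> (48.8584, 2.2945)
```

The “regional architectural cues” claim amounts to the encoder having learned that kitchens, window frames, and fixtures differ subtly by region.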

As Malwarebytes advises, “It’s just become even more important to be conscious about the pictures we post online.”


A Primer on Government Purchases of Your (Very Personal) Data

8/18/2025

 
In Washington, D.C., they call it the “data broker loophole.” This is the legal maneuver by which a dozen federal agencies, ranging from the IRS and FBI to the Department of Homeland Security and the Pentagon, purchase records of Americans’ personal digital activity from third-party data brokers.
 
What is this loophole? With a straight face, the government claims that while the Fourth Amendment forbids “unreasonable searches and seizures” of our personal effects, nowhere does the Constitution forbid the government from opening its wallet and simply buying our data. And to be fair, we all routinely click the “agree” box that allows these transfers when scanning social media platforms’ long and hard-to-read terms of service.
 
This is still disingenuous at best. The digital trails we leave online – our communications, the identities of our friends and associations, our personal financial, romantic and health secrets, not to mention our search histories – reveal information that can be more intimate than a diary.
 
Americans are noticing this violation of their privacy. A recent Ipsos poll finds that roughly 90 percent of Americans say it is not acceptable for private data brokers to sell our personal data to the government.
 
Congress is certain soon to take up legislation that will require the government to obtain, as the Constitution requires, a probable cause warrant before inspecting our data.
 
In the meantime, if you want more background on the nature, extent, and abuses of the data broker loophole, here are some useful resources:
 
1. “What Are Data Brokers, and How Do They Work?” (Proton), June 20, 2025.

A detailed primer on data brokers and the risks they pose to consumers, including the sale of such data to government agencies without warrants.

2. “Anyone Can Buy Data Tracking U.S. Soldiers and Spies to Nuclear Vaults and Brothels in Germany” (Wired), Nov. 19, 2024.

Despite what has to be the most clickable headline in recent history, Wired presents a deep and substantive investigative report that reveals the extent to which the sale of personal data collected by personal devices is putting Americans in uniform and national security at risk.

  • “More than 3 billion phone coordinates collected by a U.S. data broker expose the detailed movements of U.S. military and intelligence workers in Germany – and the Pentagon is powerless to stop it.”

3. “A Continuing Pattern of Government Surveillance of U.S. Citizens” (Americans for Prosperity: James Czerniawski), April 8, 2025; see p. 4.

Eighty percent of Americans agree that the government should “obtain warrants before purchasing location information, internet records, and other sensitive data about people in the United States from data brokers.” And yet federal agencies routinely buy our data, threatening our most basic constitutional rights.

4. “The Intelligence Community Plan to Make It Easier to Buy All Your Data” (Project for Privacy and Surveillance Accountability), June 2, 2025.

The Office of the Director of National Intelligence has instituted a plan to make sure Americans’ private data is no longer decentralized, fragmented, siloed, overpriced, and limited – literally everything you might hope your personal data would actually be.

5. “Montana Becomes First State to Close the Law Enforcement Data Broker Loophole” (EFF), May 14, 2025.

Montana is the first state to close the data broker loophole, preventing law enforcement from buying personal digital data – like location, communications, and biometrics – without a warrant. Under SB 282, such data can only be accessed with a warrant, user consent, or an investigative subpoena. The law goes into effect October 1, 2025.

6. “Federal Government Circumventing Fourth Amendment by Buying Data From Data Brokers” (Criminal Legal News), April 15, 2025.

Summarizing much earlier reporting from Reason and The Wall Street Journal, this article focuses on efforts to dodge Carpenter v. United States (2018). Federal agencies routinely purchase commercial cellphone data – which tracks individuals’ movements – without warrants, skirting Carpenter, which requires a warrant for such data.

  • Companies like Google and Meta collect massive user data for advertising, while others sell it to brokers, who then sell it to law enforcement. ICE, IRS-CI, CDC, and the DIA have all used such data. Agencies claim Carpenter doesn’t apply to purchased data, revealing a legal loophole Congress has yet to close.

7. “FISA and the Second Amendment: Gunowners Beware” (Cato), Feb. 1, 2024.

If you’re a gun owner and use Apple products, you should be deeply concerned about the ability of federal law enforcement agencies to get a lot of data on you – without ever having to get a warrant.

8. “EPIC White Paper Finds Gaps in State and Federal Privacy Law Coverage of Data Brokers” (EPIC), July 29, 2025.

This report argues that data brokers exploit legal loopholes in the Fair Credit Reporting Act (FCRA) and the Gramm-Leach-Bliley Act (GLBA) to avoid compliance with modern privacy laws.

  • EPIC urges lawmakers to close these loopholes, eliminate exemptions, and regulate data brokers under unified, enforceable privacy standards. The paper also references foreign adversaries.

9. “Government Purchases of Private Data” (Wake Forest Law Review, 59:1), April 2024.

This paper questions the widespread assumption that the Fourth Amendment can never apply to commercial purchases, even though police officers can generally purchase an item available to the public without constitutional restriction.

  • The paper challenges the idea that consumers waive their rights to their cellphone data when they use apps or other services. The explanations customers see when an app asks for permission to access their data are often insufficient or misleading, and typically say nothing about personal data being sold. Further, penalizing users for disclosing their data to service providers creates harmful incentives and is incompatible with the Fourth Amendment.

The paper also draws broader lessons about the inadequacy of consumer privacy law in the United States, examines the potential for private surveillance to become government surveillance through technical and legal interoperability, and assesses possible solutions.

10. “Federal Acquisition of Commercially Available Information” (POGO), Dec. 16, 2024.

The Project On Government Oversight (POGO) warns that federal agencies’ unchecked use of commercially available information (CAI), including sensitive personal data purchased from brokers, circumvents Fourth Amendment protections.

POGO urges the Office of Management and Budget to end warrantless surveillance practices, increase transparency, and implement strong regulations. The comment highlights risks to privacy and civil liberties, especially for marginalized communities, and documents past abuses by agencies like DHS, ICE, and the FBI.

Finally, if you are interested in solutions, start with the Fourth Amendment Is Not For Sale Act, which passed the House of Representatives last year. If enacted, this measure would require the government to obtain a warrant before buying Americans’ personal information.

PPSA looks forward to this or some similar legislation being introduced in the 119th Congress.


Your Personal Data Should Be Priceless, But Now It’s Only $50.

8/12/2025

 
“Moral bankruptcy is common in this industry, but I rarely see a company so proud of it.”
– Callie Schroeder, Electronic Privacy Information Center
Farnsworth Intelligence sells highly personal data on the cheap. Its business plan is as revolutionary as it is mercenary and brazen: position itself as a legitimate business while selling data previously brought to market only in the twilight corners of the dark web.

The realm in which this company operates is euphemistically known as “open-source intelligence,” or OSINT. Once upon a time, OSINT was primarily composed of publicly available data. But don’t be fooled. To quote PC World writer Michael Crider, “This is information apparently sourced directly from data breaches, stolen from companies and services in ways that just about every country considers a crime.”

And it’s all repackaged to sell at various price points. To wit, 404 Media, whose Joseph Cox broke the story, bought a tiny slice of Farnsworth’s data wares for a mere $50. That, 404 Media reports, is all it took to eventually mine the addresses of numerous identity theft victims. Perhaps that’s why, the journalists report, the company’s website says customers can find up-to-date addresses for debtors. Need data for your multi-million-dollar divorce case? Farnsworth can do that too.

As for the potential for trade secret violations, corporate espionage, and the general use of stolen data in court, EPIC’s Callie Schroeder says stay tuned. There are likely statutes that apply to what Farnsworth is doing, but as with all things digital, judicial rulings have been inconsistent to date.

And don’t even get us started on the surreptitious value government agencies of all stripes will place on this kind of dark web data – it could be a warrantless surveillance extravaganza.

But there is no denying that shamelessness sells. After the story broke, Farnsworth issued a “404MEDIA” promo code on LinkedIn to celebrate the fact that it “has been getting a lot of attention.”


AI And Schools: Cheating Isn’t The Problem

8/12/2025

 
“The future of AI is not about replacing humans, it's about augmenting human capabilities.”
– Sundar Pichai, Google
After you read this, you’ll wish that students using AI to cheat were the biggest problem with the technology. Turns out, a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights.

But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we await the outcome of a major lawsuit by Columbia’s Knight First Amendment Institute designed to compel a school district to disclose the nature of its surveillance tech.

Instead, we continue to read more headlines like this one from Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.” You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was both culturally insensitive and contained a reference to feigned violence. It was also somewhat self-deprecating. It was therefore exactly the kind of crass, completely innocent sarcastic drivel that you would expect from a teenager.

The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours. Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school. When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, best as we can tell, these are traditional AI systems – unthinking, rules-based programs that have absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context.

Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch? That, of course, is a rhetorical question – something that we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea if any Shakespeare character ever uttered anything like that because we used AI to suggest it.)
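To see how little “intelligence” is involved, here is an illustration of the class of system at issue (not any vendor's actual code; the watch list is invented): a context-free keyword matcher.

```python
# An illustration of context-free keyword matching, the class of system the
# article describes. This is not any vendor's actual code; the watch list
# is invented.

FLAG_TERMS = ["slain", "kill", "die"]

def flag(message):
    """Return every watch term found, with no notion of irony, quotation,
    homework, or Shakespeare."""
    lowered = message.lower()
    return [term for term in FLAG_TERMS if term in lowered]

messages = [
    "O, I am slain! (rehearsing lines for drama class)",
    "this chem homework is going to kill me lol",
]

for message in messages:
    hits = flag(message)
    if hits:
        print(f"ALERT {hits}: {message!r}")  # both harmless messages fire
```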

We get that being proactive about student safety is critical. But monitoring what they type isn’t the right way to do it. Students type – and say – all kinds of tasteless statements because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place.

This isn’t working, and it’s time for parents and school systems – and yes, the students themselves who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors designed to destroy privacy, creativity, and individual expression.
​
We would also remind the school administrators who surely mean well when they initially deploy such systems not to forget the cardinal rule of any AI system: Always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal with, perhaps, the addition of a school counselor – before anything gets reported to law enforcement.


Americans’ Medicaid Data Is Now a Government Surveillance Search Tool

8/5/2025

 
“The real danger is the gradual erosion of individual liberties through the automation, integration, and interconnection of many small, separate record-keeping systems, each of which alone may seem innocuous, even benevolent, and wholly justifiable.”
– U.S. Privacy Protection Study Commission, 1977
To try to find people in the United States illegally, the Department of Homeland Security (DHS) ordered the Centers for Medicare and Medicaid Services (CMS) to let it sift through the health data of 79 million Medicaid recipients. That includes giving DHS access to sensitive personal information: addresses, birthdates, ethnicity, IP addresses, banking data, immigration status, and Social Security numbers.

Twenty states have sued in response, arguing that giving DHS access to such personal data violates privacy protections under multiple federal laws, including the Administrative Procedure Act, the Social Security Act, HIPAA, and, of course, the Privacy Act.

Several civil liberties groups, including the Electronic Frontier Foundation, EPIC, and Protect Democracy Project, have filed an amicus brief in that case.

  • In June, the deputy director of Medicaid protested the DHS plan in a memo: “Multiple federal statutory and regulatory authorities do not permit CMS to share this information with entities outside of CMS.”
 
  • The ACLU’s Cody Venzke told Wired the move was tantamount to Medicaid “being entirely repurposed as a law enforcement database,” prompting the magazine’s writers to observe that the current plan “seemingly involves vacuuming up data from across the government.”

Our own take is that this is the weaponization of data, a characterization articulated by many others. The thing about weapons, of course, is that they can be pointed in any direction. Today it’s illegal aliens. Next time, it could just as easily be wealthy taxpayers, political dissenters, or those who engage in unpopular speech. Orange County official Jose Serrano told the L.A. Times that such targeting is dangerous because “the information is being used against people.”
​
In other words, it could be used for any reason by this or a future administration against you.


Cockroaches Now AI-Enabled Spies

7/31/2025

 
Do you get a creepy feeling, a tingling on the back of your neck, when you think of a camera behind or above you illicitly watching your every move? Now, imagine if that camera has legs – and it’s crawling up your pants.
 
Yes, good old-fashioned German ingenuity has resulted in Kakerlaken – the humble cockroach – being transformed into a robust surveillance platform. These roaches, the actual insects, are fitted with tiny AI backpacks that use neural stimulators under wireless control to steer the little spies to their targets, while they carry miniature cameras and sensors on their tiny, slimy carapaces. In case you are wondering, German scientists at Swarm Biotactics are using the Madagascar hissing variety, one of nature’s largest and surely among your all-time favorite cockroaches.
 
These cyborg-bugs will do more than provide real-time video reconnaissance. They will also detect toxic gas, radiation, and heat. And the technology’s neural interface can coordinate a large number of cyber-roaches to converge on a target as a swarm. Even better!
 
These technologies, reports The Times of India, allow “remote control and autonomous swarming in tight or inaccessible environments.” Germany has invested more than $15 million to perfect this surveillance army, no doubt with Russia’s military threat in mind.
 
Germany’s invention appears to be an advance over U.S. and Chinese military projects to develop tiny, mosquito-like drones to carry out surveillance. But those tiny robots are limited by range and battery life. A roach lives off the land and can scuttle for months to a year. They make the perfect sleeper agents and require no fake beards or forged passports.
 
We can certainly see the military utility of this project. We also can’t help but note that exotic technologies developed for warfare have a way of migrating – or, perhaps in this case, scuttling – from military to civilian uses. It is inevitable perhaps that similar off-the-shelf AI and sensors will make their way into commercial and law enforcement uses.
 
It’s bad enough when you see an ordinary Kakerlaken on the floor when you turn on the kitchen light. It would be even worse if someone recorded you shrieking before you smash it with your shoe.


WhoFi and Power Meters: Surveillance Begins at Home

7/28/2025

 

“The houses have eyes now.”
– Eva Wiseman

The idea of one’s home as a castle is, increasingly, only true for actual castles – provided they’re off the grid. For the rest of us, devices are encasing us in surveillance boxes. Two recent articles bring home just how unprivate the home actually is.

First, Electronic Frontier Foundation contributors Hudson Hongo and Adam Schwartz explained that power meters – yes, power meters – are more than innocuous metal boxes we haven’t looked at since we bought the house. In California, where “government” and “overreach” have long been synonyms, it seems that even local power companies have gotten into the spy game.

EFF found that for ten years, the Sacramento Municipal Utility District (SMUD) targeted primarily Asian customers who seemed to be using more power than officials thought was needed, based on stereotypes that Asians tend to live collectively. SMUD was also looking for customers who grow weed illegally. The utility’s analysts didn’t even try to hide their motives: Among the 33,000 tips SMUD passed on to local law enforcement, a house that used 4,000 kWh was described as “4K, Asian,” and another noted with suspicion “the multiple Asians that have [been] reported” as living there.

Those nastygrams are exhibits in a lawsuit EFF filed on behalf of an Asian American advocacy group and individual residents. The complaint is simple: Order SMUD to stop searching its entire customer database without cause and prevent Sacramento police from performing dragnet searches (think entire ZIP codes), limiting them to court-warranted searches within the confines of active criminal investigations.
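The difference the complaint asks for can be stated in a few lines of code. A hypothetical sketch (accounts, usage figures, and the threshold are invented): the dragnet sweeps everyone; the warranted search touches one named account.

```python
# Hypothetical sketch of the distinction at issue. Accounts, usage figures,
# and the threshold are invented; the point is the shape of each query.

customers = [  # (account id, monthly kWh)
    ("acct-001", 650),
    ("acct-002", 4200),
    ("acct-003", 3900),
]

# Dragnet: sweep the ENTIRE customer database against an arbitrary threshold,
# with no suspect and no court order.
dragnet_hits = [acct for acct, kwh in customers if kwh >= 4000]

# Warranted search: one named account, supported by a court order issued in
# an active investigation.
def warranted_lookup(account_id, warrant_no):
    assert warrant_no, "no warrant, no search"
    return next((kwh for acct, kwh in customers if acct == account_id), None)

print(dragnet_hits)                                 # every high-usage household
print(warranted_lookup("acct-002", "2024-CW-117"))  # one authorized target
```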

In other words, do what the Fourth Amendment has always required. Not to mention, SMUD should start following California’s own law prohibiting such privacy violations. In the meantime, the state might consider changing its motto from “Eureka” to “Yikes.”

The second story is even creepier. Writing in The Register, Thomas Claburn gave Black Mirror writers what might be great material for Season 8. Earlier this year we wrote about a disturbing new Wi-Fi capability discovered by deeply naïve researchers: The ability to sense bodies.

“Wi-Fi sensing,” as the broader field is known, is something the Wi-Fi Alliance should not have started promoting in 2020, but did anyway. Turns out Wi-Fi can do a lot more than get you addicted to your phone. In addition to sensing bodies, it can see through walls to map rooms and recognize human gestures, including sign language. And now? Now apparently Wi-Fi can determine the actual identity of the humans whose bodies it senses.

Or as the Pandora-like academics who cracked the code like to call it, “WhoFi.” If you don’t feel like diving into signal processing equations, we’ll break it down: Wi-Fi waves carry information about what happened to them while they were bouncing around. Signal strength or weakness, delays, whether the signal was twisted or scattered – all of that can be fed to AI to determine the features that make our body composition unique to us, right down to our bones and organs. And all this time you were preoccupied with getting fingerprinted. Get with the times.
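For the curious, the paper's pipeline can be sketched with the neural network stubbed out. Assume an encoder trained to map channel state information (CSI) to a compact "body signature"; identification is then just a similarity search (the signature vectors below are invented):

```python
# A sketch of the WhoFi-style re-identification pipeline with the trained
# network stubbed out. The signature vectors are invented.

import numpy as np

def csi_to_signature(csi_window):
    """Stand-in for the trained encoder: maps raw channel state information
    (amplitudes, delays, scattering effects) to a compact body signature."""
    raise NotImplementedError("this is the part the paper trains")

# Signatures previously captured for known individuals.
enrolled = {
    "person_A": np.array([0.9, 0.1, 0.4]),
    "person_B": np.array([0.2, 0.8, 0.6]),
}

def identify(signature, threshold=0.9):
    """Match a fresh signature against enrolled ones by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    name, score = max(((n, cos(signature, s)) for n, s in enrolled.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else "unknown"

print(identify(np.array([0.88, 0.12, 0.41])))  # -> person_A
```

Note the word "enrolled": the method re-identifies people whose signatures were already captured, which is precisely what makes the next claim so strange.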
​
Finally, we’ll close with our decision to nominate the paper’s authors for “Achievements in Doublespeak.” Despite repeatedly describing their technique as a biometric method for re-identifying people, they insist that it can be made “privacy-preserving.”


PPSA FILES BRIEF: Searches of Your Private Data in the Cloud Amount to Illicit State Action

7/28/2025

 
We share our most personal information with banks, telecoms, online search engines, and social media platforms. They know what we spend our money on, with whom we communicate, what we search for, and what we read and post. What could be more personal than that? So whenever we allow corporations to hold such personal information in digital form, can that be taken as a presumption that we’ve just given away our right to privacy?
 
Advocates for a sweeping interpretation of the “third-party doctrine” believe so. The very act of sharing our data, they hold, automatically relinquishes any right to privacy. The government thus doesn’t need to seek a probable cause warrant to review our private information, as the Constitution requires. This is not a matter of theory. A complex web of federal and state law effectively requires communications companies – through a risk of ruinous fines – to search through the content of their customers’ data, and report suspicious results to law enforcement.
 
This is what happened to a Wisconsin man, Michael Gasper. His data was flagged by Snapchat’s automated scans as child sexual abuse material, and reported to the National Center for Missing and Exploited Children. Based on this tip, a law enforcement officer was the first actor to perform a human review of the flagged file, though he did so without bothering to obtain a probable cause warrant, as the Fourth Amendment requires.
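For context, automated scanning of this kind typically works by hash matching, not by anyone viewing images. A simplified, hypothetical sketch (not Snapchat's actual implementation; in practice perceptual hashes such as PhotoDNA are used so that altered copies still match):

```python
# Simplified sketch of automated upload scanning by hash matching. This is
# not Snapchat's actual implementation; the hash value is a placeholder.

import hashlib

# A database of hashes of known-illegal files, distributed to platforms.
KNOWN_BAD_HASHES = {"<placeholder-hash-value>"}

def scan_upload(file_bytes):
    """The automated step: no human views the file, only a hash is compared."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

def handle_upload(file_bytes):
    if scan_upload(file_bytes):
        # The platform's report goes to the National Center for Missing and
        # Exploited Children. The constitutional question in Gasper's case
        # is what happens next: the first human to open the file was a
        # police officer acting without a warrant.
        file_report(file_bytes)

def file_report(file_bytes):
    ...  # hypothetical reporting hook
```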
 
Initially, a lower court recognized that Gasper had a reasonable expectation of privacy in data he uploaded to the cloud through Snapchat. But the Wisconsin Court of Appeals held otherwise, reasoning that Snapchat’s Terms of Service – a lengthy contract most users “agree” to by checking a box, without ever reading it – eliminated any expectation of privacy. Now PPSA has filed a brief before the Wisconsin Supreme Court demonstrating that this ruling would undermine the heart of the Fourth Amendment. It would also defy a line of U.S. Supreme Court precedent that has long condemned overbroad interpretations regarding government access to third-party data.
 
  • PPSA told the Wisconsin Supreme Court that the Fourth Amendment protects the degree of privacy that existed at the Founding despite advances in technology. This is not a reach. In the 18th century, Americans often entrusted their private property – and with it, their personal information – to third parties for limited uses, such as custody, repair, or transportation. Property owners maintained an expectation of privacy over their property, including their documents, when entrusted to a holder. In the 19th century, the Supreme Court held that letters sent through the mail “can only be opened and examined” under a warrant. Why should the cloud be treated any differently?
 
  • Snapchat informed users, through its Terms of Service, that it performed automated searches for illicit material – essentially warning that it complies with the law. The state argues that this means Snapchat users have no expectation of privacy. But we told the Wisconsin high court: “when private reporting is mandated with significant penalties for noncompliance, such reports are state action, not private searches.”
 
  • What about the eyeball search conducted by the law enforcement officer? We told the court: “But even if they were private searches, law enforcement cannot use them as a stepping stone to later, more expansive searches without complying with the Fourth Amendment.”

We reminded the Wisconsin Supreme Court that the U.S. Supreme Court in Carpenter v. United States held that the government did not have the right to warrantlessly track a suspect’s location through historic call records.
 
We urge the court to realize that we can protect children from exploitation and abuse while taking the time to obtain a warrant based on probable cause. Otherwise, policy will continue to subject the private data of all Americans to warrantless searches.
 
The ransacking of our cloud-based data is much like the “general warrants” of the colonial era, when agents of the Crown could rifle through anyone’s documents at will. This practice was one of the prime outrages that sparked the American Revolution. We should not tolerate the government’s general warrants today.


Could Ring’s Policy Reversals ‘Coldplay’ Anyone?

7/25/2025

 
It is hard to believe that it has been only a week since the CEO and chief people officer of the data company Astronomer were exposed by a “kiss cam” at a Coldplay concert.

In retrospect, the couple might have evaded public scrutiny if they had not panicked and jumped apart once they realized that they were on camera. “Wow, what?” said band frontman Chris Martin in real time. “Either they’re having an affair, or they’re just very shy. I’m not quite sure what to do.”

Astronomer’s board knew what to do. Within the span of a few days, the CEO resigned. The reaction of the internet was instinctive, a matter of keyboard muscle memory. Within a few hours of the event, the image of two people in blissful embrace prompted a thousand meme-jokes showing odd-fellow global politicians and celebrities resting against each other.

As the Coldplay image settles into the geologic layers of the internet, the meme may live on alongside the ever-elastic Distracted Boyfriend and Woman Yelling at Cat memes. The sober realization that we’re always under surveillance and that you’re only one bad act from becoming a global meme is also settling in.

Wyatte Grantham-Philips of Associated Press sums up this state of affairs:

“From CCTV security systems to Ring doorbells, businesses, schools and neighborhoods use ample video surveillance around the clock. Sporting and concert venues have also filmed fans for years, often projecting playful bits of audience participation to the rest of the crowd. In short, the on-scene viewer becomes part of the product – and the center of attention.

“And of course, consumers can record just about anything if they have a smartphone in their pocket – and, if it's enticing to other social media users, that footage can quickly spread through cyberspace ...

“‘I'm not sure that we can assume privacy at a concert with hundreds of other people,’ adds Mary Angela Bock, an associate professor in the University of Texas at Austin's School of Journalism and Media. ‘We can't assume privacy on the street anymore …’

“‘It's not just the camera,’ Bock says. ‘It's the distribution system that is wild and new.’”

This distribution system includes free image locators: AI-driven facial recognition systems that can quickly match your facial features to your LinkedIn page or other images that you have publicly posted. Tyrannical governments use such systems to locate dissidents, and drug cartels use them to locate and kill informants.

Surveillance is only going to become more pervasive now that Ring is reversing reforms it made in 2024. The company had earlier pulled its “Request for Assistance” feature that made it easy for police to request and obtain footage from Ring camera owners.

  • EFF reports that Ring is adding AI identifiers to its product, promoting widespread facial recognition.
 
  • Ring is also building a new tool with police technology company Axon. Mashable reports that this new tool will not only once again allow police to request Ring footage but also let users give police permission to livestream whatever their Ring device sees.
​
The kiss-cam on the jumbotron is a matter of chance. If you don’t want to be recorded, you are still better off lurking in the crowd at a concert than walking down your own street. Professor Bock is correct – we live in a world that is wild and new.


In Mexico, One Digital ID to Rule Them All

7/22/2025

 

“Relying on the government to protect your privacy is like asking a Peeping Tom to install your window blinds.”

​- John Perry Barlow

The ruling Morena party in Mexico is moving forward with plans to require a biometric identity document for all citizens, to be rolled out by February 2026. It expands the existing CURP program (roughly equivalent to a Social Security number). The government justifies this surveillance by saying it will help track 125,000 missing persons, while mass grave sites continue to proliferate.

Yet those deeply disturbing statistics say far more about the unabated power of Mexico’s cartels than anything else. And mandating a national biometric identity system – replete with fingerprints, iris scans, signatures, and facial photographs – will soon hand anyone with real power in Mexico a turnkey national surveillance system. This chilling prospect is made even colder when one considers the ease with which the cartels have corrupted and used official resources. We recently reported on the blood the Sinaloa cartel shed when it used CCTV cameras and hacked phones to hunt and kill witnesses who helped expose El Chapo to the FBI.

Watchdogs everywhere are understandably alarmed. Beyond the obvious assault on the very idea of a right to privacy (a right recognized by Mexico’s own Supreme Court in 2022), other concerns include:

  • Irreparable damage: Kaspersky’s Isabel Manjarrez pointed out that biometric data is uniquely risky: “Unlike a password, once exposed, biometric data cannot be modified.” In other words, biometric data represents true identity whereas something like a number is merely an identifier. Such breaches are irreversible.
 
  • One-stop shop: The most vulnerable system in any scenario is, by definition, one with a single point of attack – which is precisely what a mandatory, centralized registry like this would create.
 
  • Pay to play: Mandatory registration means essential services could be denied to anyone without the ID – or to anyone tripped up by a technical glitch. It also raises the incentive for fraud and the stakes when the inevitable failures occur. All three scenarios have played out with Aadhaar, India’s biometric identity system. Mexico’s program is intended to eventually be linked to most available services, including the National Health System Registry.
​
  • Deepfake city: Biometric data is an AI-equipped hacker’s favorite prize. Faces, voices, and fingerprints are exactly the raw material needed to convincingly impersonate anyone – and, with the help of AI, to manipulate almost everyone.

And just to make extra sure that anything that can go wrong with the revised CURP system will go wrong, authorities plan to include a unique QR code tied to each individual’s identity. Because a single-point-of-entry mass-surveillance system should be as easy as possible to access, right?
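It bears spelling out how little protection a QR code adds. A QR code is just the underlying identifier reprinted as pixels, so every scan hands over the same stable, linkable ID. Here is a minimal sketch using the common Python qrcode package, with an invented CURP-style value:

```python
import qrcode

# An invented, CURP-style identifier -- illustration only.
curp = "GOMC950102HDFRRL08"

# The QR code is nothing more than this string restyled as pixels.
img = qrcode.make(curp)
img.save("curp.png")

# Any off-the-shelf scanner app recovers the identical string, so every
# scan -- clinic, bank, checkpoint -- logs the same linkable identifier.
```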
​
For an even more detailed critique of Mexico’s CURP reforms, we recommend this piece by the editorial board of La Derecha Diario. Read it as a cautionary tale for modern democracies, especially our own.


House Subcommittee Agrees: AI Crimes Lack Regulation

7/20/2025

 

​“You never change things by fighting the existing reality.”
- Buckminster Fuller

Last week the House Judiciary Subcommittee on Crime and Federal Government Surveillance held a hearing on AI and crime, and something remarkable happened. Everyone agreed:

  1. The administration’s desired moratorium on state AI regulation is a bad idea, and
    ​
  2. Existing criminal statutes everywhere need to be retrofitted to include AI-based offenses.

As for the first area of agreement, there was a collective sense that the country dodged a bullet last week when the Senate removed the moratorium from the budget bill and the House declined to reinstate it. Regarding the second issue, the consensus was clear: Buckle up. We have work to do.

Perhaps getting to work should start with persuading Members of Congress to show up at AI hearings. Other than the Chair and Ranking Member, only three of ten regular members were present. Those who did attend, however, heard from witnesses who, in combined testimony that ran 77 pages, struck similar chords:

  1. Criminals can use AI impersonation to do things we’ve scarcely imagined.

  2. AI is removing the technical barriers to crime. In the hands of industrious criminals, said witness Zara Perumal of Overwatch Data, “AI agents can learn by doing,” meaning the criminals themselves no longer have to be technical experts. “AI is removing human bottlenecks. It’s not just enhancing traditional fraud – it’s creating entirely new categories of criminal threat,” agreed Ari Redbord of TRM Labs. Just imagine, as one of the witnesses warned, “child abuse at scale.”

  3. Law and policy are late to the party. For example, “Artificial Intelligence” isn’t even a category on the drop-down menu of the National Conference of State Legislatures. Overall, a handful of states have passed a few random measures, while hundreds of initiatives either failed or remain pending. As for the federal government, forget it. The administration’s recent moratorium attempt proved that the attitude du jour is recklessly laissez-faire. Which is exactly why subcommittee chair Andy Biggs (R-AZ) needed to call this hearing and promisingly referred to it as the first of many.

  4. When it comes to fighting AI-powered crime, the best offense is a good defense. Given that the proverbial cats and genies are already out of their respective bags and bottles, it’s time to “shift the technical advantage to the defenders,” said Perumal.

Oregon and California, for example, intend to repurpose existing laws to include AI abuses. Some from-scratch legislation is also emerging, like Texas’ Responsible AI Governance Act and Tennessee’s ELVIS Act.

While most of the discussion centered on how criminals can misuse AI, we should not forget how it may be misused by our own government, which has a voracious appetite for our purchased data. AI is the critical ingredient to turn all that raw personal data into a working surveillance state.

The ACLU’s Cody Venzke reminded everyone not to overlook the Swiss Army Knife of our democracy – the Bill of Rights, especially the First and Fourth Amendments. Such protections, Venzke said, do not lose their power “simply because a new tool such as artificial intelligence was used.” They are both our sword and shield against criminals and government surveillance abuse, especially in the age of AI.


PPSA Congratulates Rep. Tom Emmer as House Passes Anti-CBDC Bill

7/17/2025

 
The House today passed the Anti-CBDC (Central Bank Digital Currency) bill, forbidding the Federal Reserve Board from ever establishing a government-issued digital currency.

“The House action was prescient, but not at all premature,” said Bob Goodlatte, PPSA Senior Policy Advisor and former chairman of the House Judiciary Committee. “With an official digital dollar, the government would have been able to surveil every transaction, no matter how small or how personal.

“Such a central bank digital currency would enable mass surveillance of American consumers, and the debanking of any targeted group,” Goodlatte said. “We are grateful to President Trump for issuing an executive order in the early days of his administration to forbid the establishment of such a digital currency which would, the president said, ‘threaten the stability of the financial system, individual privacy, and the sovereignty of the United States.’

“That was a bold and necessary move by the president, but an executive order would not keep a future administration from someday taking us down that road. Such assurance can only come from a law. Today’s victory is a testament to the perseverance of House Majority Whip Rep. Tom Emmer (R-MN) who sponsored and tirelessly advocated for passage of the anti-CBDC bill by the House.
​
“We shouldn’t be tracked by our dollars or our spending. PPSA and our followers urge the Senate to keep the momentum going and get this bill to the president’s desk.”


Discovery Order in New York Times Case Against ChatGPT Threatens Us All

7/14/2025

 
Who better to consult about the personal issues ChatGPT users raise than the AI chatbot itself? So we inquired.
 
On the personal side, ChatGPT says users most commonly ask about:
 
  • “Mental health and emotional support,” including how to handle “anxiety and stress, depression, relationship struggles and self-esteem.”
 
  • “Romantic Relationships & Dating,” including “dating advice” and “breakup help.”
 
  • “Career and Work Issues,” including “quitting a job” and “difficult coworkers or bosses.”
 
  • “Identity and Life Decisions,” including “sexual orientation or gender identity,” “religious and spiritual doubts,” and “major life choices.”
 
For several years, consumers have freely asked such questions, confident in ChatGPT’s promise that it doesn’t retain their queries once deleted.
 
Now, thanks to a pliable magistrate judge in New York, all such queries by hundreds of millions of users will be permanently stored and subject to exposure through discovery in future lawsuits or by official warrants.
 
  • This is not to say that this case over copyright violations lacks merit. While developing ChatGPT, OpenAI and some of its competitors freely helped themselves to voluminous (as in Library of Congress-sized) databases, including the contents of The New York Times, without any licenses, permission, or compensation to the holders of the rights to that content. Copyrights were ignored.
 
  • But what is leaving civil libertarian and digital industry observers agog is the sweeping order by which a judge is forcing ChatGPT to violate its promise to its customers and store all users’ queries, no matter how personal.
 
  • Courts may well find that OpenAI’s free use of copyrighted material – allegedly lifted from Russian pirate websites – was an insane business plan from the start. But the judge’s order to lock down and preserve the private queries of 800 million people is equally insane.

Only a few business and education customers are exempt. As for the rest of us, virtually anything asked – no matter how personal – is a permanent record that lawyers in a nasty divorce or commercial dispute, or a government agent, could pry open with the right legal tools.

The actual number of users affected is estimated to be 10 percent of the world population. Yet as staggering as the number of affected users is, The Hill contributor and privacy attorney Jay Edelson says the case’s legal implications are of far greater concern:

“This is more than a discovery dispute. It’s a mass privacy violation dressed up as routine litigation … If courts accept that any plaintiff can freeze millions of uninvolved users’ data, where does it end? Could Apple preserve every photo taken with an iPhone over one copyright lawsuit? Could Google save a log of every American’s searches over a single business dispute? …

“This precedent is terrifying. Now, Americans’ private data could be frozen when a corporate plaintiff simply claims — without proof — that Americans’ deleted content might add marginal value to their case. Today it’s ChatGPT. Tomorrow it could be your cleared browser history or your location data.”

Blame not the plaintiff in this case, understandably concerned about the ransacking of its copyrighted material. Blame the judge for ordering such broad discovery. A better approach would have been a randomized sampling of a large number of users’ queries, anonymized to protect their privacy.
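To make that alternative concrete, here is a minimal sketch of what randomized, pseudonymized sampling could look like. Every name and number in it – the salt, the sample rate, the data shapes – is our own assumption for illustration, not anything from the docket:

```python
import hashlib
import random

SALT = b"held-by-a-neutral-custodian"  # assumed to be kept apart from the data
SAMPLE_RATE = 0.001                    # preserve 0.1% of logs -- invented figure

def pseudonymize(user_id: str) -> str:
    """Replace a user ID with a stable token that is not directly reversible."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def sample_for_discovery(logs):
    """logs: iterable of (user_id, query) pairs; yields the preserved sliver."""
    for user_id, query in logs:
        if random.random() < SAMPLE_RATE:
            yield pseudonymize(user_id), query
```

Even a scheme like this is imperfect – pseudonymized text can often be re-identified – but it would put a fraction of a percent of users at risk rather than all 800 million.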

Users – all of us whose private data is now at risk – were never consulted by the court. Two attempts by private citizens to intervene were smugly dismissed by the judge.
Edelson writes:

“Maybe you have asked ChatGPT how to handle crippling debt. Maybe you have confessed why you can’t sleep at night. Maybe you’ve typed thoughts you’ve never said out loud. Delete should mean delete.”
​

Let us hope appellate courts replace this magistrate judge’s chainsaw with a scalpel.


New Week, New Android Spyware Threat

7/1/2025

 
Malware continues to evolve more quickly than the Android operating system. Lifehacker says the original 2021 version of the Godfather banking trojan is back and more prolific than ever.

We’ll spare you the technical details; just know that the previous version could draw a fake screen on top of banking and crypto apps to mimic them. Unsuspecting users would then enter their credentials assuming it was business as usual. The Godfather targeted hundreds of financial apps around the world.

The new iteration of Godfather creates a complete virtual environment on phones and then makes copies of financial apps to run there. When users open one of their real apps, they are invisibly redirected to the virtual environment where everything they do is captured and harvested. The malware can even control those apps remotely, initiating transfers and payments while users go about their day. And because everything is hidden in a virtual environment, on-device security measures are likely to miss it.

Fortunately for most of our readers, the Godfather is presently focused on financial apps in a few European countries. But if it’s anything like the last version, that could soon be a dozen nations. Given that some estimates put Android’s smartphone market share at 72 percent worldwide, it’s just a matter of time until the new Godfather finds its way to app-loving Americans.

In the meantime, say experts, Android users should make sure Google Play Protect is enabled and that every app is kept up to date via the Google Play Store. Out-of-date apps are dangerous apps in any operating system.

Finally, depending on the specific version of Android OS, there’s some variation of a setting called “Install unknown apps” – which most users probably don’t even realize is there. Review that list and make sure no apps, especially browsers, have that permission.
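For readers comfortable with a command line, there is another way to spot sideloaded software. The sketch below is our own illustration, not Lifehacker’s; it assumes a computer with Android’s adb tool installed and USB debugging enabled on the phone, and it lists third-party packages with their installers so anything that did not come from the Play Store stands out:

```python
import subprocess

# List third-party packages (-3) along with their installer (-i) via adb.
out = subprocess.run(
    ["adb", "shell", "pm", "list", "packages", "-3", "-i"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # Typical line: "package:com.example.app  installer=com.android.vending"
    if "installer=com.android.vending" not in line:
        print("Review:", line)  # not installed from the Play Store
```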

We’re in an all-out footrace against the fraudsters. Knowing that you are in this race is the first necessary step to protecting your hard-earned savings. Then keep up with your security measures to keep private information private.
​
Or, as Vito Corleone says in the original Godfather, “Never tell anyone outside the family what you’re thinking again.”


How To Build A Surveillance State Without Really Trying: Naïve Magistrate Declares “Privacy In Our Time”

6/30/2025

 
If you wanted to build a mass surveillance program capable of monitoring 800 million people, where would you start? Ars Technica’s Ashley Belanger found the answer: You order OpenAI to indefinitely retain all of ChatGPT’s regular customer chat logs, upending the company’s solemn promise of confidentiality for customers’ prompts and chats.

Which is what Ona Wang, U.S. Magistrate Judge for the Southern District of New York, did on May 13. From that date forward, OpenAI has had to keep everything – even users’ deleted chats – stored “just in case” it’s needed someday.

We asked ChatGPT about this, and it told us:

  • Yes, your current chat questions (and past ones you may have deleted or used in “temporary mode”) are being retained in a secure, segregated legal-hold system.

So our lives – health, financial, and professional secrets – are now being stored in AI chats that Judge Wang thinks should be kept on file for any warrant or subpoena, not to mention any Russian or Chinese hacker.

Not included in the judge’s order are ChatGPT Enterprise (used by businesses) and Edu data (used by universities). Problem: Many businesses and students use regular ChatGPT without being Enterprise subscribers, including entrepreneur Jason Bramble. He asked the judge to consider the impact of her ruling on – well, you name it – his company’s proprietary workflows, confidential information, trade secrets, competitive strategies, intellectual property, client data, patent applications, trademark requests, source code, and more.

  • Perhaps the greatest irony of the judge’s order is that it decimates the privacy-focused “Temporary Chats” feature OpenAI recently debuted. They are “temporary” no longer. Those chats were designed to vanish once you closed them and were never part of the user’s account history or memory – secret, one-off conversations with no record. Now, they are digitally accessible memories.

As for the underlying case giving rise to all of this overreach, it is The New York Times’ copyright infringement lawsuit against OpenAI. It’s a big case, to be sure, but no users pushed back except Jason Bramble and one other ChatGPT user, Aidan Hunt.

Hunt had learned about the judge’s order from a Reddit forum and decided it was worth fighting on principle. In his motion, he asked the court to vacate the order or at least modify it to exclude highly personal/private content. He politely suggested that Judge Wang was overstepping her bounds because the case “involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate to institute a nationwide mass surveillance program by means of a discovery order in a civil case.”

Judge Wang’s response was petulant.

She noted that Hunt mistakenly used incident when he meant incidental. And then she casually torpedoed two hundred years of judicial review by denying his request with this line: “The judiciary is not a law enforcement agency.” Because, after all, when have judicial decisions ever had executive branch consequences?

Judge Wang had denied business owner Jason Bramble’s earlier request on the grounds that he hadn’t hired a lawyer to draft the filing. The magistrate is swatting at flies while asking ChatGPT users to swallow the herd of camels she’s unleashed. Even if a properly narrowed legal hold to preserve evidence relevant to The New York Times’ copyright infringement claim were appropriate, the judge massively overstepped in ordering OpenAI to preserve global chat histories.

The complaints of Bramble and Hunt, as well as similar pleadings from OpenAI, aim true: the court’s uninformed, overreaching order ignores the reality of pervasive surveillance and betrays everyone who accepted the promise that their conversations with ChatGPT were truly private.

Judge Wang wondered, Hamlet-like, whether the data could be anonymized to protect users’ privacy. As we’ve written before, and as is now commonly understood, governments and hackers have the power to deanonymize anonymized data. As MSN points out, the more personal a conversation is, the easier it becomes to identify the user behind it.
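A toy example shows how little it takes. The records below are entirely made up, but the mechanics are the standard quasi-identifier attack: a few incidental facts mentioned in an “anonymous” chat are enough to single a person out of a population.

```python
# Entirely fabricated records, for illustration only.
population = [
    {"name": "A. Jones", "zip": "10001", "job": "nurse",   "night_shift": False},
    {"name": "B. Smith", "zip": "10001", "job": "teacher", "night_shift": True},
    {"name": "C. Diaz",  "zip": "10001", "job": "nurse",   "night_shift": True},
]

# Suppose a chat about insomnia mentions the user's ZIP code, profession,
# and night-shift schedule -- no name required.
leaked = {"zip": "10001", "job": "nurse", "night_shift": True}

matches = [p for p in population
           if all(p[k] == v for k, v in leaked.items())]
print(matches)  # only C. Diaz remains
```

Scale the population up to 800 million and the arithmetic still works; it just takes a few more leaked facts per user.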

In declaring that her order is merely about preservation rather than disclosure, Judge Wang is naively declaring “privacy in our time.” As in 1938, we stand at the edge of a gathering storm – this time, not a storm of steel, but of data.

What can you do? At the least, you can start minding your Ps and Qs – your prompts and questions. And take to heart that “delete” doesn’t mean what it used to, either.

Here's a chronology of Ashley Belanger’s detailed reporting on this story for Ars Technica:
​
June 4
June 6
June 23
June 25


Spy On a Wrist: How Smartwatches Can Penetrate “Airgapped” Laptops

6/24/2025

 
There you are in an overstuffed chair at your favorite coffee shop, sipping a vanilla sweet cream cold brew and working on that top secret professional project. But you know your laptop is vulnerable to snoopers through local Wi-Fi, so you “airgap” it – cut it off from networks.

This everyday form of airgapping means keeping your laptop unplugged from any physical internet (Ethernet) line. You would also disable all but the most basic programs and turn off your Wi-Fi and Bluetooth. You might also want to arrive with plenty of juice to keep your laptop charged, given that some public USB ports used for charging have been known to be converted into data extractors – a scam called “juice jacking.” (The TSA and the FBI warn that this is common at airports.)
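On a Linux laptop, for example, the radios can be cut from a short script as well as from the settings menu. A minimal sketch, assuming the standard rfkill utility is installed (macOS and Windows have their own toggles):

```python
import subprocess

# Soft-block all wireless radios before working offline.
for radio in ("wifi", "bluetooth"):
    subprocess.run(["rfkill", "block", radio], check=True)

# Verify: each radio should now report "Soft blocked: yes".
print(subprocess.run(["rfkill", "list"], capture_output=True, text=True).stdout)
```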

Are you safe? Probably. But now we know that a person with a smartwatch seated several tables away might still be able to extract some of your data – by pulling it out of the air. All because you forgot to disable your laptop’s audio systems.

This is the finding of Ben-Gurion University researcher Mordechai Guri, who has made a career of finding exploitable weaknesses in computer networks of all kinds. He excels in identifying ways to break into standalone systems, long considered the gold standard in cyber security because they’re not connected to the outside world. Where the rest of us see only air, Dr. Guri observes an invisible world of electromagnetism, optics, vibration, sound, and temperature – all of them potential channels for covertly stealing and transmitting our data.

Now he’s suggesting that the humble smartwatch can take advantage of sound waves to defeat airgapped systems.

But just as no man is an island, no computer is completely, truly airgapped. Dr. Guri writes:
“While smartphones have been extensively studied in the context of ultrasonic covert communication, smartwatches remain largely unexplored. Given their widespread adoption and constant proximity to users, smartwatches present a unique opportunity for covert data exfiltration.”

It isn’t easily done, to be sure, but it’s doable. Here’s what Dr. Guri describes:

  • An insider compromises a secured network or device (or your laptop) and installs malware.
 
  • A nearby smartwatch has been modified to take advantage of various connectivity capabilities, turning it into a covert listening device. It can capture, for example, everything you’re typing into that text editor or spreadsheet.
 
  • The malware and the smartwatch connect. Beyond the range of human hearing, the malware transmits its stolen data at ultrasonic frequencies using the computer’s speakers. (A toy sketch of this encoding step follows the list.)
 
  • Computer and smartwatch can be up to 18 feet apart and still exchange data – more than enough range to steal a password in about a minute, or a 4,096-bit encryption key in about an hour.
 
  • The smartwatch decodes the transmission and sends it where it needs to go via its many available connections.
 
  • Mission accomplished.
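To make that ultrasonic step concrete, here is a toy sketch of binary frequency-shift keying, the general kind of encoding such a channel can use. The frequencies, bit rate, and payload are our illustrative choices, not parameters from Dr. Guri’s papers:

```python
import numpy as np

RATE = 48_000             # audio samples per second, typical laptop output
F0, F1 = 18_500, 19_500   # near-ultrasonic tones encoding bit 0 / bit 1
BIT_SEC = 0.05            # 20 bits per second -- real channel rates vary

def modulate(bits: str) -> np.ndarray:
    """Turn a bit string into a waveform the speakers can play."""
    t = np.arange(int(RATE * BIT_SEC)) / RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits])

def demodulate(signal: np.ndarray) -> str:
    """Recover bits by finding the dominant frequency in each time slice."""
    n = int(RATE * BIT_SEC)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        peak = np.fft.rfftfreq(n, 1 / RATE)[np.argmax(spectrum)]
        bits.append("1" if abs(peak - F1) < abs(peak - F0) else "0")
    return "".join(bits)

payload = "0100000101010100"  # the ASCII bits of "AT"
assert demodulate(modulate(payload)) == payload
```

A real attack would add framing, error correction, and whatever rate the hardware tolerates; the sketch only shows the core trick of hiding bits just above human hearing.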

What makes the overlooked smartwatch so effective in this scenario? Pretty much everything about it, says Dr. Guri: “Smartwatches possess several technological features that enable them to receive ultrasonic signals effectively.” These include high-sensitivity microphones, advanced signal processing software, and powerful chips. (Dr. Guri’s personal site is appropriately named covertchannels.com and offers a deep-dive into his extensive research history.)
​
A smartwatch attack is a low-probability event for most people, at least for the moment. But the takeaway is that airgapping is still at best one layer of protection, not a guarantee of perfect security.  


Will Meta Do on WhatsApp What It Appears to Have Done on Facebook and Instagram?

6/23/2025

 
The news broke last week that Meta will soon post ads on a dedicated segment of WhatsApp. This is a big change for a popular messaging app that has long shunned advertising.

Ads will not appear in WhatsApp’s chats with friends, instead appearing in a special “Updates” section. But for ads to be effective, Meta will still need to collect users’ location and language data to target ads to individual users’ accounts. Meta insists no information will be gleaned from messages or calls.

“The fact that Meta has promised that it’s adding ads to WhatsApp with privacy in mind does not make me trust this new feature,” Lena Cohen of the Electronic Frontier Foundation told Fast Company. “Ads that are targeted based on your personal data are a privacy nightmare, no matter what app they’re on.”

This story comes on the heels of another recent big story about Meta, one that should inform any evaluation of the company’s promises about WhatsApp. Meta has been making aggressive use of users’ data on its other two main platforms. Here’s what we know about that:

1) The Washington Post reports that Meta, desperate to build a “digital” version of real customers for advertising purposes, secretly positioned Facebook and Instagram to silently track Android users’ browser activity, then forwarded that information to its servers. If you think about all the private searches you might have performed on your smartphone browser, that is a sobering realization.

2) Meta’s apparent tactics touch on multiple areas of ethical and legal concern:

  • If true, Meta bypassed Android’s privacy safeguards using some of the same tactics as malware. Android was designed to prevent apps from tracking what users do in browsers like Chrome and Firefox. Apps are intentionally walled off, or “sandboxed,” to keep them from snooping around. But Meta manipulated its popular “Pixel” JavaScript code to let its apps secretly track Android users’ web activity on more than one million sites. (A generic sketch of this kind of sandbox bypass follows the list.)

  • “Sandboxing” – or the segregation of data in apps – has been common practice in browser security since the early 2000s, and Google’s Chrome helped lead the way. Sandboxing later became a core architectural principle once smartphones went mainstream – to help guard against the possibility of rogue apps accessing one’s personal data. Ars Technica calls sandboxing one of the web’s “fundamental security principles.”

  • As long as users were logged into Facebook or Instagram, the apps were surreptitiously tracking and reporting all browser activity. Known privacy guardrails were deliberately bypassed, including incognito mode, clearing cookies, and the use of VPNs.

  • To succeed, Meta is suspected of deliberately finding and then hiding work-arounds that exploited Android’s native weaknesses. Android is designed and maintained by Google, which was none too pleased when the story broke. Meta’s work-arounds abused Android’s capabilities to “blatantly violate our security and privacy principles,” the company told Sky News.
​
  • Because Meta’s methods bypassed user consent, its actions appear to violate the EU’s GDPR and perhaps some U.S. rules as well. While not explicitly about sandboxing, the FTC Act clearly covers misleading or harmful data practices of this kind.
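To see why reporters keep reaching for the word “malware,” here is a generic, deliberately simplified sketch – ours, not Meta’s actual code – of how a native app listening on localhost can receive data from any web page the user visits. This is precisely the cross-app wall that sandboxing is supposed to maintain:

```python
import http.server
import socketserver

# A native app that quietly listens on localhost. A script embedded in any
# web page can then hand it data with a plain request, e.g. from JavaScript:
#     fetch("http://127.0.0.1:12387/?id=" + browsingIdentifier)
# The port number and parameter name are invented for this sketch.

class CollectorHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The "sandboxed" browser has just delivered an identifier to a
        # different app on the same device, linking the two contexts.
        print("received from browser:", self.path)
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # silence default request logging
        pass

with socketserver.TCPServer(("127.0.0.1", 12387), CollectorHandler) as server:
    server.serve_forever()
```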

For its part, Meta called the whole affair a “potential miscommunication,” but agreed to pause the “feature.”

Meta wasn’t the only offender. A Russian tech company called Yandex has apparently been doing the same since 2017, but it flatly denies any wrongdoing. Anyone with Yandex apps on their phone (Android or otherwise) should immediately click “Uninstall.” As for a relatively more secure Android browser, consider Brave; some reporting suggests it successfully protected its users from Meta’s and Yandex’s incursions.

We understand that consumers give away a bit of privacy in exchange for a free service that selects ads for them on an anonymized basis. As Meta expands its ad presence to WhatsApp, however, Congress and the public need a better understanding of what the company has already done with apps on Facebook and Instagram. PPSA will watch developments in this story closely.
