Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

The Wall Street Journal Is Wrong – We Can Reform Section 702 Without Endangering National Security

4/14/2026

 
Did you see The Wall Street Journal editorial Monday morning entitled “Playing National Security Roulette”? The editors argue that anything less than a clean reauthorization of the FISA Section 702 surveillance authority will “put the lives of Americans at risk.”
 
The Journal editors acknowledge that this authority, enacted by Congress to surveil foreign threats abroad, was misused by FBI agents who ran searches on political protesters, political donors, and Members of Congress. “But the intelligence community has since instituted safeguards on how searches must be authorized,” the editors tell us.
 
Thus, according to The Journal, adding any amendments to Section 702 would be a reckless gamble with national security – and reforms are not needed anyway, because the Reforming Intelligence and Securing America Act (RISAA) fixed all the problematic parts of Section 702.
 
Wrong on both counts.
 
Reforms Would Not Compromise National Security
 
Reformers want to amend the law to make the program consistent with the Fourth Amendment by requiring probable cause warrants before inspecting Americans’ communications.
 
But the warrant requirement being proposed for surveillance of Americans contains very clear exceptions for “exigent circumstances,” such as terrorist threats, as well as exceptions for every single other type of search the administration has claimed is helpful in protecting national security, including defenses against cyberattacks. Not only would these reform proposals allow the FBI to proceed without obtaining a warrant in an emergency, but the Bureau would also have great latitude as to what constitutes an emergency.
 
In short, warrants would be required in cases where the government is conducting a fishing expedition with no nexus to national security – such as an agent searching for the communications of his Tinder date, or searching for the communications of thousands of donors to a congressional campaign – but would not be required in exigent cases with national security implications. 
 
The FBI Continues to Violate the Law

A FISA Court opinion in March 2025 revealed that the FBI had been systematically violating statutory requirements. In August 2024, DOJ overseers learned that the FBI was operating a “filtering” tool that allowed it to query Section 702 data under the radar. These U.S. person “searches” or queries were not counted, tracked, or audited, nor were they approved by an attorney or supervisor, as required by law.
 
Thus, the actual number of U.S. person queries for 2024 remains unknown and outside of any audits.
 
A new FISA Court opinion found that the systemic violations continue. According to The New York Times and The Washington Post, the FISA Court issued a classified opinion that reportedly reveals that even though DOJ shut down the filtering tool the FBI used in 2024, the FBI has been using another, similar filtering tool to conduct queries without following the requirements of RISAA.
 
Thus, the systemic violations of RISAA are not fixed. They are ongoing.
 
In Summary:
 
The warrant requirement proposals contain sufficient exceptions to counter potential terrorists, cybersecurity attacks, and other threats to the American people. And contrary to The Journal’s assertion that the RISAA “reforms appear to be working,” they are clearly not.
 
One final note – while the reauthorization of the Section 702 statute has an April 20 deadline, FISA Court surveillance orders are in effect through next spring. The House has plenty of time to debate these reform measures. There is no need for the kind of panic The Journal – obviously influenced by intelligence community spin – is fomenting.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

ICE’s Zero-Click Threat to Privacy and the First Amendment

4/12/2026

 
DHS Secretary Markwayne Mullin. PHOTO CREDIT: Gage Skidmore
Immigration and Customs Enforcement (ICE) is now using powerful “zero-click” commercial spyware that can defeat encrypted communications by compromising the phone itself, NPR reports – a step that should alarm anyone concerned about privacy, civil liberties, and constitutional limits on government surveillance.

At the center of the NPR story is “Graphite,” a tool developed by Paragon Solutions. Unlike traditional hacking methods, Graphite relies on “zero-click” exploits – meaning it can infiltrate a phone without the user doing anything at all. No suspicious links. No malicious attachments. Just silent compromise. 

If that sounds familiar, it should. As PPSA has previously warned in our analysis of Pegasus spyware, zero-click tools represent the cutting edge of surveillance: invisible, unaccountable, and extraordinarily intrusive. Like a pathogen spreading without contact, they turn personal devices into always-on government surveillance tools.

From Counterterrorism to Domestic Use

ICE says the technology is aimed at dismantling fentanyl trafficking networks and other serious threats. But NPR’s reporting raises serious concerns about how broadly such tools might be used – and against whom. 

ICE has expanded its surveillance footprint domestically, including monitoring protests and other constitutionally protected activities. The risk is clear: tools justified for national security can quickly veer into routine domestic enforcement – or even the surveillance of constitutionally protected protests. Once established, Graphite will almost certainly migrate to other agencies, from the FBI to the IRS, supercharged by AI technology.

If spyware of this power can be deployed with minimal judicial oversight, it becomes the digital equivalent of a general warrant – precisely what the Fourth Amendment was designed to forbid.

A Tool with a Troubling Track Record

The risks are not hypothetical. NPR reports that Graphite has already been used by foreign governments to target journalists and members of civil society. Researchers identified cases in which phones belonging to journalists and humanitarian workers were compromised through messaging platforms like WhatsApp. 

This mirrors the global experience with Pegasus and similar tools, which have repeatedly been used not just against criminals, but against dissidents, reporters, and political opponents.

The Constitutional Stakes

The deployment of zero-click spyware inside the United States raises profound constitutional questions. Unlike traditional surveillance, which might be constrained by warrants or physical limitations, these tools allow the government to access the most intimate details of a person’s life – messages, photos, location, even real-time communications – without detection.

Layer that capability onto the federal government’s growing practice of purchasing Americans’ data from brokers, and the result begins to resemble a comprehensive, warrantless surveillance architecture.

Even ICE’s assurances that its use will “comply with constitutional requirements” ring hollow without transparency or meaningful oversight. 

The Section 702 Debate

Congress now faces a choice. It can allow this technology to take root in domestic law enforcement with minimal guardrails, or it can insist on strict warrant requirements, transparency, and accountability before such tools become entrenched. The House vote on the reauthorization of the FISA Section 702 surveillance authority, set to take place within days, is the best chance Congress will have to set the precedent for guardrails on out-of-control federal surveillance.

If zero-click surveillance becomes routine, the line between targeting criminals and monitoring citizens may disappear altogether.


You May Be a Domestic Terrorist and Not Even Know It

4/11/2026

 
FBI Director Kash Patel (center). Official White House Photo by Molly Riley.
Yes, you – and us, and everyone else. We may all soon be tracked in the FBI’s proposed database for domestic terrorism. As Ken Klippenstein reports, buried inside the administration’s 2027 budget “is a new FBI-led center dedicated to ‘proactively’ hunting Americans the government classifies as so-called domestic terrorists.”

It’ll be a busy place, by the looks of it, operating as a joint mission center where 10 federal agencies watch out for any hint of the following beliefs:

  • anti-Americanism
  • anti-capitalism
  • anti-Christianity
  • support for the overthrow of the U.S. government
  • extremism on migration
  • extremism on race
  • extremism on gender
  • hostility towards those who hold traditional American views on family
  • hostility towards those who hold traditional American views on religion
  • hostility towards those who hold traditional views on morality

With the exception of overthrowing the government, this is a highly subjective list, capable of being interpreted (or added to) as any current or future administration of any stripe sees fit. It could include any atheist or agnostic, any supporter of Bernie Sanders, anyone who has leftward views on gender and the family. These standards, of course, could be inverted by the next administration to make suspects out of people who are critical of progressive policies, restrictive gun laws, or big government.

Today, we target atheists. Tomorrow, the FBI could once again target “radical traditional Catholics.”

The free speech implications alone are beyond chilling, but as a privacy matter, it’s draconian. It blurs the distinction between George Orwell’s “thoughtcrime” and actual terrorism. And to work, it must rely on artificial intelligence crunching vast amounts of social media data in ways that reduce the Fourth Amendment to an afterthought.

No less concerning, when this database is paired with the personal data these same federal agencies obtain by purchasing Americans’ digital records from third-party data brokers, you can see all the elements of a total surveillance state falling into place.

It is hard to imagine that such broad categories can yield meaningful intelligence about real terrorists. But it may be just enough for the government to build a dossier on you.


Clapper, Brennan, Gerstell, and Ledgett Misinform Congress About the Strong National Security Exceptions Contained Within Proposed Reform Amendments to FISA Section 702

4/10/2026

 

The Fibbing Four Are at It Again

Former Director of National Intelligence James Clapper (right). PHOTO CREDIT: LBJ Library photo by Jay Godwin
“Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”

That was the question Sen. Ron Wyden (D-OR) put to then-Director of National Intelligence James Clapper in an open hearing in 2013.

“No sir,” Director Clapper responded, then qualified his statement by saying, “not wittingly.”

It has since been proven – and is a matter of government record – that the NSA’s global trawl of data has pulled in the communications of Americans by the millions over the years since. Quite a record for a surveillance authority enacted by Congress to surveil foreign targets on foreign soil.

See for yourself the misuse of this authority revealed in a rare public scolding of the FBI by the secret FISA Court over “widespread violations” of Americans’ privacy with Section 702 data. Or look at the revelations issued by that court of specific instances of how the FBI misused warrantless Section 702 material against U.S. political figures. It is widely reported that the FBI has freely helped itself to Section 702 data, searching the data of more than 19,000 congressional donors, a state judge, and Members of Congress.

The Hunter Biden Laptop Deceit

Former Director Clapper was joined by former CIA Director John Brennan, former NSA General Counsel Glenn Gerstell, and former NSA Deputy Director Richard Ledgett, along with almost 50 other former senior intelligence officials, in signing a letter released just before the 2020 election. They chimed in on a New York Post story about the contents of a laptop owned by Joe Biden’s son, Hunter.

This time, the Fibbing Four solemnly told the American people that the contents of the Hunter Biden laptop had “all the classic earmarks of a Russian intelligence operation.” The FBI later determined that the emails and contents of the laptop were “not tampered with or manipulated.” Even The New York Times was forced to report that the laptop and its contents were genuine.

The irony is that former intelligence officials abusing their continued access to classified information to skew a national election is about the most Russian thing they could do.

Misinformation About Reform Legislation

Now Director Clapper, and his Hunter Biden colleagues Brennan, Gerstell, and Ledgett, have fired off another letter. This one is directed at Congress telling Members not to allow any reform amendments to the Foreign Intelligence Surveillance Act authority, Section 702, because that would degrade the government’s ability to protect Americans.

“If Congress fails to authorize Section 702, history may judge the lapse of Section 702 authorities as one of the worst intelligence failures of our time,” they write, joined by enough of their colleagues to get the number of signatories up to around 50. “As Members of Congress know, we face sophisticated threats from China, Russia, Iran, and North Korea, including the real possibility of devastating cyber-attacks and state-sponsored terrorism directed at Americans.”

These are, of course, real and active threats. But the Fibbing Four gloss over the fact that all of the reform proposals before Congress contain exceptions for “exigent circumstances.” These exceptions would allow intelligence agencies to react to time-sensitive emergencies, such as the so-called “ticking time bomb” scenario. These reform proposals also contain exceptions for cybersecurity and warrantless searches of metadata, requiring court approval only to examine the content of Americans’ communications.

Fool Me Once…

The good news is that Congress is getting wise to such shenanigans, which reliably surface just before every vote. Before the last Section 702 reauthorization two years ago, champions of the intelligence community put out a cryptic story about “a serious national security threat” – reports of “Russian space nukes” that turned out to be theoretical, not imminent.

Our advice to Congress is to look at the plain language of the reform legislation that allows the intelligence community to continue to defend America – while upholding our constitutional rights as well.

We can defend America and obey the Constitution at the same time. Don’t let anyone tell you otherwise.


Congress, Take Note – Americans Are Worried About Their Personal Data

4/6/2026

 
As Congress prepares to debate the reauthorization of FISA Section 702, lawmakers should understand one simple fact: Americans do not trust the government with their data. A new poll shows that 74 percent of Americans are concerned about the privacy and security of their personal data in government hands.
 
The poll, released last week by the Center for Democracy & Technology (CDT), shows that 79 percent of respondents agreed that “Congress should use its authority to hold the government accountable when it ignores privacy laws.”
 
“People want their privacy protected,” said CDT’s Elizabeth Laird, “and bipartisan majorities want their elected leaders to do something about it. Lawmakers who ignore privacy are significantly out of step with their constituents.”
 
The high level of public concern about the warrantless access by government agencies to Americans’ data – at the heart of the Section 702 debate – was consistent regardless of respondents’ political affiliation or age group. The survey also revealed specific concerns about how that data is used – and misused:
 
  • 68 percent are concerned about personal data being shared with law enforcement across the federal, state, and local levels

  • 67 percent are concerned about personal data being shared with the Department of Homeland Security

  • 83 percent are concerned about a breach of a government database exposing their personal data

  • 73 percent agree that, without privacy laws, government agencies would track and monitor anyone they choose

  • 44 percent say they would forgo government benefits rather than risk misuse of their personal data
 
These numbers are a warning. Poll after poll has shown that Americans across the political spectrum are deeply uneasy about how the government collects, searches, and uses their data. That concern is especially acute when it comes to warrantless searches of Americans’ communications under Section 702 – so-called “backdoor searches” that bypass the Fourth Amendment.
 
Nor are these fears hypothetical. From millions of warrantless queries in recent years to the government’s routine purchase of Americans’ data from brokers, the gap between surveillance authorities and constitutional protections has become impossible to ignore. If “trust is the lifeblood of democracy,” then these findings suggest that America is running dangerously low.
 
Congress now faces a choice. It can once again rush through a “clean” reauthorization of Section 702, ignoring both public opinion and constitutional concerns. Or it can act – by requiring warrants for searches of Americans’ communications, closing the data broker loophole, and imposing real oversight.
 
Fortunately, the path forward is clear:
 
—Reform Section 702.
 
—Restore the warrant requirement.
 
—Rebuild public trust.


House Members Should Not Be Stampeded – Congress Has All Year to Debate and Fix Section 702

3/31/2026

 
As the April 20 expiration of FISA Section 702 approaches, a familiar script is playing out on Capitol Hill. Members are warned that any delay in reauthorizing Section 702 – which enables U.S. intelligence agencies to surveil foreign threats – risks allowing a terrorist attack to unfold on American soil.

This “you will have blood on your hands” argument is not just wrong. It is a cynical ploy to short-circuit a debate that Congress owes the American people, one that would in no way endanger national security.

Here is the reality: Letting the statutory authority of Section 702 lapse does NOT mean America’s surveillance goes dark. Surveillance continues under Section 702 certifications issued by the Foreign Intelligence Surveillance Court, which remain valid until their expiration – currently extending to March 2027.

This is not speculation. It is how this law works. As The New York Times has reported, legal directives to communications providers “shall continue in effect” under existing court authorizations.

Yet lawmakers are again being told by the intelligence community to act immediately or risk catastrophe. This fear-based messaging has become routine, repeatedly stampeding Congress into reauthorizing Section 702 without strong reforms to protect Americans’ privacy.

Enacted by Congress to target foreign threats abroad, Section 702 has been used to conduct millions of warrantless searches of Americans’ communications – peaking at 3.4 million in 2021. These are the predictable results of allowing the government to conduct “backdoor searches” without a warrant.

In 2024, a bipartisan amendment to require warrants for searches of Americans’ communications failed in a 212–212 tie in the House. That vote showed how close meaningful reform is – if lawmakers are given the time to pursue it.

Supporters of a “clean” extension – one without any reform amendments – are once again promising a debate on reforms later. Such promised reform debates never arrive. Recent history gives no reason to believe that this time will be different.

Congress has time to debate well beyond April 20. It has time to patiently consider reforms, such as adding a warrant requirement before 702-derived communications of Americans can be inspected.

The choice for Congress is not between national security and civil liberties. It is between rubber-stamping a flawed surveillance authority and doing the hard work of fixing it for their constituents.


Coming Soon: Cars that Decide If You Should Drive

3/30/2026

 

“Think Minority Report, But for Your Morning Commute”

“Zero crash fatalities” was the way some advocates touted the vehicle safety mandates authorized by the infrastructure package that Joe Biden signed in 2021. As admirable as such goals sound, the mandates are an ill-conceived, undefined approach that, from a privacy standpoint, has more holes in it than a cocktail strainer.

Now, three years past the original deadline, NHTSA is barreling ahead with a model-year 2027 implementation while still not having posted a draft rule. The possible design architecture is a nightmare – including AI-powered infrared cameras that actively monitor biometrics (e.g., pupil dilation) to determine whether a driver is “impaired.”

“Your car simply watches and decides whether you’re fit to drive,” Gadget Review contributor C. Da Costa writes – “Think Minority Report, but for your morning commute.”

Unlike drunk driving laws, which already exist and work within traditional constitutional safeguards, these mandates are so vague that they slip beyond such protections entirely, warns Lauren Fix:

“No breath test is required. No police officer is involved. The judgment is made by software. Once flagged, the vehicle can refuse to start or restrict operation – and here is the critical issue: there are no federal rules defining how a driver gets out of that lockout. No required appeal process. No mandated reset timeline. No human review. Drivers are placed into what critics now call ‘kill switch jail,’ with no clear exit. This is not targeted enforcement. It applies to every driver, every time, regardless of driving history.”

“Advanced impaired driving prevention technology” (in the words of the original mandate) seems unlikely to work as advertised. Instead of saving perhaps 10,000 lives annually, it will merely make already too-expensive vehicles even more expensive as reluctant manufacturers pass these costs on to consumers.

From a privacy standpoint, it will create a massive public-private database of biometric data that will be the envy of government agents and hackers alike. In doing so, it will permanently end one of the few remaining bastions of American personal freedom, and one that is already under serious threat – the privacy we enjoy behind the wheel.


Your “Private” Zoom Call Could Easily Become a Public Webinar

3/30/2026

 
404 Media just uncovered something that should unsettle anyone who uses Zoom: An AI-powered site called WebinarTV is using third-party apps to access and record online meetings, then repackaging meeting presenters as “experts” in public-facing webinars.
 
If they're lucky, these unwitting experts – who gain nothing from their newfound notoriety and, indeed, rightly thought they were mere participants in an invited Zoom call – might receive an AI-generated email from an AI-generated agent at AI-based WebinarTV announcing the surprise “rebranding” of their Zoom meeting participation, and perhaps providing the means to opt out. All of this, of course, is after the fact, so it's a privacy-last approach (not to mention that such notifications are probably being routed to spam).
 
For privacy-first users, it's yet another lesson in the ever-growing lecture titled “Pay Attention to Those Settings!” To wit, when informed by 404, Zoom did a review and found that WebinarTV is accessing meeting links that have been publicly shared. That’s the key weakness: Various browser extensions and other digital tools make it possible to record and edit such publicly accessible meetings – even if the meetings themselves aren't being recorded.
 
Using third-party tools (and perhaps its own), WebinarTV is capturing a meeting's audio and video in real time. “Third-party screen recording” is how a Zoom spokesperson described it to 404, while also suggesting that the company was technically powerless to stop it.
 
So it’s up to us for the time being. Consider taking the following steps to avoid getting “webinarred” (or “Zoomjacked”?):
 
  1. Review the privacy settings for your Zoom meetings and be as strict as your situation allows. Yes, it’s mildly inconvenient, but the price of convenience in the digital age is rarely worth it.

  2. Publicly posting meeting links (or sharing those links indiscriminately) is asking for trouble. You can still invite a large number of people to your Zoom call, but work with your tech advisors to make joining the soiree a matter of jumping through some simple, common-sense privacy hoops (like manually admitting attendees – a perfect job for work/study interns, by the way).

  3. Require meeting attendees to register – and why wouldn’t you want to do that, anyway?

  4. Display a watermark to indicate copyrighted or private content. Tell WebinarTV and its ilk: “We’re on to you and we’re not going to make this easy.”
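For teams that schedule meetings programmatically, the steps above can also be enforced in code rather than clicked through one meeting at a time. The sketch below builds a settings payload in the shape used by Zoom's Meetings API; the field names (`waiting_room`, `approval_type`, `watermark`, `meeting_authentication`) reflect our reading of that API and should be verified against Zoom's current developer documentation before use.

```python
# Hypothetical sketch: build the "settings" body for a Zoom
# "update meeting" request (PATCH /meetings/{meetingId}).
# Field names are our best understanding of Zoom's Meetings API --
# confirm against the current API reference before relying on them.

def hardened_meeting_settings() -> dict:
    """Return a meeting-settings payload that tightens privacy."""
    return {
        "settings": {
            "waiting_room": True,            # manually admit attendees
            "approval_type": 1,              # registration required; host approves manually
            "watermark": True,               # overlay a watermark on shared content
            "meeting_authentication": True,  # only signed-in users may join
        }
    }

# A real request would look something like this (OAuth token required):
#   requests.patch(f"https://api.zoom.us/v2/meetings/{meeting_id}",
#                  json=hardened_meeting_settings(),
#                  headers={"Authorization": f"Bearer {token}"})
```

The payload-building step is kept separate from the network call so the strictest settings live in one auditable place, whatever HTTP client your organization uses.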
 
The CEO of WebinarTV told 404 that because the meeting links were publicly accessible, attendees shouldn't have any expectation of privacy. Hence, his company is justified in its actions and isn't guilty of any violations.

Our recommendation? See items 1 through 4 above and don't give WebinarTV the satisfaction. In the meantime, and at this rate, caveat emptor.
 
If you want a deeper dive into WebinarTV’s shenanigans, CyberAlberta details the steps.

How to Hide Your Heartbeat

3/24/2026

 
Researchers at Rice University have worked out how to camouflage your heartbeats from unwanted surveillance with “biometric decoys.” Wait, what? Excuse me, you ask, why might I soon want to camouflage my heartbeat?

Remote heart rate monitoring is just one of many threats to privacy emerging from the mushrooming field of biometric tracking. This common, everyday technology ranges from radar-based imaging used for facial authentication to wearables that monitor signals like heart rate variability, respiration, temperature, steps, calories ingested, and the quality of your sleep cycles. Biometric tracking is designed to make everyday life safer and easier, telling you how much of your last night was spent in deep, light, and REM sleep, or whether your heartbeat is showing signs of arrhythmia.

In today’s world, however, no good data feed goes unexploited. Off-the-shelf devices such as millimeter-wave radars can be used to eavesdrop on phone conversations and monitor daily movement patterns. They can also be used to monitor subtler signals like breathing and heart rate to gauge your stress, activity, or emotional state.
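To see why a decoy can defeat this kind of remote sensing, consider a toy simulation – our own illustration, not the Rice team's actual technique. Superimpose a stronger artificial pulse on the real one, and a naive eavesdropper that scores spectral power at candidate heart rates will lock onto the decoy frequency instead of the true 60 bpm:

```python
import math

# Toy "biometric decoy" illustration (NOT the Rice method):
# a louder artificial pulse makes a naive frequency estimator
# report the decoy rate rather than the true heart rate.

FS = 50.0          # samples per second
DURATION = 10.0    # seconds of observation
TRUE_HZ = 1.0      # true heart rate: 60 bpm
DECOY_HZ = 1.5     # decoy rate: 90 bpm

def sample(t: float, decoy: bool) -> float:
    """Simulated chest-motion signal at time t, with optional decoy."""
    s = 1.0 * math.sin(2 * math.pi * TRUE_HZ * t)        # real pulse
    if decoy:
        s += 3.0 * math.sin(2 * math.pi * DECOY_HZ * t)  # louder decoy
    return s

def power_at(freq: float, decoy: bool) -> float:
    """Spectral power of the signal at one candidate frequency."""
    n = int(FS * DURATION)
    re = im = 0.0
    for k in range(n):
        t = k / FS
        x = sample(t, decoy)
        re += x * math.cos(2 * math.pi * freq * t)
        im += x * math.sin(2 * math.pi * freq * t)
    return re * re + im * im

# Without the decoy, the true 60 bpm peak dominates;
# with the decoy on, the 90 bpm decoy drowns it out.
assert power_at(TRUE_HZ, decoy=False) > power_at(DECOY_HZ, decoy=False)
assert power_at(DECOY_HZ, decoy=True) > power_at(TRUE_HZ, decoy=True)
```

The point of the sketch is that the eavesdropper's estimate depends on which periodic component carries the most energy – so an injected signal, louder than the real one, controls what the snoop "sees."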

“Sensing technologies are becoming higher resolution and more pervasive, and concerns around what that means for privacy should be taken seriously,” said Edward Knightly, the senior researcher on the study. “It is important to explore potential vulnerabilities and think about how we might address them.”

Despite the benefits of biometric monitoring, as with almost all new technologies it comes with a privacy downside. Without policy or legal guardrails, employers might soon monitor your heart rate as soon as you log into your work computer. Or imagine how a negotiator might exploit the knowledge that the person on the other side of the table had a terrible night’s sleep.

The complete study was published in the journal Computer Communications via ScienceDirect. And none too soon, given that the market for biometric systems (and their highly desirable data) is expected to roughly double between now and 2030. So it is not too early to worry about such things – as technology can change in a heartbeat.


New York City Debates Biometric Ban in Businesses – “You Can Cancel a Credit Card But You Cannot Cancel Your Face”

3/23/2026

 
After the end of the pandemic, retail theft became rampant in New York City, as it did in San Francisco, Los Angeles, and elsewhere.

Retail theft has evolved into a multibillion-dollar industry for highly organized criminal gangs. Last year, Queens District Attorney Melinda Katz charged a theft ring with hitting Home Depot outlets up to four times a day, only taking breaks from larceny for team lunches. New York Gov. Kathy Hochul said that the state, after toughening laws and putting money behind enforcement, had driven down retail theft crimes in New York City and the state with double-digit reductions.

Yet retail theft continues to eat away at the profits of stores, from big chains to mom-and-pop shops. It is understandable that businesses would turn to biometric identifiers to spot serial offenders and block them before they can enter a store.

But there is a cost to such surveillance – one that we all pay.

“Many of us know the feeling of discovering our credit card information has been stolen,” said New York Councilmember Shahana Hanif. “It’s invasive and frightening, but you can cancel a credit card and get a new one. You cannot cancel your face. You cannot cancel your iris.”

Hanif is sponsoring legislation that would prohibit biometric identifying technology in “public accommodation” spaces such as concerts and grocery stores. (Hat tip to Liam Quigley of Gothamist.)

The city already requires stores to post notice to customers that they collect biometric data. Is this a simple case of caveat emptor? Or is the better question: should we give up our privacy just to buy groceries?

There is more at stake than just what store managers see. It is what happens to this biometric data after it is collected. Hanif’s legislation would stop businesses from selling, leasing, or trading biometric data for profit. It would also require written consent before a customer’s data can be shared, including in stores where biometrics are accepted for payment.

At the very least, protecting our biometric data – and blocking its sale to other businesses, as well as preventing it from being sold or given to government agencies – would be a reasonable guardrail for New York City and other municipalities to adopt.


New Poll Shows Americans Oppose Reauthorization of Section 702 Without Reform Amendments

3/19/2026

 

Majority Oppose Forced AI Surveillance

Talk of a “clean reauthorization” of Section 702 of the Foreign Intelligence Surveillance Act (FISA) is growing on Capitol Hill. But as Washington starts to dream of an easy vote that includes no surveillance reforms, the American people are not having it.
 
FISA Section 702 is an authority enacted by Congress to enable the surveillance of foreign threats on foreign soil, but it has often been used by the FBI in recent years to spy on the communications of millions of Americans. Included in the reauthorization debate is concern over the way in which a dozen federal agencies – ranging from the FBI to the IRS – are purchasing Americans’ personal information from shady third-party data brokers.
 
A new poll commissioned by Demand Progress shows that Americans are paying attention to this threat to privacy – and they don’t like what they see.

  • Only 12 percent of voters, including 17 percent of Republicans and 8 percent of Independents, believe Congress should renew surveillance and monitoring activities without reforms.
 
  • Some 37 percent of voters, including a plurality of 41 percent of Republicans, think FISA should only be reauthorized if it contains restrictions on government purchases of our personal data.
 
  • Another 37 percent don’t want the program reauthorized at all.

The poll also shows that the recent dust-up between the Pentagon and AI company Anthropic is focusing the public’s attention on the potential for the government to use artificial intelligence to drive the surveillance of the American people to unprecedented levels.
 
This is especially true as the administration works to dismantle long-standing information silos and remove safeguards that once limited the sharing of Americans’ private data between agencies – from the Department of Homeland Security to the FBI and the IRS.
 
AI surveillance, with data collected under Section 702, could allow government employees across the federal bureaucracy to run warrantless searches of Americans’ private communications. Combined with the vast amounts of Americans’ personal data that federal agencies purchase from third-party data brokers, AI-run surveillance programs will have truly frightening reach.
 
The poll also shows that Americans are watching the AI debate and that a majority see it as a threat to privacy.
  • Sixty-six percent of voters – including 76 percent of Independents and 52 percent of Republicans – believe the government should not be able to force AI companies to grant unrestricted access to analyze Americans’ personal data.

Before Congress embraces a comfortable conformity on a “clean” reauthorization of Section 702 or any other surveillance authority, Members would do well to pay attention to the rising alarm over surveillance among their constituents.

PPSA to Supreme Court: Geofence Warrants Threaten Religious Liberty

3/13/2026

 
​The Project for Privacy & Surveillance Accountability has filed an amicus brief in the U.S. Supreme Court case United States v. Chatrie, warning that geofence warrants threaten not only Americans’ Fourth Amendment rights, but also our religious liberty and freedom of association.

PPSA previously urged the Court to hear this case and rein in geofence warrants as modern digital general warrants. These warrants compel technology companies to turn over location data for every device within a defined geographic area. Investigators then sift through the movements of potentially hundreds – sometimes thousands – of people in hopes of identifying a suspect.

Now that the Court has granted review, PPSA explains in its amicus brief that this dragnet surveillance exposes something far more sensitive than physical location. Location data can reveal belief, identity, and association.

“Geofence warrants also threaten core First Amendment freedoms by enabling surreptitious mass intrusions into sensitive spaces like places of worship,” the PPSA brief explains. 

A geofence warrant could easily capture the identities of everyone attending a church service, synagogue gathering, mosque prayer, or religious conference. In practice, that means the government could obtain what amounts to a list of worshippers.

The facts of the case illustrate the danger. The geofence search used by investigators in Chatrie encompassed Journey Christian Church in Midlothian, Virginia, capturing the location data of anyone present at the church at that time who carried a smartphone with Google location services enabled. 

That possibility raises profound First Amendment concerns. Location data can expose deeply personal religious information, including “faith affiliation; sacrament participation; belief shifts via changing attendance or visiting a new church; or involvement in recovery ministries.” 

The Supreme Court has long recognized that government surveillance of association can chill constitutional rights. Americans who believe their religious participation may be quietly recorded by the government may think twice before attending services or participating in religious life.

That chilling effect is precisely what the First Amendment was designed to prevent.

PPSA’s brief urges the Court to recognize that geofence warrants do more than raise Fourth Amendment questions about search and seizure. They also threaten the First Amendment freedoms that protect Americans’ ability to worship, gather, and associate without government monitoring.

After all, in the digital age, tracking where people go can reveal who they are, what they believe, and whom they stand beside.
The Supreme Court now has the opportunity to make clear that the Constitution protects those freedoms from the reach of dragnet surveillance.

How Hackers Can Use Tire Sensors to Track Your Driving Habits

3/9/2026

 
The Internet of Things (IoT) strikes again. Most modern vehicles possess a tire pressure monitoring system (TPMS), a legal requirement since 2007. A recent study shows that it is possible to capture the unencrypted radio signals sent by TPMS sensors. Each sensor broadcasts a unique ID number, which makes tracking specific vehicles child’s play for a hacker.

Think about this for a moment – the average car or truck is broadcasting four such unique IDs (one per tire), with no need for license plate readers with high-tech cameras and AI software. That, says the IMDEA Networks Institute, “makes TPMS-based tracking cheaper, harder to detect, and more difficult to avoid than camera-based surveillance, and therefore a stronger privacy threat.”

A motivated hacker need only place a series of low-cost receivers near the appropriate parking lots and roads. Within weeks:

“These tire sensor signals can be used to follow vehicles and learn their movement patterns. This means a network of inexpensive wireless receivers could quietly monitor the patterns of cars in real-world environments. Such information could reveal daily routines, such as work arrival times or travel habits.”

It gets worse: TPMS signals can even be captured from moving vehicles. Some sensors reveal actual tire pressure values (as opposed to merely “Low”), which could, for example, be used to determine if a vehicle is carrying a heavy payload or to distinguish vehicles by type. Pretty soon we’re in Mission: Impossible territory.
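The tracking mechanism described above is simple enough to illustrate. Here is a toy Python sketch – with entirely hypothetical sensor IDs, locations, and timestamps – showing how logs from a few cheap receivers, joined on a tire sensor’s static ID, can reconstruct a vehicle’s routine:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sightings collected by receivers: (sensor_id, location, timestamp).
# Because each TPMS sensor broadcasts a static unique ID, logs from scattered
# receivers can simply be joined on that ID to follow one vehicle.
sightings = [
    ("A1B2C3", "office_lot", "2026-03-02 08:58"),
    ("A1B2C3", "gym",        "2026-03-02 17:40"),
    ("A1B2C3", "office_lot", "2026-03-03 09:01"),
    ("D4E5F6", "grocery",    "2026-03-02 12:15"),
    ("A1B2C3", "grocery",    "2026-03-03 18:05"),
]

def movement_profile(logs):
    """Group sightings by sensor ID and sort each group chronologically,
    yielding the movement pattern of every tracked vehicle."""
    profile = defaultdict(list)
    for sensor_id, place, ts in logs:
        profile[sensor_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), place))
    return {sid: [p for _, p in sorted(visits)] for sid, visits in profile.items()}

print(movement_profile(sightings)["A1B2C3"])
# ['office_lot', 'gym', 'office_lot', 'grocery']
```

The repeated morning appearances at one lot reveal a work routine – exactly the kind of pattern-of-life inference the researchers warn about, achieved with nothing more than a join and a sort.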

As is so often the case with the IoT, safety was the motivation behind the development of tire pressure monitoring systems in the first place. Because privacy was never a consideration, privacy-by-design protections were missing from the start. The result is a familiar IoT pattern: unencrypted signals and wide-open vulnerabilities becoming the rule rather than the exception. When it comes to privacy issues, safety never seems to stay in its lane.

“Our findings show the need for manufacturers and regulators to improve protection in future vehicle sensor systems,” notes researcher Yago Lizarribar. If nothing changes, yet another safety tool will be perverted into an instrument of general population surveillance.

But change does not seem to be an industry priority. As Aaron Pruner of CNET points out, we’ve had sixteen years to address this vulnerability. A study by Rutgers University and the University of South Carolina identified the problem in 2010, a mere three years after TPMS was mandated.
Which means that if TPMS sensors were kids, they’d be old enough by now to start driving – and be tracked every mile of the way.

What the Anthropic/OpenAI Story Is Really About

3/8/2026

 
The media reported on the drama of the Pentagon’s AI contracts as a horse race: Anthropic tried to limit what the War Department could do with the company’s Claude AI product. The administration subsequently rescinded all government contracts with the company. OpenAI offered its products as the alternative and won the day.

But beneath this drama lies a deeper and more dangerous reality: In the absence of meaningful guardrails, the AI tech of any company can be used for surveillance and – if combined with data collected under Section 702 of the Foreign Intelligence Surveillance Act (FISA) – could allow government employees across the federal bureaucracy to run searches on Americans’ private communications.

Such AI-powered surveillance could extend far beyond the Department of War’s use cases and even the Justice Department’s FBI investigations. Government AI-enabled mass surveillance of the domestic population would:

  • Not be subject to any oversight authority – constitutional or statutory
 
  • Not be encumbered by recent reforms like the 2024 Reforming Intelligence and Securing America Act (RISAA)
 
  • Be supercharged by the dismantling of long-standing information silos and the removal of safeguards that once limited the sharing of Americans’ private data between agencies – from the Department of Homeland Security to the IRS
 
  • Be conducted without a warrant – without any court supervision of the government’s invasion of your privacy

The danger of AI surveillance in a government that shares data between agencies should prompt Congress to strengthen Fourth Amendment privacy protections. With such a vast datascape available to the world's most powerful government – where many existing restrictions have already been weakened – we otherwise risk the irrevocable loss of personal privacy and the rise of a permanent surveillance state.

We need to come to terms with the fact that AI tech makes rummaging through our private lives and personal histories easier and faster than anyone could have imagined even a few years ago. Americans’ communications could become permanently accessible to the prying eyes of government agents in almost any agency with a whim (or a political directive) to pursue.

It wasn’t supposed to be this way. AI was supposed to have guardrails, as was Section 702, which Congress enacted to enable the surveillance of foreign threats on foreign soil but which has instead been used by the government to search the private communications of Americans without a warrant.

RISAA was a noble attempt to rein in the misuse of Section 702 as a domestic spy tool. Its reforms included oversight and restrictions on FBI searches involving people inside the United States. It implemented rules for queries involving high-profile groups or individuals. It established training and accountability measures, while enhancing oversight of the two secret courts FISA created.
These were important reforms, but they were weakened by last-minute changes to the bill. When Section 702 comes up for renewal next month – this time in the context of an AI juggernaut – it may well be our last chance to protect our freedoms while preserving national security.

Wisconsin’s Supreme Court Sidesteps the Need for a Warrant for Data in the Cloud

3/5/2026

 
​The Wisconsin Supreme Court recently upheld the conviction of Andreas W. Rauch Sharak for possession of child pornography. This crime is contemptible – and we support every lawful means to apprehend and convict the vile people who traffic in such material.

But it needs to be pointed out that in this case, the court and prosecutors sidestepped the need for a probable cause warrant, as required by the Fourth Amendment. In so doing, they inadvertently widened a loophole in the treatment of data held by third parties, from Google to Apple, from servers to the cloud. As a result, the privacy of law-abiding Americans and the security of our most personal and intimate data are now more vulnerable than ever.

The Case

Rauch Sharak’s conviction involves Google, which routinely flags files containing potential child sexual abuse material (CSAM) for the National Center for Missing & Exploited Children. If that non-profit organization deems the files to contain child pornography, it forwards them to law enforcement. In this case, the files were referred to the Jefferson County Sheriff’s Office in Wisconsin, where a detective viewed them without a warrant.

The detective then obtained a search warrant to search Rauch Sharak’s home and devices. This resulted in Rauch Sharak being charged with 15 counts of possession of child pornography.

Wisconsin’s Ruling

The state’s highest court upheld a circuit court’s conviction on the grounds that Google had not acted as “an instrument or agent of the government.” This distinction matters, because if Google were deemed a government actor, its searches would necessarily be subject to the Fourth Amendment’s requirement that law enforcement obtain a warrant based on individualized probable cause before conducting a search.

Nor did the court believe that the detective needed to obtain a warrant to view the forwarded files.

“In this case, we determine that law enforcement did not need a warrant before opening and viewing the files in the CyberTip because law enforcement’s search falls under the private search doctrine,” the Wisconsin Supreme Court held. “Under that doctrine, the government does not conduct a ‘search’ under the Fourth Amendment when it repeats a search by a private actor and stays within the scope of the private search.”

The court also stated:

“Seemingly without exception, federal circuit courts and other state supreme courts have held that ESPs [electronic service providers] like Google are private actors when searching for CSAM on their platforms.”

We commend Google for its “zero tolerance” policy for CSAM in its terms of service. But when the government gets involved, so should the Fourth Amendment.

PPSA’s Brief

In our amicus brief before the Wisconsin Supreme Court, PPSA took issue with such “overbroad interpretations of the third-party doctrine.”

The court overlooked a major exception to the private-actor theory – Carpenter v. United States (2018) – in which the U.S. Supreme Court held that obtaining a suspect’s historical cell-site data constituted a search under the Fourth Amendment.

We told the court that “Carpenter recognized that the Fourth Amendment protects privacy interests that would have been recognized as reasonable at the time of the Founding, notwithstanding advances in technology that make encroachments upon such interests easier.”

As with the postal systems of early America, the Founders would have readily understood that individuals maintain an expectation of privacy when entrusting personal communications or materials to third parties for storage or delivery. Today, however, it is nearly impossible to store private information without relying on third-party providers like Google, Apple, Amazon, and others. For users, a password-protected Google Photos account establishes a subjective expectation of privacy.

The evidence also clearly shows that when Google conducts automated searches, it may function less like a private actor and more like a deputized investigator. At least one court has applied state law holding that a third party possessing CSAM-detection software may face liability if it fails to deploy it. Google – a heavily regulated company operating under significant legal pressure – thus begins to resemble a government partner, raising serious Fourth Amendment concerns.

In the wake of this ruling, the government’s ability to compel private actors like Google to perform warrantless searches will only grow. Powers used today to catch CSAM crimes could be used tomorrow to open up our emails, texts, personal photos, and online searches to the government for any reason it chooses. According to the Wisconsin Supreme Court’s interpretation of the private search doctrine, if Google viewed your data, then the government can too. That means the Fourth Amendment becomes a dead letter for any data entrusted to a third party, i.e., nearly all data in our digital age.

As lower courts continue to chip away at Carpenter, the Supreme Court has an opportunity in United States v. Chatrie to revisit these issues for the first time since Carpenter. We hope the Court decides to reaffirm the clear, bright constitutional line defining when digital searches conducted through private intermediaries become government action – and when Americans’ most personal data must be protected from unreasonable searches and seizures.

Meta’s AI Training Includes Smart Glasses Footage Capturing Users Undressing, Having Sex, Sitting on the Toilet

3/5/2026

 
​There is a point early in a marriage when spouses get comfortable and uninhibited around each other in the bedroom and even the bathroom. That’s because there is no third set of eyes in the room… unless one of them just happens to be wearing a pair of smart glasses.

We recently covered the perils and pitfalls of Meta adding facial recognition software to its Ray-Ban smart glasses. Now Victor Tangermann of Futurism has uncovered a genuine horror story about private images captured by these glasses – millions of pairs of which are already in circulation.

Meta, in order to refine its AI imaging, sends footage from consumers’ glasses to contractors in Kenya and other countries, where workers label it for training. This tedious process is necessary to enable AI to learn to recognize everyday objects.

At that point, almost anything recorded by Meta glasses is liable to be sent abroad for data annotation.

“I saw a video, where a man puts the glasses on the bedside table and leaves the room,” one data annotator told two newspapers in Sweden. “Shortly afterwards his wife comes in and changes her clothes.”

Another data annotator said: “In some videos you see someone going to the toilet, or getting undressed.”

Tangermann reports that other footage included “imagery of people’s bank cards, users watching porn, or even filming entire ‘sex scenes.’”

Meta customers have no recourse. Data protection lawyer Kleanthi Sardeli told the Swedish press, “Once the material has been fed into the models, the user in practice loses control over how it is used.”
Of course, as the Internet of Things weaves together Ring cameras, cloud-based voice-activated AI assistants, baby monitors, and robot vacuums, we are all subject to being surreptitiously recorded at, well, inconvenient moments. But none of them have the reach into personal privacy that happens when one spouse is wearing a pair of smart glasses and the other announces that the toilet paper holder is empty.

Will Meta Sneak Facial Recognition Smartglasses Past “Distracted” Privacy Advocates?

2/24/2026

 
​“Great to see you … Bob … How’s … Maggie ... and those three wonderful … dogs of yours.”

You have to admit, it will be a boon to politicians. Adding facial recognition software to smartglasses will enable them and anyone at a cocktail party to dispense with all those tiresome strategies for remembering names and familiar facts about the person in front of them. According to a 2025 internal company memo obtained by The New York Times, Meta plans to quietly equip its line of smartglasses with facial recognition technology dubbed “Name Tag.”

Facial recognition technology is one of the most potent privacy-destroying tools. Meta floated and dropped the idea for its social media platforms five years ago. Now it is back, this time as a wearable in Meta’s Ray-Ban and Oakley smartglasses. The strategy behind this policy reversal is breathtakingly cynical.

The Meta memo held that the new feature’s debut would go largely unnoticed if it were launched “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

This presumably is a nod to the looming FISA Section 702 debate in April, as well as a torrent of other privacy-destructive technologies, like the unfolding national network of Flock cameras.

So plan ahead. You might be at lunch several years from now with a bunch of business prospects wearing Ray-Bans or Oakleys, finding them unusually quiet from time to time. That’s because they will be reading up on you in real time with the help of Meta’s AI assistant.

Meta is weighing identification only of people who are on its platforms, not strangers you pass on the street. But we are skeptical. Even without facial recognition tech installed, the company’s smartglasses can be hacked and made to identify strangers. And let’s not forget that Meta smartglasses already offer livestreaming and the ability to post directly to Instagram.
But try to think of the bright side: You’ll never have to introduce yourself again.

If Social Media Is a Drug, Can Speech Be Medically Regulated?

2/24/2026

 
Mark Zuckerberg in a suit, possibly attending a meeting or conference, representing his role as a leader in the technology and social media industry, California, U.S., October 09, 2025. PHOTO CREDIT: FotoField
​Anonymity online can be a mask that allows people to say ugly, hateful or untrue things without taking responsibility for them. But it can also be a shield that protects women hiding from abusers, whistleblowers one step ahead of their pursuers, journalists reaching out to confidential sources about wrongdoing, and consumers searching online for answers to questions about their health that they’d rather not have anyone know about.

This is why the current effort by the Immigration and Customs Enforcement (ICE) agency to use emergency subpoenas to force Big Tech companies to reveal the identities of Americans who make critical posts about ICE is so dangerous. If this practice sticks, it will likely migrate to other federal agencies and erode anonymity online.

But the shedding of anonymous speech might come by a different route – not from executive-branch meddling or legislative mistakes, but from lawsuits claiming harms from child internet “addiction.”

Dan Frieth of the digital anti-censorship advocacy group Reclaim The Net listened to five hours of Meta CEO Mark Zuckerberg’s testimony in a Los Angeles civil case and distilled it into a jarring and important warning: the age of anonymity could be coming to an end at the hands of the trial bar.

Zuckerberg testified in one of 1,600 lawsuits over internet addiction. In this case, a woman claimed that Meta’s Instagram addicted her at age nine, plunging her into a hell of anxiety, body dysmorphia, and suicidal thoughts.

Frieth notes that the science of internet addiction is “genuinely disputed.” He writes:

“None of this means the harms alleged are fabricated. It means the word ‘addiction’ is doing heavy rhetorical and legal work, and the policy consequences are far beyond anything a jury in Los Angeles will decide.

“‘Addiction’ is how you get a public health emergency. A public health emergency is how you get emergency powers and make it easier for people to overlook constitutional protections. Emergency powers applied to the internet mean mandatory access controls. And mandatory access controls on the internet mean the end of anonymous and pseudonymous speech.

“When social media is classified as a drug, access to it becomes a medical and regulatory matter” justifying “identity verification, access controls, and a surveillance architecture that follows users across every platform and device.”

Frieth notes that a win for the plaintiff in this case would strip away current legal protections for platform design decisions. This danger is not theoretical. Frieth reports that Zuckerberg repeatedly suggested that any age verification mandate – and thus identification – be shifted from platforms to the owners of operating systems. Zuckerberg would thus toss his liability hot potato from Instagram to Apple and Google.

“This is more than age verification,” Frieth concludes. “It is a national digital ID layer baked into the two operating systems that run the majority of the world’s smartphones.”

There are a lot of competing interests in this case – the safety of children, the nature of the internet, and the value of free speech. Juries don’t have to balance these equities. They can just side with the plaintiff and inadvertently make policy for U.S. tech – and by extension, the world.

Any new approach to child safety should not require adults to give up speech rights recognized in this country since Alexander Hamilton, James Madison, and John Jay wrote collectively as the pseudonymous “Publius” in The Federalist Papers.

ICE Wants to Spy on Americans’ Political Opinions – Soon Other Agencies Will

2/22/2026

 
​ICE has become enough of a household word that, like NASA, it’s no longer necessary to spell out its acronym. ICE’s aggressive enforcement of immigration law, now the nation’s hottest political flashpoint, is dividing Americans like nothing else in recent memory. Regardless of where you stand on ICE and illegal immigration, we should all agree that ICE’s massive expansion into domestic surveillance is a grave concern for anyone who values the Fourth Amendment and privacy.

When a protester recording video on her phone wants to know why a masked agent is taking down her information and he replies – “Because we have a nice little database and now you’re considered a domestic terrorist!” – Sheera Frenkel of The New York Times rightly suggests that we’ve entered uncharted territory. Political dissent is now being treated as domestic intelligence.

The masked agent was not kidding. The Department of Homeland Security (DHS) is launching a pressure campaign to get Big Tech to identify persons who post content deemed “critical” of ICE. Rather than traditional investigative work, the government appears to be leaning on something akin to an abuse of process, filing hundreds – if not thousands – of subpoenas intended to compel tech giants to cough up user data.

This data grab of lawful speech is unprecedented. It amounts to using an exceptional legal maneuver – an emergency procedure meant for crimes like child trafficking – to collect constitutionally protected political expression. And let’s be clear about the constitutional claim: The contents of our “friends-only” digital posts are modern “papers and effects,” private possessions the Fourth Amendment was designed to shield from generalized searches.

If tech companies cave (and, as highly regulated companies, they likely will), and ICE plugs the data of protesters into its increasingly Orwellian surveillance architecture, then the genie will already be out of the bottle. Once such a capability is developed, it rarely remains confined to a single mission or a single agency. Surveillance tools migrate. Authorities expand. Bureaucracies replicate what works.

These tools – algorithms housed in digital fortresses – will almost certainly be shared with the FBI, IRS, FTC, SEC, and a dozen other agencies eager for their piece of the silicon pie. And they won’t just target Americans who are anti-ICE. Depending on the political winds of the day, databases built to track one form of dissent can just as easily be turned against pro-choicers, pro-lifers, critics of the administration in power, progressives, or MAGA supporters.

This looks less like law enforcement and more like the construction of a permanent political-intelligence system – the start of a security-state apparatus on a scale never before seen, primarily and perversely used to surveil and catalog the political beliefs of Americans. Congress should examine this emerging capability and look to install guardrails when it debates surveillance policy in March and April.

Federal Court Rejects Attorney-Client Privilege for AI Chatbot

2/16/2026

 
The confidentiality of attorney-client conversations may be a cornerstone of American law, but it has some cracks.

One defendant, Bradley Heppner, on trial for securities fraud and other crimes related to his role as the former CEO of Beneficient, learned the hard way that this privilege does not extend to legal questions put to AI chatbots and virtual assistants.

Federal Judge Jed Rakoff of the Southern District of New York on Tuesday ruled that 31 documents that Heppner generated about his case with Anthropic’s Claude – and shared with his defense attorneys – are not protected by attorney-client privilege.

In an analysis by Moish Peltz and Elizabeth E. Schlissel of the Falcon, Rappaport & Berkman law firm, the reasons for Judge Rakoff’s decision include:

  • The privilege extends to communications only between lawyer and client, made in confidence, not conversations with third parties.
 
  • The terms of service of AI chat tools, including Claude, tell users not to rely on them for legal advice and disclaim attorney-client privilege. This exposure extends not only to documents generated for Heppner but also to any prompts he might have posed to Claude.
 
  • Government prosecutors analogized to the court that “if the defendant had instead conducted Google searches or checked out certain books from the library to assist with his legal case, the underlying searches or library records would not be protected from disclosure simply because the defendant later discussed what he learned with his attorney.”

These are persuasive points about this particular case. Still, the ruling underscores a deeper concern: the ready access the FBI and the judicial process have to all of our financial, legal, and highly personal data being held by third parties.

  • Last year, we noted the gobsmacking ruling by a magistrate judge in New York in a copyright case requiring OpenAI to preserve billions of ChatGPT user queries. The chatbot’s maker had promised its 800 million active users that all their questions and the chatbot’s answers – many of them very personal – were confidential. All it took was one judge in one civil case to undo that promise by requiring OpenAI to permanently store the queries of one-tenth of the human population.

This order even swept in queries that customers believed they had deleted.

As we noted at the time, “virtually anything asked – no matter how personal – is a permanent legal record that lawyers in a nasty divorce or commercial dispute or a government investigation could pry open with the right legal tools.”

Privacy attorney Jay Edelson wrote in The Hill that this is “a mass privacy violation,” asking: “Could Apple preserve every photo taken with an iPhone over one copyright lawsuit? Could Google save a log of every American’s searches over a single business dispute?”

In a similar way, does the Heppner precedent risk exposing the private reasoning of anyone who has ever asked a chatbot a legal question?

These questions point to the urgent need for guardrails on access to third-party data. At a minimum, consumers deserve clearer warnings, tighter limits on data retention, and stronger legal standards before personal queries are swept into criminal trials or litigation.

A more futuristic concern is the likelihood that AI will one day sit at the counsel’s table. Of course, an attorney will be able to consult his AI under the privilege. But as AI agents specializing in the law earn a credible claim to being part of a legal team, will attorney-client privilege evolve to include client conversations with that AI? Or will consultations between the client and the team AI agent remain a discoverable record?

In the meantime, AI and the cloud should come with their own Miranda warning: Anything you type can and will be used against you in a court of law.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

Watching the Watchers: Amazon’s Ring Super Bowl Commercial Demonstrates “Terrifying” Surveillance

2/10/2026

 
Watch Amazon’s Super Bowl ad and tell us what you see: a heartwarming story of a family reunited with a lost dog, or another element in America’s comprehensive surveillance state.

As the ad shows, Amazon’s free “Search Party” function connects cameras across a whole neighborhood to look out for a lost dog. Amazon’s AI, trained on tens of thousands of dog videos, can recognize different breeds, fur patterns, shapes, and sizes to spot the lost puppy. That is not a bad thing at all.

But many viewers found the ad “terrifying,” not heartwarming, according to Kelly Kazek of al.com. One commenter on X wrote:

“Ring just casually outing themselves as literal spyware that can be accessed by anyone on the network. This is insane.”

Another wrote:

“Amazon owns Ring and they want to use all these devices to make a mesh network for Amazon sidewalk … The American consumer just got a Trojan horse packaged as home security.”

As EFF’s Matthew Guariglia reported last year:

“Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also reintroducing a new feature that would allow police to request live-stream access to people’s home security devices …

“This is a grave threat to civil liberties in the United States. After all, police have used Ring footage to spy on protestors, and obtained footage without a warrant or consent of the user.”

The Search Party AI function greatly amplifies Ring’s surveillance capability. The same default feature of Amazon Ring that can identify Fido can also identify you, where you go, and the people you visit.
​
At the very least, Amazon should announce limits on how this technology can be trained to follow Americans in our daily movements.


How the Puppy Poop Police Threaten Our Privacy

2/8/2026

 
It seems like such a good idea: You lose your dog Ziggy, and you might – but likely won’t – find him by nailing flyers to telephone poles and making social media posts. But with a massive national database of dog photos and a search image function powered by AI, you can save the day.

Another technology for finding individual dogs is “snout recognition,” the canine version of facial recognition. This tech has dubious origins: blacklisted Chinese AI giant Megvii has been developing canine facial-recognition technology (for snouts of all shapes) since 2019.

A more common technology links poop to pups through DNA analysis of dog waste. One innovative company, PooPrints, caters to landlords and HOAs desperate to sniff out dog owners who don’t pick up after their pets. No joke: If you want to live at a swanky condominium along the Hudson in New Jersey, for example, you may be required to have your dog’s DNA swabbed and put on file. (If it can happen in Italy, it can happen here).

But there’s a flip side to these otherwise noble uses of detection/recognition technology – this isn’t really just about our pets. Though well-intentioned, these methodologies can be leveraged as yet another way to bypass our privacy expectations.

At least one published study recounts how canine DNA was used to convict four men of murder. All it took was a tip from a caller and some residual dog poop from the scene found on one of the perpetrators’ shoes. All other evidence was inconclusive, but the DNA analysis put the odds of the sample coming from a dog other than the one at the crime scene at 1 in 1.16 billion.

We’re all for analyzing DNA and snouts to solve such criminal cases, as the Fourth Amendment clearly permits. What’s concerning is the cavalier way in which something as deeply and uniquely ours as DNA – and now that of our pets – can be gathered, stored indefinitely, and misused without permission or legitimate purpose. Just add human and canine DNA to the thousands of other data points already purchased and warrantlessly accessed by federal agencies and stolen by bad actors. It's just one more knot in the ever-tightening surveillance net that surrounds us.

Remember that the next time you wonder whether to pick up Ziggy’s contribution at the dog park.


This App Knows Users’ Masturbation Habits – and Now So Can the World

2/2/2026

 
​One of the unintended consequences of living in the digital age is that everything, sooner or later, becomes quantified as a data point. That now includes – insert “Rated R” warning here – an app user’s masturbation frequency. (Exercising great discipline, we will resist the temptation to make tasteless puns throughout this piece, though they practically write themselves. So, use your imagination.)

Back to the story – addictions of many sorts are as old as humanity. If there’s a silver lining to the otherwise debatable benefits of social media, it may be the proliferation of apps now claiming to offer support for those who seek to overcome their habits. That includes the category of sexual addiction to pornography and masturbation.

404 Media, which originally broke the story, says that an app devoted to helping users defeat their porn addiction is inadvertently sharing related data. This includes how often users look at porn, how they respond, and how it makes them feel when they do. 404 says the story is “a good reminder to think twice before giving any app your personal information.”

The data also includes the users’ age. 404 Media’s reporting suggests that many of the affected users described themselves as minors – as many as 100,000 of the 600,000 whose records proved to be accessible. These vulnerabilities were apparently first reported to the app maker by an independent security researcher in September.

To date, however, the company has not resolved the issue. In fact, its founder has dismissed the allegations as “a bit of a joke,” suggesting the potential for a data leak was faked. For privacy reasons, 404 isn’t naming the app. The root cause of the vulnerability is a long-understood weakness in how apps are configured on Google Firebase, the platform developers use to build apps – one that security researchers can easily reproduce. In other words, it’s no joke.
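
For readers wondering what such a Firebase weakness looks like in practice: the best-known pattern is a Firebase Realtime Database deployed with world-readable security rules. Since 404 Media does not name the app, the snippet below is not its actual configuration – just a sketch of the textbook misconfiguration researchers routinely test for:

```json
{
  "rules": {
    ".read": true,   // insecure: anyone on the internet can read every record
    ".write": true   // insecure: anyone can modify or delete them
  }
}
```

A safer baseline denies access by default and scopes each user’s records to their own authenticated account:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}
```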

The report indicates that, for reasons unknown, Google itself hasn’t fixed the issue. It is even more curious that the app makers and app marketplaces – to whom users entrust their data – haven’t done so, either. All of which means that when it comes to data security, an entry made in confidence can amount to global oversharing.

“The data they can get on what motivates you, what actually makes you take an action – that’s so valuable,” says technology journalist Elaine Burke. “This is [about] so much more than what your browsing habits are and what you’re interested in.” She warns that developers are sold on the notion that humans are “mathematical problems that can be solved with the right metric.”

This story points to a larger fallacy: the belief that defeating age-old personal struggles in the 21st century is as simple as finding an app for that. That impulse leads many to unknowingly risk their most personal data with the tap of a digital button. The promise is self-control. But the price may be a loss of privacy.

This demolition of personal privacy by datapoint is made worse by the regular practice of a dozen federal agencies – ranging from the FBI to the IRS – to purchase Americans’ private digital information from data brokers and review it at will. That is all the more reason for Congress to pass a law that imposes a probable-cause warrant requirement before agencies can inspect Americans’ most private information.
​
In the meantime, practice caveat venditor: seller beware – especially when the product is you.


PPSA Urges Supreme Court to Rein in Geofence Warrants

1/27/2026

 

Chatrie v. United States

The Project for Privacy & Surveillance Accountability is asking the U.S. Supreme Court to consider whether the Fourth Amendment allows law enforcement to use geofence warrants to retroactively track the movements of everyone in a defined area. These so-called “reverse warrants” involve law enforcement’s request for information from technology companies – like Google, Apple, Snapchat, Lyft, or Uber – that allows them to identify potential suspects in a crime.
 
This case began with the 2019 robbery of $200,000 from a credit union in Midlothian, Virginia. Detectives soon hit a dead end in their search for suspects, so they served Google with a geofence warrant demanding certain cellphone data for everyone who passed through a circumscribed area around the credit union.
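
To make concrete what “a circumscribed area” means in practice, here is a minimal sketch of the distance filter a geofence implies. The coordinates, field names, and records below are invented for illustration; only the haversine geometry and the 150-meter figure (the radius reportedly used in the Chatrie warrant) reflect how such requests work.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

CENTER = (37.5066, -77.6522)  # hypothetical coordinates, not from the court record
RADIUS_M = 150.0              # radius reportedly used in the Chatrie warrant

def in_geofence(record):
    """True if a location record falls inside the circular geofence."""
    return haversine_m(CENTER[0], CENTER[1], record["lat"], record["lon"]) <= RADIUS_M

# Every device whose logged coordinates fall inside the circle is swept in,
# regardless of any suspicion attaching to its owner.
records = [
    {"device": "A", "lat": 37.5066, "lon": -77.6522},  # at the center
    {"device": "B", "lat": 37.6000, "lon": -77.6000},  # kilometers away
]
print([r["device"] for r in records if in_geofence(r)])  # ['A']
```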
 
As a result, people suspected of no crime had their personal information examined by police. Targets included residents of a nursing home, diners and wait staff at a Ruby Tuesday restaurant, and guests who had checked into a Hampton Inn. The search led to the arrest and guilty plea of one Okello T. Chatrie, who now seeks to exclude this evidence on constitutional grounds.
 
Federal Judge Mary Hannah Lauck noted that because Google logs cellphone users’ location 240 times a day, technology gives police “an almost unlimited pool from which to seek location data” in a broad area in which everyone has “effectively been tailed.” But the U.S. Court of Appeals for the Fourth Circuit, sitting en banc to review a divided panel decision, held that this geofence warrant did not violate the Fourth Amendment.
 
The U.S. Supreme Court is now set to take up this question. In our brief, we are telling the Court that such dragnet surveillance is fundamentally incompatible with the Fourth Amendment’s core protections. 
 
Geofence Warrants Are “Digital General Warrants”
 
One of the primary abuses that motivated the Founders to create the Fourth Amendment was the use in colonial times of general warrants – broad search authorizations that allowed the King’s agents to rummage through private lives and property without individualized suspicion. Geofence warrants are their modern equivalent.
 
Instead of naming a person or place to be searched based on probable cause, geofence warrants authorize the government to sift through massive location databases to identify people who might be worth investigating.
 
PPSA told the court that these warrants invert the constitutional order – everyone becomes a suspect first, and probable cause, if it appears at all, comes afterward.
 
The Supreme Court’s Carpenter Decision Was Not a Narrow Exception
 
Lower courts have struggled to apply the Supreme Court’s landmark decision in Carpenter v. United States (2018), which held that people have a reasonable expectation of privacy in long-term cellphone location records, even when those records are held by a third party. In Chatrie, the Fourth Circuit treated Carpenter as a narrow exception limited to long-term tracking of a single suspect. PPSA demonstrates that this take misreads the case entirely. 
 
Carpenter reaffirmed a broader principle: Fourth Amendment protections must preserve the level of privacy that existed at the nation’s founding, even as technology evolves. The fact that data is held by a third party – or that the government demands only a “slice” of a much larger tracking database – does not erase reasonable expectations of privacy. A two-hour window into a comprehensive location history can still reveal intensely private information – where someone worships, seeks medical care, attends political meetings, or simply lives their daily life.
 
PPSA is telling the Court that the privacy concerns raised by geofence warrants are even more severe than those in Carpenter, because they involve mass surveillance of unknown and unsuspected individuals. This is not targeted policing. It is suspicionless data mining.
 
Your Privacy Rights Depend on Where You Live
 
Courts across the country are sharply divided on this issue. The Fourth and Eleventh Circuits have suggested that geofence searches may not even trigger the Fourth Amendment. By contrast, the Fifth Circuit has correctly recognized that geofence warrants are unconstitutional in nearly all circumstances because they lack particularity and probable cause.
 
That split leaves Americans’ privacy rights dependent on geography, and in the case of Texas, whether state or federal proceedings are involved. PPSA urges the Supreme Court to step in now, before this powerful surveillance tool becomes permanently normalized.
 
The Constitution Must Keep Up with Technology
 
As PPSA warns, geofence warrants are only the beginning. We told the High Court:
 
“Fourth Amendment protections are not categorically lost when a person shares or stores his data with a third party while maintaining reasonable expectations and assurances of privacy. The Court should …  prevent a contrary understanding of Carpenter from continuing to erode Americans’ privacy – especially now, as third-party storage becomes more ubiquitous and artificial intelligence becomes powerful enough to piece together intimate information from seemingly innocuous details about a target’s life.”
 
The data that this practice puts at risk is not limited to location. The government has used other forms of these “reverse search warrants” to extract other private data, such as identifying anyone who has searched for a specific phrase or forcing commercial genealogy companies to allow access to their DNA databases.
 
Advances in artificial intelligence already allow law enforcement to infer locations from photos and videos, even when no geolocation data is attached. Without firm constitutional limits, today’s location dragnet could become tomorrow’s visual surveillance dragnet.
 
The Fourth Amendment’s precise wording is designed to prevent unchecked surveillance. PPSA calls on the Supreme Court to reaffirm that Americans do not surrender their constitutional rights simply by carrying a cellphone.


AI, Facecrime, and the Growing Risk of Emotional Surveillance

1/27/2026

 
Are you having a good day? I certainly am! When I got to work this morning I could barely contain my excitement at seeing such a full inbox of wonderful things to do! I swear, at times it seems almost criminal to accept pay for doing work I love so much!

[Smile in the direction of the workplace surveillance camera.]

Anyway, I’d love to join you in the breakroom, but I really can’t wait to get back to my workstation! Toodles!

Artificial intelligence is getting better at reading human emotion. It is used by commercial technology to perform “sentiment analysis,” reading the emotional tone of written communications – a valuable tool for HR departments, advertisers, and customer-engagement consultants.
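
At its simplest, sentiment analysis is just scoring text against emotional word lists. The toy sketch below shows the basic idea; the word lists and function name are invented for this illustration, and commercial tools use machine-learned models far more sophisticated (and far more invasive) than this.

```python
# A toy, lexicon-based sentiment scorer -- the simplest possible version of
# the "emotional tone" analysis described above.
POSITIVE = {"love", "excited", "wonderful", "great", "happy"}
NEGATIVE = {"hate", "bored", "angry", "terrible", "sad"}

def sentiment_score(text: str) -> int:
    """Count of positive words minus count of negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this wonderful job!"))  # 2
print(sentiment_score("I am bored and angry."))       # -2
```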

The next bold step is already at the threshold: AI that can read emotions in our voices, the fleeting micro-expressions on our faces, and our body language. This technology will certainly expand into policing, hiring, and education. Are you acting guilty? Did you hide something in your job interview? Are you bored by the teacher’s lecture?

As biometric corridors become commonplace in U.S. airports, AI is being tested to read facial expressions and body language that could identify potential terrorists – based on the tidy theory that people who plan to blow themselves up at 35,000 feet tend to be nervous. But so are people who are running late for their connection, who just had an argument with a spouse, got fired, or are jet-lagged.

Emine Akar in a blog for the Institute for the Future of Work enumerated the potential pitfalls of emotional surveillance: “Emotions are not simply reflexes. They are complex, contextual, and culturally shaped experiences. A tear can mean grief, joy, manipulation, or even boredom.”

The other risk is that AI, which improves by the day, will read our emotions all too well. Pervasive emotional surveillance may force us to put on a happy face at work, school, and the airport. To frown may be to risk detention, interrogation, or delay. We could even risk committing “facecrime,” to name just one of the clever neologisms of George Orwell’s 1984.

That novel’s protagonist, Winston Smith, was well acquainted with facecrime. One always had to wear an expression of love when watching Big Brother on the telescreen, and an expression of rage during the mandatory Two Minutes Hate. Smith knew that the “smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide.”

When we allow machines to read our emotions, we risk giving them power over us. “The danger here is not just that machines fail to understand us,” Akar wrote. “It’s that they may begin to discipline us – nudging our expressions, altering our behavior, shaping our emotional lives in invisible ways.”

This kind of emotional manipulation was well captured in the movie Her, in which a man falls in love with an AI (not hard to do when the voice belongs to Scarlett Johansson). Pope Leo XIV was not being prescient – merely current – when he warned us over the weekend about getting involved with “overly affectionate” chatbots, lest they become “hidden architects of our emotional states.”
​
We need to be more concerned about the implications of emotionless minds that can read, exploit, and manipulate our emotions. The European Union’s AI Act, which restricts emotion-recognition systems in schools, workplaces, and other sensitive settings, shows one way to do it. It is time for Congress, the states, and technology leaders to put proper guardrails on the emotional surveillance of Americans as well.


