Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

Look, Up in The Sky! That’s Definitely Not Superman

7/15/2025

 
“The government will become the master, and the people its slaves.” – Anti-Federalist No. 84

The ACLU is suing Sonoma County for using an intrusive drone program as a general surveillance tool, operating in defiance of a 40-year-old California Supreme Court ruling that prohibits warrantless aerial surveillance.

The California county’s Code Enforcement Service (CES) began the illegal program six years ago, ostensibly to help find unpermitted cannabis grows in rural locations. But the ACLU says CES has expanded it into a general code enforcement tool, issuing millions of dollars in fines unrelated to cannabis – from building code violations to zoning infringements.

To accomplish this, CES employees have swooped in to “inspect” backyards, horse stables, playgrounds, hot tubs, outdoor baths, swimming pools, and more – you know, all the places where weed is usually grown. Sonoma drones have even flown under awnings and suspiciously close to curtainless windows. In fact, one of the plaintiffs captured a photo of a drone hovering just outside her bedroom window (© Nichola Schmitz/ACLU).
Given what this image implies, would this be a good time to mention that the drones employ zoom lenses? And you thought your HOA board was devious.

At least your HOA is regulated by state laws. As the Los Angeles Times points out, California has no laws regulating the use of drones by code enforcement offices (beyond the state Supreme Court ruling mentioned above). ACLU senior staff attorney Matt Cagle told the press: “When it comes to laws relating to government use of drones, it’s kind of the Wild West.” This legal uncertainty, together with the proliferation of warrantless drone surveillance, makes the ACLU lawsuit all the more important.

“The use of drones over someone’s private space raises a question of what is considered private,” UC Irvine law professor Ari Ezra Waldman told the Times. Then he gave the best analogy we’ve heard regarding the unconstitutional nature of these drone surveillance operations:

“ … if law enforcement on the ground wants to see on the other side of a tall fence or trees into someone’s property, they have to get the person’s consent or they need probable cause for a warrant. ‘Why shouldn’t that apply above ground too?’”

Or as the popular paraphrase of Isaac Newton’s translation of an ancient tablet goes: “As above, so below.”


Now, AI Writing Police Reports: What Could Go Wrong?

7/15/2025

 
EFF’s Matthew Guariglia and Dave Maass have brought to light a development in police stations across America that should concern every criminal defense attorney.

A new artificial intelligence program sold by Axon – the manufacturer of police body cameras and tasers – now offers police departments an AI agent that can take audio from body-worn surveillance cameras and convert it into a police report.

Getting police reports right is critical to ensuring justice in the courtroom. These reports are often the first drafts of a criminal prosecution. The officer writes down details about witnessing a possible crime and the discovery of evidence, and reports whether the suspect resisted arrest. A prosecutor will scour the police report for details to craft a narrative of guilt before judge and jury. Thus, the accuracy of a police report can mean the difference between freedom and prison for innumerable defendants.

Axon advertises its product, Draft One, as a convenient way to streamline this process. It does the desk work so officers can spend more time on the streets. Sounds good, but what could go wrong?

  • Any defense attorney will tell you that in the confusion of an arrest, officers will often order suspects to quit resisting arrest when, in fact, the suspect is trying to be cooperative. The straight conversion of the jumbled audio of an arrest risks all kinds of misrepresentations.
 
  • EFF writes: “The public should be skeptical of a language algorithm’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idiom and slang people use.”
 
  • EFF: “Police officers who deliberately speak with mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranslated what was happening in the chaotic video.”

The Draft One process, by design, thwarts any attempt to audit a report or to determine whether a given statement was written by an officer or by the AI. In filing a Draft One report, an officer does sign an acknowledgement that he or she reviewed the report and edited it in accordance with the officer’s recollection.

However, once the text is copy-and-pasted into the officer’s police report, and the window is closed, the original AI draft disappears. There is nothing to stop an officer from just downloading the AI report and making it official.

This aspect of Draft One is a feature, not a bug.

EFF dug up a video of a discussion about this new product. In it, Axon’s senior principal product manager said: “So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”

More disclosure headaches? Like what really happened in an arrest?

  • EFF sums up this position: “Axon deliberately does not store the original draft written by the Gen AI, because ‘the last thing’ they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).”
We are all for ways to make the difficult job of being a police officer easier. It is not too much to ask, however, that drafts of any AI assistance be kept, and that such assistance be disclosed to defense attorneys. One bill before the California Legislature, SB 524, would be a good start, mandating such transparency in America’s most populous state.
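
What might the fix look like in practice? Below is a minimal sketch, in Python, of the draft-retention idea: every AI draft and every officer edit is appended to a tamper-evident hash chain, so a court could later reconstruct exactly what the AI wrote and what the officer changed. The class, field names, and chain design are our illustration of the concept – not Axon’s system and not the text of SB 524.

```python
# Hypothetical sketch of tamper-evident draft retention for AI-assisted
# police reports. Each version (AI draft, officer edit) is appended to a
# hash chain; deleting or altering any entry breaks verification.
import hashlib
import json
import time

class ReportAuditLog:
    def __init__(self):
        self.entries = []  # append-only; nothing is ever deleted

    def append(self, author: str, text: str) -> None:
        """Record a new version of the report, chained to the previous one."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"author": author, "text": text,
                 "time": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edit or deletion breaks it."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("author", "text", "time", "prev")}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = ReportAuditLog()
log.append("draft-one-ai", "Subject resisted arrest and ...")
log.append("officer-1234", "Subject initially appeared confused, then ...")
assert log.verify()  # both versions remain intact and discoverable
```

With a record like this, a defense attorney could diff the AI’s draft against the officer’s final text – precisely the comparison that Draft One, as described above, is built to prevent.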


Discovery Order in New York Times Case Against ChatGPT Threatens Us All

7/14/2025

 
Who better to consult about the personal issues ChatGPT users raise than the AI chatbot itself? So we inquired.
 
On the personal side, ChatGPT says users most commonly ask about:
 
  • “Mental health and emotional support,” including how to handle “anxiety and stress, depression, relationship struggles and self-esteem.”
 
  • “Romantic Relationships & Dating,” including “dating advice” and “breakup help.”
 
  • “Career and Work Issues,” including “quitting a job” and “difficult coworkers or bosses.”
 
  • “Identity and Life Decisions,” including “sexual orientation or gender identity,” “religious and spiritual doubts,” and “major life choices.”
 
For several years, consumers have freely asked such questions, confident in ChatGPT’s promise that it doesn’t retain their queries once deleted.
 
Now, thanks to a pliable magistrate judge in New York, all such queries by hundreds of millions of users will be permanently stored – and subject to exposure through discovery in future lawsuits or via official warrants.
 
  • This is not to say that this case over copyright violations lacks merit. While developing ChatGPT, OpenAI and some of its competitors freely helped themselves to voluminous (as in Library of Congress-sized) databases, including the contents of The New York Times, without any licenses, permission, or compensation to the holders of the rights to that content. Copyrights were ignored.
 
  • But what is leaving civil libertarian and digital industry observers agog is the sweeping order by which a judge is forcing ChatGPT to violate its promise to its customers and store all users’ queries, no matter how personal.
 
  • Courts may well find that OpenAI’s free use of copyrighted material – allegedly lifted from Russian pirate websites – was an insane business plan from the start. But the judge’s order to lock down and preserve the private queries of 800 million people is equally insane.

Only a few business and education customers are exempt. As for the rest of us, virtually anything asked – no matter how personal – is a permanent record that lawyers in a nasty divorce or commercial dispute, or a government agent, could pry open with the right legal tools.

The actual number of users affected is estimated to be 10 percent of the world population. Yet as staggering as the number of affected users is, The Hill contributor and privacy attorney Jay Edelson says the case’s legal implications are of far greater concern:

“This is more than a discovery dispute. It’s a mass privacy violation dressed up as routine litigation … If courts accept that any plaintiff can freeze millions of uninvolved users’ data, where does it end? Could Apple preserve every photo taken with an iPhone over one copyright lawsuit? Could Google save a log of every American’s searches over a single business dispute? …

“This precedent is terrifying. Now, Americans’ private data could be frozen when a corporate plaintiff simply claims — without proof — that Americans’ deleted content might add marginal value to their case. Today it’s ChatGPT. Tomorrow it could be your cleared browser history or your location data.”

Blame not the plaintiff in this case, understandably concerned about the ransacking of its copyrighted material. Blame the judge for ordering such broad discovery. A better approach would have been a randomized sampling of a large number of users’ queries, anonymized to protect their privacy.
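
For the technically curious, here is a minimal sketch of what such sampling could look like: draw a small random fraction of query records and replace user identifiers with salted one-way hashes before anything is produced in discovery. The function names, log format, and sampling rate are all hypothetical; an actual protocol would be negotiated by the parties and supervised by the court.

```python
# Hypothetical sketch: random sampling plus pseudonymization of chat logs.
import hashlib
import random
import secrets

SALT = secrets.token_bytes(16)  # discard after sampling so hashes can't be linked back

def pseudonymize(user_id: str) -> str:
    """Replace a user ID with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def sample_for_discovery(logs, rate=0.001, seed=None):
    """Randomly keep a small fraction of records, stripped of identity."""
    rng = random.Random(seed)
    return [
        {"user": pseudonymize(rec["user_id"]), "query": rec["query"]}
        for rec in logs
        if rng.random() < rate
    ]

# Toy log of 100,000 queries; the sample keeps roughly 100 pseudonymized records.
logs = [{"user_id": f"user{i}", "query": f"question {i}"} for i in range(100_000)]
print(len(sample_for_discovery(logs, rate=0.001, seed=42)))
```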

Users – all of us whose private data is now at risk – were never consulted by the court. Two attempts from private citizens to intervene were smugly dismissed by the judge. Edelson writes:

“Maybe you have asked ChatGPT how to handle crippling debt. Maybe you have confessed why you can’t sleep at night. Maybe you’ve typed thoughts you’ve never said out loud. Delete should mean delete.”

Let us hope appellate courts replace this magistrate judge’s chainsaw with a scalpel.


Hackers Used CCTV Cameras to Target FBI Informants for Assassination

7/7/2025

 
We frequently report on the dangers of general surveillance in the hands of government actors willing to disregard quaint notions of privacy and civil liberties. Now comes a sobering reminder that bad actors can use the global surveillance economy to track down people in order to kill them.

According to The Guardian, that’s exactly what the Sinaloa drug cartel did in 2018, as detailed in a new Justice Department report. “El Chapo” Guzmán was extradited to the United States in 2017. As payback, a hacker working for El Chapo’s drug cartel subsequently accessed the phone of an FBI assistant legal attaché at the U.S. Embassy in Mexico City.

The cartel, reports Reuters, used the phone number to obtain records of calls in and out, as well as geolocation data. Next, the cartel got into Mexico City’s extensive camera system to track the FBI official and identify everyone who met with him. Some were intimidated and threatened. Others were murdered.

Out of the 10 largest metropolises in the world, Mexico City ranks seventh in CCTV cameras per 1,000 people – roughly six cameras per 1,000 residents. Yet there, as elsewhere, this blanket coverage has had little impact on the crime index. In the meantime, savvy criminals working for shadowy organizations can use it for criminal surveillance.

The Justice Department report says that the FBI has a strategic plan in the works to help mitigate such vulnerabilities. Such efforts are well-intentioned, to be sure, but likely to be one-sided. Playing defense against hackers with access to every tool they need is not a long-term solution.
Let’s start by putting stronger guardrails on camera networks in every country that uses them. The warning is clear: when governments create systematic surveillance networks, they can easily enable the very crimes they seek to prevent.


Computer Vision Research Is a Surveillance Incubator

7/7/2025

 
Perhaps it was always the height of naïveté to assume that forty years of research into “computer vision” – the field of artificial intelligence that allows computers to interpret images – would only make the world a better place. Now a landmark study published in Nature reports:

  • “Our findings challenge narratives that most kinds of computer vision and data extraction are largely benign or harmless and that only a small portion is harmful. Rather, we found that the computer-vision papers and patents prioritize intrusive forms of data extraction.”

Let’s be clear about what that means.

  • The study found that fully 86 percent of the field’s derived patents focused on using computer vision technology to create and extract human-related data, about bodies, body parts, and the spaces where such things are found. In other words, it is technology that tracks individuals and what we do.

The field of computer vision research, says ScienceBlog, has become “a vast network that transforms academic insights into tools for watching, profiling, and controlling human behavior.”

  • Australian academic Jathan Sadowski told The Register that the discipline’s research conveniently meets the insatiable surveillance needs of its friends in high places – the military, law enforcement, and corporations: “Computer vision's focus on human data extraction does not merely coincide with these powerful interests, but rather is driven by them.”

The Nature study also discovered that, to help normalize their voyeurism, researchers in the computer vision field had adopted some clever linguistic tactics, including referring to humans and our body parts as “objects” – a kind of dehumanizing rhetoric that brings to mind the worst of the previous century. The obfuscating language, notes Sadowski, has the added bonus of abstracting “any potential issues related to critical inquiry, ethical responsibility, or political controversy.”

Such studies underscore a pressing reality, namely that we are rushing full tilt into uncharted territory without brakes and without any guardrails on the road. And in some cases, without any roads at all or even any maps.

Technology, and especially AI, doesn’t have to exist in the absence of privacy and accountability. With that goal in mind, one of the leaders of the Nature study, Dr. Abeba Birhane, recently established the AI Accountability Lab at Trinity College Dublin. She represents the best of us in the struggle against an unrestrained surveillance state. The lab’s whole site is worth a deep dive, but we’ll leave you with this excerpt:

  • “AI systems are integrated hastily into numerous social sectors, most of the time without rigorous vetting. As a result, AI systems built on social, cultural, and historical data and operating within such a realm tend to diminish fundamental rights, and keep systems of authority and power intact.”

AI Accountability Lab reminds us that any new powerful technology inevitably has political and personal consequences. When technology drives the times, philosophy and prescience are needed as never before.
Congress and the states need to enact – use whatever metaphor you like – legal maps, guardrails, brakes. In the end, they all represent the same idea – a return to the privacy guaranteed to all American citizens in the Bill of Rights.


How To Build A Surveillance State Without Really Trying: Naïve Magistrate Declares “Privacy In Our Time”

6/30/2025

 
If you wanted to build a mass surveillance program capable of monitoring 800 million people, where would you start? Ars Technica’s Ashley Belanger found the answer: You order OpenAI to indefinitely retain all of ChatGPT’s regular customer chat logs, upending the company’s solemn promise of confidentiality for customers’ prompts and chats.

Which is what Ona Wang, U.S. Magistrate Judge for the Southern District of New York, did on May 13. From that date forward, OpenAI has had to keep everything – even chats users deleted – stored “just in case” it’s needed someday.

We asked ChatGPT about this, and it told us:

  • Yes, your current chat questions (and past ones you may have deleted or used in “temporary mode”) are being retained in a secure, segregated legal-hold system.

So our lives – health, financial, and professional secrets – are now being stored in AI chats that Judge Wang thinks should be kept on file for any warrant or subpoena, not to mention any Russian or Chinese hacker.

Not included in the judge’s order are ChatGPT Enterprise (used by businesses) and Edu data (used by universities). Problem: Many businesses and students use regular ChatGPT without being Enterprise subscribers, including entrepreneur Jason Bramble. He asked the judge to consider the impact of her ruling on – well, you name it – his company’s proprietary workflows, confidential information, trade secrets, competitive strategies, intellectual property, client data, patent applications, trademark requests, source code, and more.

  • Perhaps the greatest irony of the judge’s order is that it decimates the privacy-focused “Temporary Chats” feature OpenAI recently debuted. They are “temporary” no longer. Originally, those chats were designed to vanish once you closed them; they were never part of the user’s account history or memory. They were meant to be secret, one-off conversations with no record. Now, they are digitally accessible memories.

As for the underlying case giving rise to all of this overreach, it involves a copyright infringement lawsuit between OpenAI and the New York Times. It’s a big case, to be sure, but no one saw this coming except for Jason Bramble and one other ChatGPT user, Aidan Hunt.

Hunt had learned about the judge’s order from a Reddit forum and decided it was worth fighting on principle. In his motion, he asked the court to vacate the order or at least modify it to exclude highly personal/private content. He politely suggested that Judge Wang was overstepping her bounds because the case “involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate to institute a nationwide mass surveillance program by means of a discovery order in a civil case.”

Judge Wang’s response was petulant.

She noted that Hunt mistakenly used incident when he meant incidental. And then she casually torpedoed two hundred years of judicial review by denying his request with this line: “The judiciary is not a law enforcement agency.” Because, after all, when have judicial decisions ever had executive branch consequences?

Judge Wang had denied business owner Jason Bramble’s earlier request on the grounds that he hadn’t hired a lawyer to draft the filing. The magistrate is swatting at flies while asking ChatGPT users to swallow the herd of camels she’s unleashed. Even if a properly narrowed legal hold to preserve evidence relevant to The New York Times’ copyright infringement claim would be appropriate, the judge massively overstepped in ordering OpenAI to preserve global chat histories.

The complaints of Bramble and Hunt, as well as similar pleadings from OpenAI, aim true: The court’s uninformed, over-reaching perspectives ignore the pressing realities of pervasive surveillance of those who accepted the promise that their conversations with ChatGPT were truly private.

Judge Wang wondered Hamlet-like whether the data could be anonymized in order to protect users’ privacy. As we’ve written before, and is now commonly understood, government and hackers have the power to deanonymize anonymized data. As MSN points out, the more personal a conversation is, the easier it becomes to identify the user behind it.
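
A toy calculation shows why. Treat each personal detail in a chat as a filter on the population of possible authors; the frequencies below are invented round numbers, and independence is assumed only for simplicity – real re-identification attacks are more sophisticated, not less effective.

```python
# Back-of-envelope illustration of deanonymization by quasi-identifiers.
population = 800_000_000  # roughly ChatGPT's reported user base

details = [
    ("lives in a particular mid-sized city", 1 / 1_000),
    ("works in a particular niche profession", 1 / 5_000),
    ("mentions a rare medical condition", 1 / 10_000),
]

candidates = float(population)
for description, frequency in details:
    candidates *= frequency
    print(f"after '{description}': ~{max(candidates, 1):,.0f} plausible users")

# after 'lives in a particular mid-sized city': ~800,000 plausible users
# after 'works in a particular niche profession': ~160 plausible users
# after 'mentions a rare medical condition': ~1 plausible users
```

Three ordinary disclosures take 800 million people down to one. The conversation effectively names its author.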

In declaring that her order is merely about preservation rather than disclosure, Judge Wang is naively declaring “privacy in our time.” As in 1938, we stand at the edge of a gathering storm – this time, not a storm of steel, but of data.

What can you do? At the least, you can start minding your Ps and Qs – your prompts and questions. And take to heart that “delete” doesn’t mean what it used to, either.

Here's a chronology of Ashley Belanger’s detailed reporting on this story for Ars Technica:
  • June 4
  • June 6
  • June 23
  • June 25


Spy On a Wrist: How Smartwatches Can Penetrate “Airgapped” Laptops

6/24/2025

 
There you are in an overstuffed chair at your favorite coffee shop, sipping a vanilla sweet cream cold brew and working on that top secret professional project. But you know your laptop is vulnerable to snoopers through local Wi-Fi, so you “airgap” it – cut it off from networks.

This everyday form of airgapping means keeping your laptop unplugged from any physical internet or ethernet line. You would also disable all but the most essential programs, and turn off your Wi-Fi and Bluetooth. You might also want to arrive with plenty of juice to keep your laptop charged, given that some public USB charging ports have been converted into data extractors – a practice known as “juice jacking.” (The TSA and the FBI have warned that this occurs at airports.)

Are you safe? Probably. But now we know that a person with a smartwatch seated several tables away might still be able to extract some of your data – by pulling it out of the air. All because you forgot to disable your laptop’s audio systems.

This is the finding of Ben-Gurion University researcher Mordechai Guri, who has made a career of finding exploitable weaknesses in computer networks of all kinds. He excels in identifying ways to break into standalone systems, long considered the gold standard in cyber security because they’re not connected to the outside world. Where the rest of us see only air, Dr. Guri observes an invisible world of electromagnetism, optics, vibration, sound, and temperature – all of them potential channels for covertly stealing and transmitting our data.

Now he’s suggesting that the humble smartwatch can take advantage of sound waves to defeat airgapped systems.

But just as no man is an island, no computer is completely, truly airgapped. Dr. Guri writes:
“While smartphones have been extensively studied in the context of ultrasonic covert communication, smartwatches remain largely unexplored. Given their widespread adoption and constant proximity to users, smartwatches present a unique opportunity for covert data exfiltration.”

It isn’t easily done, to be sure, but it’s doable. Here’s what Dr. Guri describes:

  • An insider compromises a secured network or device (or your laptop) and installs malware.
 
  • A nearby smartwatch has been modified to take advantage of its various connectivity capabilities, turning it into a covert listening device – one that can receive, for example, everything you’re typing into that text editor or spreadsheet.
 
  • The malware and the smartwatch connect. Beyond the range of human hearing, the malware transmits its stolen data at ultrasonic frequencies using the computer’s speakers.
 
  • Computer and smartwatch can be up to 18 feet apart and still exchange data. That’s more than enough to compromise an airgapped computer – enough to steal a password in about a minute, or a 4,096-bit encryption key in about an hour.
 
  • The smartwatch decodes the transmission and sends it where it needs to go via its many available connections.
 
  • Mission accomplished.

What makes the overlooked smartwatch so effective in this scenario? Pretty much everything about it, says Dr. Guri: “Smartwatches possess several technological features that enable them to receive ultrasonic signals effectively.” These include high-sensitivity microphones, advanced signal processing software, and powerful chips. (Dr. Guri’s personal site is appropriately named covertchannels.com and offers a deep-dive into his extensive research history.)
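
For readers who want to see the mechanics, here is a minimal sketch of the kind of ultrasonic frequency-shift keying such a covert channel could use: each bit becomes a short tone just above the range of adult hearing, and the receiver recovers bits by checking which tone dominates each time slot. The frequencies, bit rate, and code below are our illustration of the general technique, not Dr. Guri’s actual implementation.

```python
# Hypothetical sketch: encoding and decoding bytes as ultrasonic FSK tones.
import numpy as np

SAMPLE_RATE = 48000   # standard audio rate; can reproduce tones up to 24 kHz
FREQ_ZERO = 18500     # Hz for a 0 bit -- near the edge of human hearing
FREQ_ONE = 19500      # Hz for a 1 bit
BIT_DURATION = 0.05   # seconds per bit (~20 bits/sec in this idealized toy)

def encode(data: bytes) -> np.ndarray:
    """Turn bytes into a waveform: one ultrasonic tone per bit."""
    samples = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples) / SAMPLE_RATE
    chunks = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            chunks.append(np.sin(2 * np.pi * (FREQ_ONE if bit else FREQ_ZERO) * t))
    return np.concatenate(chunks)

def decode(signal: np.ndarray) -> bytes:
    """Recover bytes by checking which tone dominates each bit slot."""
    samples = int(SAMPLE_RATE * BIT_DURATION)
    bits = []
    for start in range(0, len(signal) - samples + 1, samples):
        window = signal[start:start + samples]
        spectrum = np.abs(np.fft.rfft(window))
        freqs = np.fft.rfftfreq(len(window), 1 / SAMPLE_RATE)
        power = lambda f: spectrum[np.abs(freqs - f) < 200].sum()
        bits.append(1 if power(FREQ_ONE) > power(FREQ_ZERO) else 0)
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

assert decode(encode(b"hunter2")) == b"hunter2"  # round trip through "the air"
```

Real acoustic channels are far noisier and slower than this idealized round trip, which is why the reported figures are on the order of a minute for a password and an hour for a 4,096-bit key.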
A smartwatch attack is a low-probability event for most people, at least for the moment. But the takeaway is that airgapping is still at best one layer of protection, not a guarantee of perfect security.  


The CLOUD Act Raises Bipartisan Hackles

6/18/2025

 

Hearing Evokes Unprompted, Strong Endorsement of a Warrant Requirement for Section 702

The CLOUD Act of 2018 is a framework for working with U.S. tech companies to share digital data with other governments. The law, which serves as the basis for international agreements, was a reasonable concession to allow these companies to do business around the world. But the U.S.-U.K. agreement under it has gone off the rails because of the United Kingdom’s astonishing attempt to force Apple to break end-to-end encryption so it can access the data of all Apple users stored in the cloud.

Rather than violate the privacy of its users, Apple has stood by its customers and withdrawn encrypted iCloud storage from the UK altogether.

The House Judiciary Committee’s Subcommittee on Crime and Federal Government Surveillance was already skeptical of that agreement, and was appalled when the British government used it to secretly order Apple to provide unfettered, backdoor access to all the cloud content uploaded by every Apple user on the planet. It was an unprecedented demand, and an unexpected one from a fellow democracy.

  • In the two years the agreement has been in effect, the UK issued more than 20,000 requests to U.S. service providers. The bulk of those requests involved wiretapping surveillance.
 
  • In comparison, the United States issued a mere 63 requests to British providers, mostly for stored data.
 
  • Compare the UK’s 20,000 requests to the 4,507 wiretap orders of U.S. federal and state law enforcement agencies in criminal cases in two years. The United States has five times the population of the U.K., but only issues about one-fourth the number of such orders.

In April, members of the House Judiciary Committee asked Attorney General Pam Bondi to terminate the U.K. agreement. As extreme as that sounds, PPSA supports that proposal as the best way to persuade Britain to back off an unreasonable position. In the worst-case scenario, no agreement would be better than comprehensive violation of Americans’ privacy.

Undeterred, the subcommittee convened a recent hearing entitled “Foreign Influence On Americans’ Data Through The CLOUD Act.” Greg Nojeim from the Center for Democracy & Technology was an invited witness. If one had to name a single theme to his powerful testimony, it would come down to one word: “dangerous.”

Subcommittee Chairman Andy Biggs used the same word, declaring the secretive British demand of Apple “sets a dangerous precedent and if not stopped now could lead to future orders by other countries.” Ranking Judiciary Committee Member Jamie Raskin struck a similar chord: “Forcing companies to circumvent their own encrypted services in the name of security is the beginning of a dangerous, slippery slope.”

In short, the hearing demonstrated that the CLOUD Act has been abused by a foreign government that does not respect privacy and civil liberties or anything remotely like the Fourth Amendment to our Constitution. It needs serious new guardrails, beginning with new rules to address its failure to protect encryption. Expert witness Susan Landau of Tufts University warned the subcommittee that the U.K. appeared to be undermining encryption as a concept. A U.S.-led coalition of international intelligence agencies, she observed, recently called for maximizing the use of encryption to the point of making it a foundational feature of cybersecurity. Yet Britain conspicuously demurred.

  • Rep. Biggs said: “Efforts to weaken, or even breaking, encryption makes us all less secure. The U.S.-U.K. relationship must be built on trust. If the U.K. is trying to undermine this foundation of cybersecurity, it is breaching that trust.” Once pried open, he cautioned, “It's impossible to limit a back door [around encryption] to just the good guys.”
 
  • Rep. Raskin warned that issues with the CLOUD Act itself are emblematic of larger privacy issues. “None of these issues exists in a vacuum. All government surveillance curtails all citizens’ liberties.” To which witness Richard Salgado added, “If there's still a real debate about whether security should yield to government surveillance, it doesn't belong behind closed doors in a foreign country … the debate belongs in public before the United States Congress.”

That debate will likely become intense between now and next spring when Congress takes up the reauthorization of Section 702 of FISA, the Foreign Intelligence Surveillance Act. Judiciary Chairman Jim Jordan indicated as much when he used his opening remarks to tout the “good work” the Committee has ahead of it in preparing to evaluate and reform Section 702.

Later in the hearing, Chairman Jordan returned to the looming importance of the Section 702 debate, asking each of the witnesses in turn a version of the question, “Should the United States government have to get a warrant before they search the 702 database on an American?”

All agreed without hesitation.

“Wow!” declared Rep. Jordan in response. “This is amazing! We all think we should follow the Constitution and require a warrant if you're going to go search Americans’ data.”

Rep. Raskin nodded along. And that’s as bipartisan as it gets.


Citizen Lab: Italian Intelligence Used Israeli Paragon’s Graphite Malware to Spy on Journalists, Activists

6/17/2025

 
Israel’s spycraft is first-rate. From the “pager” attacks that decapitated Hezbollah, to the surgical strikes over the last few days that have eliminated Iran’s top generals and scientists, it is clear that Israel’s strategic success owes much to world-leading intelligence capabilities in the digital realm.

“In Israel, a land lacking in natural resources, we learned to appreciate our greatest national advantage – our minds,” said the late Israeli Prime Minister Shimon Peres.  Under constant threat, Israel has applied its great minds to information technology in the service of national defense.

What works well in the national security space for Israel, however, is a problem for the rest of the world when cutting-edge surveillance technologies are exported. PPSA has extensively covered the Israeli-based NSO Group, which released malware called Pegasus into the international market. Pegasus is a “zero-click” attack that can infiltrate a smartphone, extract all its texts, emails, images and web searches, break the encryption of messaging apps like WhatsApp and Signal, and transform that phone’s camera and microphone into a 24/7 surveillance device.

It is ingenious, really. Zero-click means the victim doesn’t have to accidentally fall for a phishing scam. The malware is just installed into a phone remotely. Victims can then be counted on to do what we all do – compulsively carry their smartphones with them wherever they go, allowing total surveillance of all they and their friends say and do.
  • Once released on the international market by the NSO Group, Pegasus rapidly spread to democracies and illiberal regimes alike. It has been implicated in the targeted murder of a journalist in Mexico at the hands of a cartel, as well as the murder of Jamal Khashoggi in the Saudi consulate in Istanbul. Pegasus allowed agents of an African dictatorship to listen in on a conversation at the State Department. And it has played a prominent role in the targeting of political opponents in governments from Madrid to New Delhi.

Another Israeli technology company, Paragon, differentiates itself from the NSO Group by promising a more careful approach. Its U.S. subsidiary advertises itself as “Empowering Ethical Cyber Defense.”

  • One of Paragon’s products is Graphite, also a zero-click malware that can infect digital devices. It differs from Pegasus by mostly targeting data from cloud backups instead of extracting data directly from a phone. Paragon’s apparent efforts to ensure the ethical use of this technology by its customers have failed.
 
  • Digital investigators at Citizen Lab at the University of Toronto revealed on Thursday that a prominent European journalist (who requested anonymity) and Italian journalist Ciro Pellegrino were told that they had been targeted by Paragon’s Graphite.
 
  • A June 5 report from an Italian parliamentary committee with oversight responsibility over Italy’s intelligence services acknowledged forensic evidence that Graphite was used against two leaders of an NGO, Mediterranea Saving Humans, which advocates for immigrants.

Much of the world media reports that an indignant Italian government severed ties with Paragon. But Israeli media reports that after the Italian government rejected an offer by the company to investigate one of these cases, it was Paragon that unilaterally terminated its contract with the Italian government.

The takeaway from all this is that even with a responsible vendor who sets guardrails and ethical policies, a zero-click hack is too tempting a capability for intelligence services, even those in democracies. Whether Pegasus or Graphite, a zero-click, total surveillance capability is like a dandelion in the wind. It will want to go everywhere – and eventually, it will.


There’s Nothing Golden About China’s Golden Shield

6/17/2025

 
The Ninth Circuit ruled that American tech companies share a degree of liability if their tools facilitate human rights abuses in other countries. The court’s 2023 decision meant that thirteen members of the Falun Gong spiritual practice group could continue to press their years-long case against Cisco Systems for its role in supporting China’s “Golden Shield.”
 
Golden Shield is the Chinese Communist Party’s domestic internet surveillance system. Members of the Falun Gong creed claim that the Chinese government used the Cisco-powered system to aggressively persecute them in a long-running and coordinated campaign.
 
Because a significant portion of Cisco’s work on Golden Shield was done in the United States, ruled the Ninth Circuit, the plaintiffs had sufficient standing to sue here. Importantly, the court noted that, “Cisco in California acted with knowledge of the likelihood of the alleged violations of international law and with the purpose of facilitating them.” The company’s role was essential, direct, and substantial to the point of being liable for “aiding and abetting.”
 
As the Electronic Frontier Foundation points out, this ruling wouldn’t apply to American companies that merely market a tool that anyone could buy and then potentially misuse. What happened in this case was different. Cisco is alleged to have designed, built, maintained – and even upgraded – a “customized surveillance product that the company knew would have a substantial effect on the ability of the Chinese government to engage in violations of human rights.” In so many words, said the Court in assessing Cisco’s role, the Chinese couldn’t have done it without them. To wit, Cisco empowered the following aspects of the Golden Shield surveillance system:
  • Pattern analysis to identify Falun Gong members’ internet activity
  • Real-time monitoring of those activities
  • Reporting out this data to Chinese security officers
  • Analyzing the system over time to make it more efficient
  • Increasing the scope of the original system
  • Upgrading the system with its “Ironport” tool to track emails

Cisco is accused of doing this while simultaneously helping the Chinese build a nationwide video surveillance system. The result was a state-of-the-art integrated system capable of creating “lifetime” information profiles on Falun Gong members, so full-featured that it could even be updated with data from members’ latest “interrogation” and “treatment” sessions at the hands of Chinese security personnel.
 
Cisco is alleged to have done all this in an environment in which it was common knowledge that torture and other violations of international law were likely to take place. This is not conjecture: that knowledge was documented in news coverage, shareholder resolutions, State Department communiqués, and more.
 
Cisco rejects the Ninth Circuit’s decision, and recently asked the U.S. Supreme Court to grant cert and rule in its favor. The High Court has yet to decide whether it will do so, but on May 27 it asked the Solicitor General to weigh in with the government’s opinion.
 
This case has always been about testing whether foreign victims can sue U.S. companies for deliberately helping foreign governments commit human rights abuses – abuses that advanced surveillance systems make all too likely. Let’s hope the Supreme Court denies Cisco’s request. If it does, the case will simply move forward in California, where Cisco and its accusers will get a full and proper hearing.
 
This is too important a question with too many far-reaching implications to skip a step.


Watching the Watchers: Sen. Paul on Open Skies

6/10/2025

 
Sen. Rand Paul (R-KY) celebrated the termination of the “Quiet Skies” surveillance program, in which Federal Air Marshals posed as airline passengers to shadow targets.

This $200 million-a-year program did not, according to the Department of Homeland Security, stop a single terrorist attack. But, in the words of Sen. Paul in The American Conservative, it “was an unconstitutional dystopian nightmare.” Sen. Paul writes:

“According to Department of Homeland Security documents I obtained, former Congresswoman and now Director of National Intelligence Tulsi Gabbard was surveilled under the program while flying domestically in 2024. Federal Air Marshals were assigned to monitor Gabbard and report back on their observations including her appearance, whether she used electronics, and whether she seemed ‘abnormally aware’ of her surroundings. She wasn’t suspected of terrorism. She wasn’t flagged by law enforcement. Her only crime was being a vocal critic of the administration. What an insanely invasive program – the gall of Big Brother actually spying on a former congresswoman. It’s an outrageous abuse of power … 

“And perhaps the most absurd of all, the wife of a Federal Air Marshal was labeled a ‘domestic terrorist’ after attending a political rally. She had a documented disability and no criminal record. Still, she was placed under Special Mission Coverage and tracked on commercial flights – even when accompanied by her husband, who is himself a trained federal law enforcement officer. She remained on the watchlist for more than three years. To make matters worse, this case resulted in the diversion of an Air Marshal from a high-risk international mission ...

“Liberty and security are not mutually exclusive. When government hides behind secrecy to justify surveillance of its own people, it has gone too far.”


Big Brother Has A New Name: Executive Order 14243

6/5/2025

 
HBO’s hit series Westworld wasn’t actually about replicating the old West; it was a cautionary tale about the new frontier of artificial intelligence.

It didn’t end well. For the humans, that is. The third season’s big reveal was a sinister-looking AI sphere the size of a building, called Rehoboam. It was shaped like a globe for a very good reason – it determined the destinies of every person in the world. It predicted and manipulated human behavior and life paths by analyzing massive amounts of personal data – effectively controlling society by assigning roles, careers, and even relationships to people, all in the name of preserving order.

The American government – yes, you read that correctly – America, not China, is plotting to build its own version of Rehoboam. Its brain trust will be Palantir, the AI power player recently called out in the Daily Beast with the headline, “The Most Terrifying Company in America Is Probably One You’ve Never Heard Of.”

In March of this year, President Trump issued Executive Order 14243: “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos.” The outcome will be a single database containing complete electronic profiles of every soul in the United States. And all of it is likely to be powered by Palantir’s impenetrable, proprietary AI algorithms.

Reason got to the heart of what’s at stake: an AI database on such a massive scale is only nominally about current issues such as tracking illegal immigrants. It’s really about the government’s ability to profile anyone, anytime, for any purpose.

With a billion dollars in current federal contracts across multiple agencies, Palantir is now in talks with the Social Security Administration and the IRS. Add that to existing agreements with the Departments of Defense, Health and Human Services, Homeland Security, and others – plus the Biden administration’s earlier contract with Palantir to assist the CDC with vaccine distribution during the pandemic.

While the primary arguments in favor of such an Orwellian construct are commendable-sounding goals like a one-stop shop for efficiency, PPSA and our pro-privacy allies find such thinking – at best – appallingly naïve.

And at worst? There’s an applicable aphorism here: “This is a bad idea because it’s obviously a bad idea.” Let’s not kid ourselves – this is the desire for control laid bare, and its results will not be efficiency, but surveillance and manipulation. It makes sense for Treasury to know your tax status or State to know your citizenship status. But a governmentwide database, accessible without a warrant by innumerable government agents, is potentially the death knell for privacy and the antithesis of freedom.

Think of all the government already knows about you, your family, and friends across multiple federal databases. All this data is about to be mobilized into one single, easily searchable database, containing everything from disability status and Social Security payments to personal bank account numbers and student debt records to health history and tax filings – plus other innumerable and deeply personal datapoints ad infinitum.

Simply put, this database will put together enough information to assemble personal dossiers on every American.

It is bad enough to think any U.S. government employee in any agency will have access to all of your data in one central platform. But at least those individuals would theoretically be authorized for such access. Not so the Russian and Chinese cyberhackers who’ve already demonstrated the ability to lift U.S. databases in toto.
If that ever happens with this database, it will truly become a matter of one-stop shopping.


How the Law Can (Partially) Catch Up with Privacy-Destroying Smartglasses

5/27/2025

 
The mass emergence of smartglasses is sure to shrink what little privacy Americans have left. But there are steps legislators should consider:

  • Debating ways to protect people by requiring smartglasses to prominently display a light when recording.

Meta already has this feature – a small, white LED light on the right-hand side of the glasses. If a Meta user should block the light with a finger, the glasses stop recording. (Unfortunately, the light – essentially a white dot – may be hard to see, especially in a bright room or when people are in motion.)

At the very least, a clearly visible light-on when recording should be required by law for all brands of smartglasses.
 
  • Protecting children and minors by forbidding strangers from taking their images and posting them online (unless it is in the service of a First Amendment activity like newsgathering).
 
  • Clarifying the rights of individuals regarding reasonable expectations of privacy, including protections against the commercial use of their name, image, and likeness (NIL). Much will need to be sorted out to balance the First Amendment against consumers’ NIL rights, but this debate is inevitable.
These will not be easy laws to craft. Balancing them against the First Amendment will require painful tradeoffs. It is best to begin what promises to be a difficult process now, before smartglasses become a pervasive feature of American life.


A Requiem for Privacy: Smartglasses and the Making of a Paranoid Society

5/27/2025

 
Arthur C. Clarke, the great 20th century science-fiction writer, once imagined a world where privacy no longer exists after a media company markets an atom-sized camera that can be dispatched anywhere without being detected. Want to be a fly on the wall during national security conversations? Done. Need to verify your teenager’s actual whereabouts? Simple. Curious to know what your friends say about you when you’re not around? Answered.
 
Setting the sci-fi aside, the concept itself – a privacy-destroying technology hiding in plain sight – is about to be proved frighteningly real. In Clarke's story, The Light of Other Days, the media company that invented the invasive tech is called OurWorld. In our world, that company is called Meta.
 
More on that in a moment. First, back to Clarke’s story, co-authored with writer Stephen Baxter. The complete loss of privacy led some groups to declare camera-free privacy zones, but those don’t work when a technology cannot be detected or regulated. Other reactions were more pathological. Some people chose suicide, others self-imposed isolation and, of course, every imaginable form of anxiety and psychological distress came to the fore. And pretty much everyone became paranoid to varying degrees. It turned out, in fact, that the opposite of privacy is paranoia. Humans really need their secrets, and not just the people with something to hide.
 
In the real and present day, Meta has created its own version of a camera that hides in plain sight – Meta Ray-Ban sunglasses. Now in their third generation, the glasses have drawn a critical reception suggesting they’re here to stay. Two million have already been sold – and that was before the third-generation gamechanger that’s about to be rolled out. Meta’s goal is to sell 10 million smartglasses a year by 2027. No prescription required.
 
If you’re thinking, “So, what? I’m never buyin’ a pair of those,” technology and culture critic John Mac Ghlionn says you’re missing the point. It’s not about who’s wearing them. It’s about who’s wearing them while watching and listening to you.
 
“Imagine walking down a street and having your face scanned by a dozen pairs of AI glasses, your expressions analyzed, your emotional state catalogued by strangers in real time.” In Ghlionn’s mind – and we’re inclined to agree – this is a seismic shift. Other media revolutions were shared, public experiences – whether books, magazines, film, or television. Even social media is public. They were visible, vied for our attention, and were meant to be shared.
 
Meta smartglasses are the opposite of all of that: personal surveillance tech. To quote Ghlionn at length:
 
“These aren’t just toys. They’re tools – and weapons. They comprise a camera, microphone, an AI interface and internet access, all embedded discreetly in eyewear. They are capable of recognizing faces, interpreting language, overlaying information in real-time and collecting vast swaths of data as their owners simply walk down the street. They can whisper comprehensive summaries about the stranger across the subway, translate foreign speech in real time, suggest pickup lines, record interactions without consent.”
 
How can we possibly manage our personal privacy in such a world? Do you declare that a business dinner with important clients is a “smartglasses-free zone”? What if your partner, already an avid Facebook and Instagram user, takes the next logical step in Mark Zuckerberg’s calculus and buys a pair? Now you will never be able to tell your significant other – “I didn’t say that!” He or she can now play your words back for you.
 
Privacy was always going to be easier in an analog world, before the age of connectivity and digitization that have come to characterize the 21st century. In the before times, you could shut a door. “Records” were physical things that could be kept under lock and key. Surreptitiously snapping a picture of someone required a degree of stealth similar to that of an assassin.
 
As human civilization transitioned into the digital age, traditional notions of privacy got lost in translation. Margaret O’Mara wrote in her 2018 New York Times essay that today’s problem with data privacy began when we first started passing privacy legislation in the 1960s, which was aimed at regulating the collection of data. Those early legislators didn’t stop to ask whether data should be collected in the first place. In other words, she says, from that moment on, it became a question of data transparency rather than data restriction.
 
But seven years later, Meta’s new project is a quantum leap even beyond social media. Transparency? That’s so 2018. If only we’d answered that question and the ones before it when we had the chance. Because what’s about to happen will not test the limits or nature of privacy, but the very idea of privacy itself. And how will we respond when the world around us is, by design, looking at, listening to, and recording us, everywhere and always?
 
The psychological and social consequences of pervasive surveillance are unpredictable but sure to be profound.

Perhaps most of all, new customs and social norms will be needed. Otherwise, as Clarke wrote: “As our own species is in the process of proving, one cannot have superior science and inferior morals. The combination is unstable and self-destroying.”


Big Brother in the Big Easy

5/26/2025

 
If we were writing a techno-thriller set in modern-day New Orleans, we’d use the catchy title above and include these basic plot points – all of them real:
  • A private nonprofit is selling AI-powered facial recognition technology capable of analyzing faces in real time. It uses powerful hardware and software made by a Chinese company called Dahua, banned by the Federal Communications Commission. More than 200 AI-powered cameras are spread around areas of the city considered high crime. The nonprofit is the brainchild of a former NOPD officer who says he built the database using 30,000 faces from mugshots and other publicly available records. But with no transparency or audits, the true nature of the database and its algorithms remain opaque.

  • The cameras are owned by individuals and businesses in addition to the nonprofit, which subsidizes the cost. As a private network, it operates outside the realm of public accountability. The nonprofit operates under the innocuous title “Project NOLA.” It’s funded by donations and other private sources.

  • Perhaps sensing an opportunity to bypass legal requirements for reporting and oversight, the New Orleans PD engages Project NOLA. No city contract. No fees. No legal reviews. In theory at least, Project NOLA does all the work, and the police are simply informed (although they can request footage and ask Project NOLA to look for someone).

  • The system is fast and sophisticated, even capable of handling low-light conditions and poor camera angles (at up to 700 feet away from a target). It is effectively a real-time general surveillance tool, scanning faces on streets for any matches in its database (a sketch of the general matching technique appears after this list). If it finds one, officers immediately receive alerts via an app. If someone’s face isn’t already in the database, Project NOLA can upload an image, and recorded feeds can be searched over the past 30 days, retracing one’s movements.

  • The project runs for two years before the Washington Post exposes the operation through records requests (and the fact that Project NOLA’s owner would sometimes post on Facebook). Police make dozens of arrests in that time, but because Project NOLA is a private operation, there is no way to know what other steps (if any) were taken in pursuit of due process, nor is there any data on potential misidentifications.

  • The entire arrangement appears to run deeply afoul of a New Orleans city ordinance limiting the use of facial recognition software to cases involving violent crime. It also completely bypasses the required use of the state’s crime investigation “fusion” center (so named because various law enforcement agencies collaborate there), where experts must agree that an image matches a potential suspect.
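
For context, here is a minimal sketch of how real-time face matching of this general kind works: a recognition model turns each face into a numeric “embedding,” and the system fires an alert when a face from a live frame lands close enough to one in the database. Every name, dimension, and threshold below is illustrative – as noted above, Project NOLA’s actual database and algorithms remain opaque.

```python
# Hypothetical sketch of embedding-based face matching with an alert threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(live_embedding, database, threshold=0.6):
    """Return database identities whose embeddings are close to the live face."""
    return [
        name for name, emb in database.items()
        if cosine_similarity(live_embedding, emb) >= threshold
    ]

# Toy database of 128-dimensional embeddings standing in for 30,000 mugshots
# (a real system derives these vectors from photos with a trained model).
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(30_000)}

live = database["person_17"] + rng.normal(scale=0.1, size=128)  # noisy street sighting
print(find_matches(live, database))  # ['person_17'] -> alert sent to officers' app
```

Note what the threshold hides: set it low and innocent look-alikes trigger alerts; set it high and the system quietly misses. With no transparency or audits, no one outside the nonprofit knows where that dial sits.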

The central crisis of our thriller will surely involve innocent citizens caught up in a dragnet of unbridled police authority, the thwarting of civilian oversight, and a complete disregard for constitutional rights.
 
And the dénouement? We hope it involves NOPD Superintendent Anne Kirkpatrick stepping up and doing what she told the Washington Post: “We’re going to do what the ordinance says and the policies say, and if we find that we’re outside of those things, we’re going to stop it, correct it and get within the boundaries of the ordinance.”
 
Meanwhile, next time you’re on Bourbon Street, wear a Star Wars style cloak that covers your face. And be careful what “establishments” you frequent.


“Incredibly Juicy Targets” – Sen. Wyden Reveals Surveillance of Senate Phone Lines

5/24/2025

 
Sen. Ron Wyden (D-OR) informed his Senate colleagues Wednesday that “until recently, Senators have been kept in the dark about executive branch surveillance of Senate phones.”
 
AT&T, Verizon, and T-Mobile failed to meet contractual obligations to disclose such surveillance to the Senate Sergeant at Arms. Sen. Wyden wrote in a letter to his colleagues that their campaign and personal phones, on which official business can be conducted under Senate rules, are not covered by this provision. He called these phones “incredibly juicy targets.”
 
Sen. Wyden recommended that his colleagues switch their campaign and personal phones to providers willing to make such disclosures.
 
The purpose of such surveillance might be to protect senators from cyber threats and foreign intelligence, but this is far from clear.
 
For example, Sen. Wyden outlined two breaches that occurred last year, one foreign and one domestic. In the Salt Typhoon hack, Chinese intelligence intercepted the communications of specific senators and their senior staff. The other breach came from the U.S. Department of Justice, which conducted a leak investigation by collecting phone records of Senate staff, including national security advisors to leadership, as well as staff from the Intelligence and Judiciary Committees. Democrats and Republicans were targeted in equal numbers. Sen. Wyden wrote:
 
“Together, these incidents highlight the vulnerability of Senate communications to foreign adversaries, but also to surveillance by federal, state, and local law enforcement. Executive branch surveillance poses a significant threat to the Senate’s independence and the foundational principle of separation of powers … This kind of unchecked surveillance can chill critical oversight activities, undermine confidential communications essential for legislative deliberations, and ultimately erode the legislative branch’s co-equal status.”
 
Perhaps we have, as Elvis sang, suspicious minds. But we find it odd that three major telecoms would all fail to meet their disclosure obligations in a contract with the U.S. Senate unless someone encouraged that failure.


Watching the Watchers: Keeping Your Thoughts Private in the Age of Pervasive Surveillance

5/22/2025

 
Writer Alex Klaushofer reports on a perfectly ordinary development in surveillance – the installation of cameras in the UK’s Sainsbury’s grocery chain to ensure that every customer scans every item.
 
This prompted Klaushofer to think back to her experience in Albania, which is still dealing with the psychological toll of its communist past, when one in three people in the capital worked for the secret police. She writes in the British Spectator:
 
“The poverty and under-development of Albania thirty years after the collapse of the regime were obvious to me. But I was puzzled by the behavior of some of the Albanians I got to know; there was a guardedness and often an indirect way of talking. Then Ana Stakaj, women’s program manager for the Mary Ward Loreto Foundation, explained the psychological effects of surveillance and it started to make sense.
 
“‘Fear, and poverty and isolation closed the mind, causing it to go in a circle and malfunction,’ she told me. ‘In communism, people were forced even to spy on their brother, and the wife on their husband. So they learned to keep things private and secret, especially thoughts: your thoughts are always secret.’
 
“I wonder whether we’ve learnt the lessons offered by the authoritarian regimes of the last century: or the living lesson provided by China’s tech-authoritarianism. Do we really understand where using all this new technology so freely is taking us?”


Is Your AI Therapist a Mole for the Surveillance State?

5/16/2025

 

“It’s Delusional Not to be Paranoid”

With few exceptions, conversations with mental health professionals are protected as privileged (and therefore private) communication.
 
Unless your therapist is a chatbot. In that case, conversations are no more sacrosanct than a web search or any other AI chat log; with a warrant, law enforcement can access them for specific investigations. And of course, agencies like the NSA don’t even feel compelled to bother with the warrant part.
 
And if you think you’re protected by encryption, think again, says Adi Robertson in The Verge. Chatting with friends using encrypted apps is one thing. Chatting with an AI on a major platform doesn’t protect you from algorithms that are designed to alert the company to sensitive topics.
 
In the current age of endless fascination with AI, asks Robertson, what would prevent any government agency from redefining what constitutes “sensitive” based on politics alone? Broach the wrong topics with your chatbot therapist and you might discover that someone has leaked your conversation to social media for public shaming. Or that the FBI is at your door at 4 a.m. with a battering ram.
 
Chatbots aren’t truly private any more than email is. Recall the conventional wisdom from the 1990s that advised people to think of electronic communication as the equivalent of a postcard. If you wouldn’t want to write something on a postcard for fear of it being discovered, then it shouldn’t go in an email – or in this case, a chat. We would all do well to heed Adi Robertson’s admonition that when it comes to privacy, we have an alarming level of learned helplessness.
 
“The private and personal nature of chatbots makes them a massive, emerging privacy threat … At a certain point, it’s delusional not to be paranoid.”
 
But there’s another key difference between AI therapists and carbon-based ones: AI therapists aren’t real. They are merely a way for profit-driven companies to learn more about us. Yes, Virginia, they’re in it for the money. To quote Zuckerberg himself, “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.” And anyone who thinks compelling isn’t code for profitable in that sentence should consider getting a therapist.
 
A real one.


Meta’s AI Chatbot a New Step Toward a Surveillance Society

5/13/2025

 
We’re not surprised – and we’re sure you aren’t either – to learn that new tech rollouts from Meta and other Big Tech companies voraciously consume our personal data. This is especially true of new services that rely on artificial intelligence. Unlike traditional software programs, AI requires data – lots and lots of our personal data – to continuously learn and improve.
 
If the use of your data bothers you – and it should – then it’s time to wise up and opt out to the extent possible. Of course, opting out is becoming increasingly difficult to do now that Meta has launched its own AI chatbot to accompany its third-generation smart glasses. Based on reporting from Gizmodo and the Washington Post, here’s what we know so far:

  • Users no longer have the ability to keep voice recordings from being stored on Meta’s servers, where they “may be used to improve AI.”
  • If you don’t want something stored and used by Meta, you have to manually delete it.
  • Undeleted recordings are kept by Meta for one year before expiring.
  • The smart glasses’ camera is always on unless you manually disable the “Hey Meta” feature.
  • If you somehow manage to save photos and videos captured by your smart glasses only on your phone’s camera roll, then those won’t be uploaded and used for training.
  • By default, Meta’s AI app remembers and stores everything you say in a “Memory” file, so that it can learn more about you (and feed the AI algorithms). Theoretically, the file can be located and deleted. No wonder Meta’s AI Terms of Service says, “Do not share information that you don’t want the AIs to use and retain such as account identifiers, passwords, financial information, or other sensitive information.”
  • Bonus tip: if you happen to know that someone is an Illinois or Texas resident, by using Meta’s products you’ve already implicitly agreed not to upload their image (unless you’re legally authorized to do so).

None of the tech giants is guiltless when it comes to data privacy, but Meta is increasingly the pioneer of privacy compromise. Culture and technology writer John Mac Ghlionn is concerned that Zuckerberg’s new products and policies presage a world of automatic and thoroughgoing surveillance, in which we are constantly spied on by the camera-equipped glasses of the people around us.
 
Mac Ghlionn writes:
​
“These glasses are not just watching the world. They are interpreting, filtering and rewriting it with the full force of Meta’s algorithms behind the lens. And if you think you’re safe just because you’re not wearing a pair, think again, because the people who wear them will inevitably point them in your direction.
“You will be captured, analyzed and logged, whether you like it or not.”
 
But in the end, unlike illicit government surveillance, most commercial-sector incursions on our personal privacy are voluntary by nature. Camera-equipped smart glasses have the potential to upend that equation.
 
Online, we can still to some degree reduce our privacy exposure through what we agree to, even if it means parsing those long, hard-to-understand Terms of Service. It is still your choice what to click on. So, as the Grail Knight told Indiana Jones in The Last Crusade, “Choose wisely.”
 
You should also learn to recognize Meta’s Ray-Bans and their spy eyes.


How Police Can Use Your Car to Spy on You

5/5/2025

 
We reported in February that Texas Attorney General Ken Paxton is suing General Motors over a long-running, for-profit consumer data collection scheme it hatched with insurance companies. Now Wired’s Dell Cameron reveals that automakers may be doing even more with your data, sharing it with law enforcement – sometimes with a proper warrant, often without.
 
So you may be getting way more than you bargained for when you subscribe to your new vehicle’s optional services. In effect, your vehicle is spying on you by reporting your location to cell towers. The more subscription services you sign up for, the more data they collect. And in some cases, reports Wired, cars are still connecting with cell towers even after buyers decline subscriptions.
 
All of that data can easily be passed to law enforcement. There are no set standards as to who gives what to whom and when. When authorities ask companies to share pinged driver data, the answers range from “Sure! Would you like fries with that?” to “Come back with a subpoena,” to “Get a warrant.” For its part, GM now requires a court order before police can access customers’ location data. But the buck can also be passed to the cell service providers, where the protocols are equally opaque. When Wired’s Cameron asked the various parties involved what their policies were, he was frequently met with the sound of crickets.
 
Author John Mac Ghlionn sums up the state of automotive privacy: “Your car, once a symbol of independence, could soon be ratting you out to the authorities and even your insurance company.”
 
It’s probably time to update “could soon be” to “is.”
 
This technology gives police the ability to cast a wide dragnet to scoop up massive amounts of personal data, with little interference from pesky constitutional checks like the Fourth Amendment. Law enforcement agencies of all stripes claim their own compelling rights to collect and search through such data dumps to find the one or two criminals they’re looking for, needle-like, in that haystack of innocent people’s information. Since your driving data can be sold to data brokers, it is also likely being purchased by the FBI, IRS, and a host of other federal agencies that buy and warrantlessly inspect consumer data.
 
Just over a year ago, Sens. Ed Markey (D-MA) and Ron Wyden (D-OR) fired off a letter to the chair of the FTC to demand more clarity about this dragnet approach. Caught with their hand in the cookie jar thanks to the resulting inquiry, GM agreed to a five-year hiatus on selling driver data to consumer reporting agencies. Where that leaves us with the police, as the Wired article reports, often remains an open question.
 
In the meantime, consider adjusting your car’s privacy settings and opt-outs. The more drivers who take these steps, the more clearly automakers, service providers, and law enforcement agencies will get the message.


A New Concern: Privately Funded License-Plate Readers in LA

5/4/2025

 
We’ve covered automated license plate reader (ALPR) software nearly 20 times in the last few years. That we are doing so again is a reminder that this invasive technology continues to proliferate.
 
In the latest twist, an affluent LA community bought its own license-plate readers, gifted them to the Police Foundation, and, with approval from the City Council and the Police Commission, handed them to the LAPD. There was a proviso – that they only be used in said well-off LA community.
 
Turns out the LAPD didn’t appreciate being told where to use ALPR tech and which brand to use. The head of the department’s Information Technology Bureau told the media that law enforcement agencies should be able to use plate reader technology as they see fit and should own and control the data collected. This seems more about turf than principle, given that the LAPD already has thousands of plate-reading cameras in use.
 
This case brings a new question to an already intense debate. Should the well-connected be able to contract with local police to indiscriminately spy on masses of drivers, looking for those “who aren’t from around here”?
 
It is concerning enough that the LAPD has already built up one of the nation’s largest ALPR networks. The larger worry is that for-profit startups like Flock Safety are trying to corner the market for this technology nationwide, doing so through opaque agreements with law enforcement agencies that are impermeable to public scrutiny and oversight.
 
As with most surveillance tech, there are cases that justify its use. But these legitimate instances tend to be relatively few in number and should be carried out with transparency in mind and oversight engaged. That’s a far cry from the “dragnet surveillance” approach currently in place, in which the movements of millions of citizens who have done nothing wrong are tracked and stored in public and private databases for years at a time, all without a warrant or individual consent.


That’s No Hydrangea, It’s a Camera!

5/1/2025

 

Time to Wise Up to High Tech Burglary​

It might be time – and we can’t believe we’re typing this – to check your potted plants and hedges. If you don’t recognize that oddly shaped topiary in between the rhododendron and the geranium, it could be, well, a plant (as in a device placed there to spy on you).
 
As we reported before, a new trend is blooming in larceny: burglars hiding cameras on properties in order to learn the habits of residents. Take a look at this recent report from KABC in Los Angeles.
 
Similar instances have been linked to visitors from South America and hence are referred to as “burglary tourism.” But in reality, it’s just as much a home-grown problem. (No more gardening puns, we promise.) In the end, the source of the violation is irrelevant. What matters is that we’re dealing with some relatively sophisticated criminals.
 
And what matters more is how to protect yourself. Here’s some advice:
 
  1. A rose by any other name: Leaves, grass, rocks, flowers – all of these have been used as disguises for hidden spy cams. Fortunately, on close inspection they generally reveal themselves as the clumsy fakes they are. The intent behind them is to blend into your peripheral vision, not to fool a botanist. So, plan a morning coffee date this weekend. Just you and the shrubs and a little fresh air.
 
  2. A little night music: Switch to decaf (or pour your nightcap into a shatterproof tumbler), turn out the lights, and have a look around your property. In the KABC segment above, intended victim George Nguyen noticed a flashing light while watering in the evening. We told you these aren’t necessarily sophisticated schemes. While not all hidden cameras come with victim-friendly giveaways like lights, a significant number do. By the way, if you do a night-time walk-around, please notify your neighbors ahead of time so they don’t call the cops. Bonus tips: avoid dark clothing and ski masks.
 
  3. Like a good neighbor: Speaking of neighbors, don’t assume the camera is on your property. The best views of your place may be across the street or courtyard. Of course, be sure to explain yourself before looking in the hedges next door, especially if you live in Texas. The more you and your neighbors can work together on this, the better.
 
  4. Signed, sealed, and re-routed: If you’re out of town a lot, and your dropped-off packages linger, re-think your delivery strategy. Why make it obvious that no one’s home? Re-route deliveries to locker/pickup locations or to a trusted neighbor who’s always home.
 
CNET offers some additional guidance on how to thwart this high-tech thievery, including installing a video doorbell or a camera with audio that lets you see (and ask annoying questions) in real time.
 
Finally, if you do discover a hidden camera spying on you or your neighbors, call the police.


How Facial Recognition Technology Criminalizes Peaceful Protest

4/29/2025

 
Today, Hungary is ostensibly free, a democratic state in a union of democratic states. But something is rotten in Budapest. Prime Minister Viktor Orbán has been steadily fashioning a monoculture since his return to power 15 years ago, running afoul of European Union policies and democratic norms along the way. The most recent infraction is multifaceted, and it involves the use of facial recognition to target peaceful protesters for criminal prosecution.
 
In March, Orbán’s subservient parliament railroaded the opposition and banned public gatherings of gay rights activists. With the stroke of a pen, Pride gatherings and related pro-gay rights protests were suddenly illegal. A month later, these crackdowns were enshrined in the country’s constitution (showing why America’s founders were wise to make the U.S. Constitution so notoriously difficult to amend).
 
As in Putin’s Russia, the justification for this crackdown is that it’s necessary to protect children from “sexual propaganda” – even though we are talking about peaceful protests conducted by adults in city centers. However you feel about Pride parades, most Hungary watchers believe the prime minister needs a cultural scapegoat to rally his base in advance of next year’s elections.
 
Hungary represents a turning point in the rise of the modern surveillance state in a developed country. Beyond the infringement of basic rights, it includes a chilling new embrace of facial recognition technology – specifically, to identify Pride participants (now officially designated as criminals) or likewise pick out faces from among the tens of thousands who are sure to illegally protest these new measures. At the moment, the punishment for such unconstitutional behavior is a fine of up to €500. Organizers, however, can be imprisoned for up to a year. But can even more draconian punishments be far behind?
 
If you’re wondering how Hungary’s democratic partners in the European Union are reacting to all of this, the answer is not well. And it’s also raising important questions about the efficacy of the EU’s AI regulations in general (a debate about loopholes and guardrails that merits a separate discussion).
 
For now, though, Americans should heed the cautionary warning of Hungary’s use of facial recognition software. Future uses of the technology here could target the leaders of a MAGA or a Black Lives Matter protest. Facial recognition scans can pinpoint individuals, spotting a single face in a crowd, and give regimes the ability to come back later to arrest and persecute on a scale only Orwell could have conceived. All of this is enhanced by the unholy combination of data analytics, advanced algorithms, unprecedented computing power, and now generative AI.
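 
To see how low the technical barrier has fallen, consider how little code a crowd scan takes. What follows is a minimal sketch using the open-source face_recognition Python library; the filenames and the 0.6 match tolerance are our own illustrative assumptions, not details of the Hungarian system or of any deployment described here.
 
    # A minimal sketch of matching one face against a crowd photo – illustration only.
    # Requires the open-source face_recognition library (pip install face_recognition).
    # "target.jpg" and "crowd.jpg" are hypothetical filenames.
    import face_recognition

    # One reference photo of the person being sought.
    target = face_recognition.load_image_file("target.jpg")
    target_encoding = face_recognition.face_encodings(target)[0]

    # One wide shot of a crowd.
    crowd = face_recognition.load_image_file("crowd.jpg")
    locations = face_recognition.face_locations(crowd)
    encodings = face_recognition.face_encodings(crowd, known_face_locations=locations)

    # Flag every face in the crowd that falls within the match tolerance.
    for location, encoding in zip(locations, encodings):
        if face_recognition.compare_faces([target_encoding], encoding, tolerance=0.6)[0]:
            print("Possible match at (top, right, bottom, left):", location)
 
That is the entire barrier to entry: a laptop, one photograph, and a dozen lines of code. Everything else – the watchlists, the camera networks, the arrests – is a matter of policy.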
 
The uncomfortable truth of the modern era is inescapable: The development and deployment of modern surveillance has gone hand in hand with modern authoritarianism, from Russia to China and Iran. Just imagine what might have happened if J. Edgar Hoover had access to facial recognition tech and AI. We imagine it would have looked like Orbán’s dystopian democracy.
 
Budapest Pride is not backing down, celebrating its 30th anniversary in a public demonstration in June. The world will be watching to see how this technology is used.


AI and Data Consolidation Is Supercharging Surveillance

4/28/2025

 
In Star Wars lore, it was the democratic, peace-loving Republic that built the first fleet of Star Destroyers. But the fleet was quickly repurposed for evil after the Republic fell. What was once a defensive force for good became a heavy-handed tool of occupation and terror.
 
In a galaxy closer to home, imagine the development of a fully integrated civilian computer system designed to help a technological democracy of 345 million people operate smoothly. In the early 21st century, successive governments on both the right and left embraced the idea that “data is the new oil” and began digitizing records and computerizing analog processes. Generative artificial intelligence, vast increases in computing power, and the rise of unregulated data brokers made it possible to create a single database containing the personal information and history of every citizen, readily available to federal agencies.
 
At first, the system worked as advertised and made life easier for everyone – streamlining tax filing, improving public service access, facilitating healthcare management, etc. But sufficient guardrails were never established, allowing the repurposing of the system into a powerful surveillance tool and mechanism of control.
 
This scenario is now on the brink of becoming historical fact rather than cinematic fiction.
 
“Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance,” warns Indiana University’s Nicole Bennett in a recent editorial for The Conversation. And if you like your social critiques with a side of irony, the Justice Department agreed with her in its December 2024 Artificial Intelligence and Criminal Justice report, which concluded that the AI revolution is a two-edged sword: potentially a driver of valuable new tools, but one whose use must be carefully governed.
 
The Justice Department said that AI data management must be “grounded in enduring values. Indeed, AI governance in this space must account for civil rights and civil liberties just as much as technical considerations such as data quality and data security.”
 
Yet the government is proceeding at breakneck speed to consolidate disparate databases and supercharge federal agencies with new and largely opaque AI tools, often acquired through proprietary corporate partnerships that currently operate outside the bounds of public scrutiny.
 
Anthony Kimery of Biometric Update has described the shift as a new “arms race” and fears that it augurs “more than a technological transformation. It is a structural reconfiguration of power, where surveillance becomes ambient, discretion becomes algorithmic, and accountability becomes elusive.”
 
The Galactic Republic had the Force to help it eventually set things right. We have the Fourth – the Fourth Amendment, that is – and the rest of the Bill of Rights. But whether these analog bulwarks will hold in the digital age remains to be seen. To quote Kimery again, we are “a society on the brink of digital authoritarianism,” where “democratic values risk being redefined by the logic of surveillance.”


What the Leaking of 21 Million Employee Screenshots Tells Us About the Threat of Worker Surveillance Apps

4/28/2025

 
In the late 19th century, American business embraced the management philosophy of Frederick Winslow Taylor, author of The Principles of Scientific Management. He wrote: “In the past the man has been first; in the future the system must be first.”

So managers put their factory systems first by standardizing processes and performing time-and-motion studies with a stopwatch to measure the efficiency of workers’ every action. Nineteenth-century workers, who were never first, became last.

Now intrusive surveillance technology is bringing this management philosophy to the knowledge economy. This entails not just the application of reductionism to information work, but the gross violation of employee privacy.

This was brought home when Paulina Okunyté of Cybernews reported on Thursday that WorkComposer – an employee surveillance app that measures productivity by logging activity and capturing regular screenshots of employees’ screens – left over 21 million of those images exposed in an unsecured bucket in Amazon’s cloud service.

WorkComposer also logs keystrokes and how much time an employee spends on an app. As a result, usernames and passwords that are visible in screenshots might enable the hijacking of accounts and breaches of businesses around the world.

“Emails, documents, and projects meant for internal eyes only are now fair game for anyone with an internet connection,” Okunyté writes.
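 
Okunyté’s phrase deserves to be taken literally. A cloud storage bucket left publicly listable requires no hacking at all: its contents can be enumerated with a few lines of standard-library Python. Here is a minimal sketch, assuming a hypothetical bucket name rather than the one from the report:
 
    # Minimal sketch: enumerating a publicly listable S3-style storage bucket.
    # "exposed-screenshots-example" is a hypothetical name, not the actual bucket.
    import urllib.request
    import xml.etree.ElementTree as ET

    BUCKET_URL = "https://exposed-screenshots-example.s3.amazonaws.com/"

    # An unauthenticated GET on the bucket root returns an XML listing of its objects.
    with urllib.request.urlopen(BUCKET_URL) as response:
        tree = ET.parse(response)

    # Amazon's listing format declares this XML namespace.
    ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
    for key in tree.getroot().findall("s3:Contents/s3:Key", ns):
        print(key.text)
 
No credentials, no exploit, no special tools – just a URL.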

With 21 million images to work with, there is plenty of material for cyberthieves and phishing scammers to victimize the people who work for companies that use WorkComposer software.

This incident exposes the blinkered philosophy behind employee surveillance. As we have reported, there are measurable psychological costs – and likely productivity costs – when people know that they are being constantly watched. Vanessa Taylor of Gizmodo reports that according to a 2023 study by the American Psychological Association, 56 percent of digitally surveilled workers feel tense or stressed at work compared to 40 percent of those who are not.

We also question the usefulness of such pervasive tracking and surveillance. Efficiency is a commendable goal. Surely there are broader and less intrusive ways to measure employee productivity. Such close monitoring runs the risk of focusing workers on meeting the metrics instead of bringing creativity or bursts of productivity to their jobs. Allowing people to take a break every hour to listen to a song on earbuds might, in the long run, make for better results and greater efficiency. 
​
Just don’t make a funny face or sing along; the whole world might see you.
