Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

Humans Are Peering Through the Eyes of Robots

11/10/2025

 

“We shall describe devices which appear to move of their own accord.”

​- Hero of Alexandria, Pneumatica

Image courtesy of 1X.
​Those of a certain age might remember the Domesticon, a line of 22nd century robotic butlers from the movie Sleeper. To avoid being caught by the authoritarian state, Woody Allen’s character Miles Monroe pretends to be a Domesticon during a dinner party. The scene is equal parts slapstick and satire. Miles’ cover is blown when he tries to help the host but acts too human in the process.

The Wall Street Journal’s Joanna Stern recently found that an actual would-be Domesticon is not entirely dissimilar to the fictional version. 1X Technologies is beta testing NEO, the $20,000 “home humanoid” it hopes to bring to market in 2026. Stern recently saw it in action for the second time and discovered a decidedly Sleeper-like connection: NEO is part human.

Not organically, like a cyborg – so far the full integration of creature and computer is limited to cockroaches. No, NEO is remotely human, as in there’s a remote human operator back at company HQ, “potentially peering through the robot’s camera eyes to get chores done.”

Now, how’d you like to have that job? But as 1X CEO Bernt Børnich told Stern: “If you buy this product, it is because you’re okay with that social contract. If we don’t have your data, we can’t make the product better.”

Such transparency is refreshing. It is also a reminder of the Faustian bargain we strike to make artificial intelligence work – a bargain paid at the expense of our personal privacy. AI is unlike any software that came before in that it requires gargantuan amounts of data to learn its jobs. As Stern notes, “It needs data from us – and from our homes.” A world model, in other words, centered on us and the private things we do at home.

We expect these machines to be capable of fully human, fully competent, fully safe behaviors – all while being fully autonomous. None of that will happen without the ability to collect and learn from the data of day-to-day human lives. There are no shortcuts, either. When 1X let Stern drive NEO using one of the VR headsets the company’s human operators wear, she nearly dislocated its arm, and the robot left for the shop in a wheelchair. A cross between “a fencing instructor and a Lululemon mannequin,” as Stern describes it, NEO had neither’s dexterity nor style.

And during the first meeting the reporter had with NEO earlier in the year, the robot managed to faceplant.

“No way that thing is coming near my kids or dog,” she remembers thinking. Domestic robotics remains in its infancy – literally in Stern’s view. “The next few years won’t be about owning a capable robot; they’ll be about raising one.” Like a toddler, humanoid AI can’t learn without doing, watching, and remembering.

1X says users will be able to set “no-go” zones, blur faces in the video feed, and that human operators back at HQ will not connect unless invited to do so. CEO Børnich told Stern that such “teleoperation” was a lot like having a house cleaner. “Last I checked,” Stern responded wryly, “my house cleaner doesn’t wear a camera or beam my data back to a corporation.”

A punchline of sorts seems appropriate here: We’re big fans of the ethical AI principle that says always have a human in the loop – “but this is ridiculous!” 

Stern’s forthcoming book, I Am Not a Robot: My Year Using AI to Do (Almost) Everything and Replace (Almost) Everyone, is now available for pre-order. Readers can expect more dirt on NEO.
​
Unless he learns to vacuum first.


One Nation Under Watch: How Borders Went from Being Physical to Digital

11/10/2025

 

​“If you want to keep a secret, you must also hide it from yourself.”

​- George Orwell

​Imagine a dish called Surveillance Stew. It’s served anytime multiple privacy-threatening technologies come together, rather like a witch’s brew of bad ideas. It's best served cold.

The latest Surveillance Stew recipe includes location data, social media, and facial recognition. Nicole Bennett, who studies such things, writes in The Conversation that this particular concoction represents a turning point: borders are no longer merely physical but digital. The government has long held that the border is a special zone where the Fourth Amendment has little traction; now it is expanding border rules to the rest of America.

Immigration and Customs Enforcement (ICE) has put out a call to purchase a comprehensive social media monitoring system. At first glance, Bennett notes, it seems merely an expansion of monitoring programs that already exist. But it’s the structure of what’s being proposed that she finds new, expansive, and deeply concerning. “ICE,” she writes, “is building a public-private surveillance loop that transforms everyday online activity into potential evidence.”

The base stock of Surveillance Stew came with Palantir’s development of a national database that could easily be repurposed into a federal surveillance system. Add ICE’s social media monitoring function and the already-thoroughgoing Palantir system becomes “a growing web of license plate scans, utility records, property data and biometrics,” says Bennett, “creating what is effectively a searchable portrait of a person’s life.”

Such a technology gumbo seems less a method for investigating individual criminal cases than a sweeping supposition that any person anywhere in the United States could, at any moment, be a “criminal.” It’s a dragnet, says Wired’s Andrew Couts, noting that 65 percent of ICE detainees had no criminal convictions. Dragnets are inimical to privacy and corrosive to the spirit of the Constitution.

Traditional, law-based approaches to enforcement are one thing – and enforcement, of course, is ICE’s necessary job. The problem now, warns Bennett, is that “enforcement increasingly happens through data correlations” rather than the gathering of hard evidence.

We agree with Bennett's conclusion that these sorts of “guilt by digitization” approaches fly in the face of constitutional guardrails like due process and protection from warrantless searches. To quote Wired’s Couts again, “It might be ICE using it today, but you can imagine a situation where a police officer is standing on a corner and just pointing his phone at everybody, trying to catch a criminal.”

The existence of Palantir’s hub makes it inevitable that ICE’s expanded monitoring capability will migrate to other agencies – from the FBI to the IRS. And when that happens, what ICE does to illegal immigrants can just as easily be done to American citizens – by any government entity, for any reason.
​
When our daily lives are converted into zeroes and ones, the authorities can draw “borders” wherever they want.


Just What We Need – Hack Makes Recordings by Wearable Glasses Undetectable

11/3/2025

 

“Privacy is not just about hiding things or keeping secret, it’s about controlling who has access to your life.”

​- Roger Spitz

​Here’s a quick news update on one of the privacy stories of the year: Meta’s Ray-Ban smartglasses. Joseph Cox and Jason Koebler of 404 Media told the story of Bong Kim, a hobbyist who engineered a way to disable the LED light intended to shine conspicuously whenever Meta’s glasses are recording or taking photos.
 
Let’s be clear: Meta has nothing to do with hacks like this one. The company tried to prevent privacy violations by designing the glasses so that covering the LED light disables the recording function. So we’ll skip the “we told you so” part, where we question the wisdom of building a modern Prometheus (powered by an app and AI, of course) only to clutch pearls when it gets compromised – as it now has been.
 
We’ll also refrain from asking what could possibly go wrong. But here’s one possibility out of 10,000 would-be privacy violations: Imagine a stalker no longer having to worry about an LED light giving him away. Or industrial spies. Or actual spies. Or the colleague at work tricking you into saying something that will get you fired.
 
From a privacy standpoint, wearables (including smartglasses) are a non-starter, a set of technologies primarily in search of a hack. And if you don’t believe that, you probably haven’t been on Reddit lately.
 
According to 404’s reporting, Kim’s modification is advertised on YouTube and costs just $60 (though it’s unclear whether shipping is included). That’s what your privacy is worth these days.
 
So what can you do? At the very least, familiarize yourself with the look of these new wearable glasses from a host of companies. And quietly read yourself a Miranda warning: “anything you say can and will be used against you in a court of law.” Or maybe just in a meeting with HR.


Bay State Drivers Can Now Be Tracked by 7,000 Flock Customers

11/3/2025

 

“There is something predatory in the act of taking a picture.”

- Susan Sontag

​Search our news blog for "Flock" and you'll hit the jackpot. This company has been a consistent source of concern for privacy watchdogs.
 
Just last week, the ACLU’s Jay Stanley summarized the results of a detailed Massachusetts open-records investigation. Thanks to Flock’s contracts with more than 40 Massachusetts police departments, Bay State drivers can now be tracked by 7,000 of the company’s customers – “in real time, without a warrant, probable cause, or even reasonable suspicion of wrongdoing.” To be clear, that surveillance of Massachusetts drivers can be conducted from other parts of the country… because why wouldn’t Texas authorities want to know what Massachusetts drivers are up to?
 
This chilling state of affairs is the result of Flock’s boilerplate contract language, which only changes if a police department demands it (most have not). The company’s contracts include an “irrevocable, worldwide, royalty-free, license to use the Customer Generated Data for the purpose of providing Flock Services.”
 
Stanley’s article includes additional anecdotes about Flock’s propensity for over-sharing that suggest the issue goes far beyond Massachusetts. In Virginia, for example, reporters found that “thousands of outside law enforcement agencies searched Virginians’ driving histories over 7 million times in a 12-month period.” As we’ve written before, Virginia is already one of the most surveilled states in the country, thanks largely to vendors like Flock Safety.
 
Consider following the ACLU’s advice for pushing back against this kind of Orwellian oversight. If we don’t say anything, nothing is going to change.


AI Drones Sharpen the Security/Privacy Tradeoff of a Surveillance State

10/30/2025

 
​Flock Safety – the vendor installing license plate readers across the country – is now helping police departments enhance their drone fleets with artificial intelligence. With this surveillance comes improved public safety, but also new threats to privacy and personal freedom.

Police drones are not an exotic trend. From 2018 to 2024, the number of police and sheriff departments with drones rose by 150 percent, bringing the total to about 1,500 drone-equipped departments.

Increasingly, these drones have brains as well as eyes. Rather than requiring a human operator to direct them, a new generation of autonomous drones can work in concert with an officer at the scene. Lieutenant Ryan Sill, Patrol Watch Commander of the police department in Hayward, California, writes in Police 1 News of surveillance vendor Axon’s “One-Click” drone technology for Autonomous Aerial Vehicles (AAVs):

“The future is one where an AAV can be assigned to each officer, deploying from a patrol car, operating independently without the need for a pilot, responding to voice commands, and completing tasks as directed by the officer.”
​

The integration of AI and drone technology is undeniably a boon to public safety. One of the most dangerous police activities – both for officers and the public – is the high-speed pursuit of suspects in cars. Increasingly, suspects in cars and on foot can run all they want; drones will track them wherever they go.
Video: a post shared by Skydio (@skydiohq) on Instagram.

Intelligent drones can also zoom quickly to an accident or crime scene. They can record incidents and respond to situations in ways that assist police departments with too few officers.

But intelligent drones bring with them the likelihood that the information they collect will be abused. Then there is the information that will never be collected at all – by the drones of citizens and journalists – when airspace is cleared for police operations. Earlier this month, the Federal Aviation Administration imposed a 12-day ban on all non-governmental drone flights across much of Chicago, coinciding with the arrival of National Guard troops and federal agents to conduct immigration raids.

The ACLU reports: “This raises the sharp suspicion that it is intended not to ensure the safety of government aircraft, but (along with violence, harassment, and claims of ‘doxing’) is yet another attempt to prevent reporters and citizens from recording the activities of authorities.”

Even more concerning is the emergence of drones that can predict crime.

Malavika Madgula of Sify.com writes about “Dejaview,” a new South Korean technology that “blends AI with real-time CCTV to discern anomalies and patterns in real-life scenarios, allowing it to envisage incidents ranging from drug trafficking to pettier offenses with a sci-fi-esque accuracy rate of 82 percent.”

Knowing that a synthetic brain is watching you for any sign that you might be a criminal is hardly the vibe of a free society. Madgula writes: “It could trigger feelings of heightened self-awareness and unease for even the most innocuous of activities, such as taking a shortcut on your way home or using a cash machine.”
​

Elon Musk famously worried that with AI “we’re summoning the demon.” Law enforcement welcomes the demon because it is enormously useful in protecting communities. But without guardrails to prevent the misuse of this immense collection of our personal movements, activities, and associations, the bargain could turn out to be a Faustian one.


Don’t Look Up: Those Satellites Are Leaking

10/27/2025

 

“To have good data, we need good satellites.”  - Jeff Goodell

​Sigh. As if we didn’t have enough to worry about already. While privacy experts were focusing on the security of undersea fiberoptic cables, government surveillance, and corporate subterfuge, our data is being broadcast unencrypted all around the Earth by satellites.

Satellites are leaky – and it isn’t fuel they’re off-gassing; it’s our personal information. “These signals are just being broadcast to over 40 percent of the Earth at any point in time,” researchers told Wired’s Andy Greenberg and Matt Burgess.

A few years ago, those researchers (at UC San Diego and the University of Maryland) followed up on a whim: Could we eavesdrop on what satellites are broadcasting? The answer was a big fat “yes” – and it took only about $800 in equipment. Their complete findings are detailed in a newly released study. They had assumed, or at least hoped, that they would find very little – that almost every signal would be protected by encryption, the ne plus ultra of privacy protection.

Instead, among the many things they found floating in the ether were:
​
  • Miscellaneous corporate and consumer data (such as phone numbers)
  • Actual voice calls
  • Text messages
  • Industrial communications
  • Decryption keys
  • Even in-flight Wi-Fi data for systems used by 10 different airlines (including users’ in-flight browsing activities).

Researchers also “pulled down a significant collection of unprotected military and law enforcement communications,” including information about some U.S. sea vessels.
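How can anyone tell whether an intercepted payload is actually protected? A rough first-pass heuristic – our illustration here, not necessarily the researchers’ method – is byte entropy: well-encrypted traffic is statistically indistinguishable from uniform random bytes, while cleartext is not. A minimal sketch in Python:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    # Ciphertext approaches 8 bits/byte; protocols, text, and phone numbers
    # sit far lower. (Compressed data is also high-entropy, so a hit here
    # only flags a candidate for closer inspection.)
    return shannon_entropy(payload) >= threshold

# A hypothetical cleartext record scores low and would be flagged as exposed.
sample = b"CALL RECORD: +1-555-0100 -> +1-555-0199 duration=00:04:12"
print(f"{shannon_entropy(sample):.2f} bits/byte; encrypted? {looks_encrypted(sample)}")
```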

The Wired article’s authors are quick to note that the National Security Agency warned about the security of satellite communications more than three years ago.

Will the publication of such research encourage bad actors to take advantage of these weaknesses?

In the short term, perhaps, but the study’s authors are hopeful that various companies will respond like T-Mobile did and immediately get their encryption house in order (a spokesperson noted the issue was not network-wide). Another affected company, Santander Mexico, responded: “We took the report as an opportunity for improvement, implementing measures that reinforce the confidentiality of technical traffic circulating through these links.” (It should be noted that the affected organizations were notified many months prior to the study’s release.)

In the meantime, let’s hope most hackers haven’t renewed their Wired subscriptions.
​
After all, the scale of the problem is enormous. A Johns Hopkins expert told the magazine: “The implications of this aren't just that some poor guy in the desert is using his cell phone tower with an unencrypted backhaul. You could potentially turn this into an attack on anybody, anywhere in the country.”


Is AI Evolving from Helpful Assistant to Permanent Spy?

10/23/2025

 

“Their power derives from memory, and memory is where the risks lie.” - Kevin Frazier and Joshua Joseph

​Here’s a quick news item that will come as a surprise to absolutely no one, except perhaps for hermits who have been living in caves since AI went mainstream in 2022. Two new pieces of reporting, from Stanford and Tech Policy Press, confirm the fresh dangers to privacy emerging from the AI frontier.
 
First to Palo Alto, where researchers evaluated the privacy policies of six frontier AI developers. You can check out the complete analysis, but here are the takeaways from the abstract. Spoiler alert – they’re not a win for privacy:

  • All six AI developers appear to employ their users' chat data to train and improve their models by default.

  • Some retain this data indefinitely.

  • Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users.

  • Four of the six companies examined appear to include children's chat data for model training, as well as customer data from other products.

  • On the whole, developers' privacy policies often lack essential information about their practices, highlighting the need for greater transparency and accountability.
 
The Tech Policy Press interview with experts sheds some light on why “agentic AI” is so dependent on user information. Agentic AI refers to generative AI with the ability to act independently. Generative AI says things. Agentic AI does things. Both are built on the large language models Stanford studied.
 
It’s a logical evolution – think of asking a restaurant chef for a recipe versus having a live-in chef who plans and prepares your meals. But it’s all built on memory. The more AI is allowed to remember about us, the more effective it will be at meeting our asks. “The central tension, then, is between convenience and control,” the experts told Tech Policy Press.
 
We would add that if you think it’s the AI you’re trusting to decide what to remember about your prompts and interests – and what to forget – think again. We’re really talking about trusting companies like the ones in the Stanford study, because they’ll be the ones licensing the AI. As of now, then, the fate of your data ultimately rests in the hands of others. From the interview:
 
“Who, exactly, can access your agent’s memories – just you, or also the lab that designed it, a future employer who pays for integration, or even third-party developers who build on top of the agent’s platform?”
 
In short, these experts say, the stakes are these:

“Deciding what should be remembered is not just a question of personal preference; it’s a question of governance. Without careful design and clear rules, we risk creating agents whose memories become less like a helpful assistant and more like a permanent surveillance file.”

We close with a refrain that will be familiar to our readers: now is the time for common-sense laws that privilege personal privacy. Without them, these experts warn, AI will become a tool of enclosure rather than empowerment.


Altamides – The New Spyware that Can Infiltrate Your Phone Without a Trace

10/20/2025

 
​We’ve long reported on Pegasus, the prolific spyware that allows attackers to access the calls, texts, emails, and images on a target’s smartphone. Worse, Pegasus can turn on a phone’s camera and microphone, transforming it into a 24/7 spying device that the victim helpfully takes from place to place.
  • This technology was used by Saudi intelligence to track the soon-to-be murdered journalist Jamal Khashoggi, helped cartels in Mexico to target journalists for assassination, and was implicated in political spying scandals from India to Spain.
 
  • One of the most insidious aspects of Pegasus is that it is “zero-click” malware, meaning it can be remotely installed on a phone and the user doesn’t have to fall for a phishing scam or commit some other act of poor digital hygiene.

But Pegasus has a flaw – digitally savvy victims may be tipped off by a phone’s unusually high data usage, overheating, quick battery drain, and unexpected restarts. If you suspect that Pegasus has been planted on your smartphone, you can scan for it with the Mobile Verification Toolkit developed by Amnesty International’s Security Lab.

Unfortunately, evolution works on spyware as it did on dinosaurs, creating new predators with enhanced stealth and devastating lethality.

Enter First Wap’s Altamides. Based in Jakarta, Indonesia, First Wap sells technology that does what Pegasus does – but without installing malware or leaving digital traces. It tracks people, Mother Jones reports, by exploiting archaic telephone networks designed without security in mind, following users’ movements, listening in on their calls, and extracting their text messages. Recent versions can even penetrate encrypted messaging apps.

  • Victims of such surveillance reportedly include Blackwater founder Erik Prince, Google engineers, the actor Jared Leto, and the wife of former Syrian dictator Bashar al-Assad. Mother Jones also found “hundreds of people with no public profile swept up in the dragnet; a softball coach in Hawaii, a restaurateur in Connecticut, an event planner based in Chicago.”

Who has purchased this surveillance weapon?

Lighthouse Reports, a coalition of media organizations, performed a sophisticated sting operation in which a journalist posed at a Prague sales conference as a shady buyer for an African mining concession. The journalist said he was looking for a way to identify, profile, and track environmental activists.

The salesman replied: “If you are holding an Austrian passport, like me, I am not even allowed to know about the project, because otherwise I can go to prison.”

The salesman, who (irony alert) was secretly videotaped by the journalist, added: “So that’s why such a deal, for example, we make it through Jakarta, with the signature coming from our Indian general manager.”

When the undercover journalist came back for another meeting, he captured senior First Wap executives on tape discussing workarounds – Niger-to-Indonesia bank transfers – for selling the technology to individuals under international sanctions.

Click below for a short film about this undercover sting.
U.S. Sen. Ron Wyden (D-OR) told Mother Jones that this story only underscores “the glaring weaknesses in our phone system, which the government and phone companies have failed dismally to address.”


Flock Partners with Ring – “It’s a Warrantless Day in the Neighborhood!”

10/20/2025

 
Amazon’s Ring doorbell cameras are the always-on eyes of the American neighborhood. Owners are free to provide police with images related to suspected crimes, whether a porch pirate or a prowling burglar. But they can also share images of a lawful protest, or volunteer footage against a targeted individual – no warrant required.
 
Ring is one link in the expanding national chain of visual surveillance. Add closed-circuit television systems and the police-monitored surveillance cameras sold by the tech company Flock Safety, and all the elements of a national surveillance system fall into place.
 
Now, one more element has just been secured with a new partnership between Ring and Flock. Elissa Welle of The Verge reports that “local U.S. law enforcement agencies that use Flock’s platforms Nova or FlockOS can request video footage from Ring users through the Neighbors app.”
 
There is some good news: Ring says that law enforcement requests must include details about the alleged crime and its time and location. Individual users still get to decide for themselves whether to respond to a police request for video. And law enforcement cannot see who does or does not respond, limiting the potential for pressure tactics. Still, the integration of Flock – which sells automated license plate readers capable of tracking cars nationwide – into the doorbells of America should be a matter of deep concern.
 
Sen. Ron Wyden (D-OR) told Flock’s management in a letter:
 
“I now believe that abuses of your product are not only likely but inevitable, and that Flock is unable and uninterested in preventing them. In my view, local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”
 
The partnership of Flock with Ring is even more troubling in light of Amazon’s reversal of reforms it made in 2024. The company had previously pulled the app feature that allowed police to remotely ask for and obtain footage from Ring users. Now Ring is reinstating the feature, once again making it easy for police to solicit video from homeowners without a warrant. New policies will also allow police to request live-stream access.
 
Flock does not currently apply facial recognition to its images. The Electronic Frontier Foundation, however, reports that internal Ring documents show an appetite to integrate artificial intelligence – including, perhaps, video analytics and facial recognition software – into its product.
 
Step by step, corporations are working with each other and with government to link technologies to create a national surveillance system. What may be used for commendable purposes today can be used for any purpose tomorrow.


Wi-Fi Turns Spy-Fi

10/15/2025

 

“We are profoundly bad at asking ourselves how the things we build could be misused.”

​- Brianna Wu

​In terms of surveillance tech, Wi-Fi is having its moment. This is the fourth time in 2025 we’ve covered the growth of an invasive concept that three years ago seemed remote, even arcane: Wi-Fi sensing.

Increasingly, Wi-Fi turned Spy-Fi is ready for prime time. The Karlsruhe Institute of Technology (KIT), a German research university, found that Wi-Fi networks can use their radio signals to identify people. Any Wi-Fi network can be made to do this, no fancy hardware required. The people being identified don’t have to be logged into these networks, either. In fact, they don’t even need to carry electronic devices for this subterfuge to work; it’s enough simply to be present, minding one’s own business, within range of a given Wi-Fi router.

And given the ubiquity of Wi-Fi networks, that leaves very few places to hide. “This technology turns every router into a potential means for surveillance,” warns security/privacy expert Julian Todt of KIT. “If you regularly pass by a café that operates a Wi-Fi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.” (Or hackers, autocrats, or foreign agents.)

How does it work? By exploiting a standard feature and turning it into a vulnerability – a move that must be taught in Bad Actor 101 at Spy School. In this case, connected devices regularly send feedback signals to Wi-Fi routers. According to the researchers, these signals are frequently unencrypted – which means anyone nearby can capture them. Then, with the right know-how, that data can be converted into images.

Not photos exactly, but close enough – analogous to ultrasound, sonar, or radar. The more devices that are connected to a given Wi-Fi network, the fuller the picture provided – height, shape, gestures, gait, hats, purses, and more. With a little help from machine learning, our bodies turn out to be uniquely identifiable, not unlike a fingerprint.
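To make the pipeline concrete, here is a deliberately simplified sketch of that last, re-identification step – our illustration, not KIT’s code. It replaces the radio-capture stage with synthetic per-subcarrier channel readings and trains an off-the-shelf classifier to recognize which of three hypothetical people produced each window of measurements:

```python
# Minimal sketch: body-as-fingerprint re-identification from Wi-Fi channel
# features. All data below is synthetic; real channel-state capture is
# hardware-specific and omitted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subcarriers, n_windows, n_people = 64, 200, 3

# Pretend each person perturbs the channel in a characteristic way.
signatures = rng.normal(0.0, 1.0, (n_people, n_subcarriers))
X = np.vstack([sig + rng.normal(0.0, 0.5, (n_windows, n_subcarriers))
               for sig in signatures])          # noisy observations
y = np.repeat(np.arange(n_people), n_windows)   # who walked past

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"Re-identification accuracy (synthetic): {clf.score(X_te, y_te):.2f}")
```

On real channel-state data the feature engineering is far harder, but the punchline holds: once the signals are captured, a generic classifier is enough.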

Are there easier ways to spy on us? Most certainly – CCTV, for example. But what Wi-Fi sensing lacks in ease it makes up for in reach. As technologies go, it’s practically everywhere that humans are. The vast majority of people don’t have CCTV cameras in their homes, but they (or their neighbors) are almost guaranteed to have Wi-Fi.
​
Wherever you’re reading this from, take a moment to see how many Wi-Fi networks your phone detects. If the KIT research proves correct, any one of them could be used to track your movements and determine your identity.


Your Mouse May Have Ears Now, Thanks to AI

10/13/2025

 

The Growing Threat of Side-Channel Attacks

​No, this is not about the brown field mouse you saw in the garage yesterday. We are talking about the high-end laser mouse, common in the gaming world.

Iain Thomson (The Register) reported on a study from UC Irvine, entitled “Invisible Ears at Your Fingertips,” which demonstrates how a modern optical mouse can be exploited to capture human speech. On some surfaces, our voices create vibrations that a supersensitive mouse interprets as movement. Operating systems store such movement data routinely, and it isn’t particularly secure.
​
The researchers found that bad actors could manipulate most operating systems (macOS included) to capture such data using basic malware, run it through a few sophisticated filters (with artificial intelligence), and eventually discern spoken words. While still imperfect, the concept is sound – literally. See (and hear) for yourself in the demo video produced by the researchers.

And it isn’t just voices. Footsteps, coughs, and whatever the person in the room happens to be watching on their phone or computer can all be detected. Keystrokes are especially noteworthy – each one emits a slightly different sound, so this kind of attack could also reveal what someone is typing. (No one designed keystrokes to carry unique audio signatures; that is just physics working against us.)
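The signal-processing core of such an attack is surprisingly plain: a high-polling-rate mouse reports tiny displacements thousands of times per second, and that log can be treated as a crude audio waveform. Here is a hedged sketch of just that step, assuming a hypothetical 8 kHz polling rate and synthetic data rather than the study’s actual pipeline:

```python
# Sketch: turn a mouse displacement log into an audio-like waveform.
# The "recording" is synthetic - a 300 Hz tone buried in sensor noise,
# standing in for desk vibrations caused by speech.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

fs = 8000                   # assumed polling rate of a high-end gaming mouse
t = np.arange(fs * 2) / fs  # two seconds of samples
rng = np.random.default_rng(0)
displacements = 0.02 * np.sin(2 * np.pi * 300 * t) + rng.normal(0, 0.1, t.size)

# Band-pass 80-1000 Hz, roughly the range of voiced-speech vibrations.
sos = butter(4, [80, 1000], btype="bandpass", fs=fs, output="sos")
recovered = sosfiltfilt(sos, displacements)

# Normalize and save; a real attack would hand this to an ML model instead.
wavfile.write("recovered.wav", fs,
              (recovered / np.abs(recovered).max()).astype(np.float32))
```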

As Malwarebytes notes, such hacks are classic examples of side-channel attacks, which steal secrets “not by breaking into software, but by observing physical clues that devices give off during normal use.” Because such information is just a natural byproduct rather than an anomaly, no alarms are set to go off. After all, you don't prepare defenses for attacks you can't imagine in the first place.

The good news is that the UC Irvine researchers have informed 26 manufacturers of vulnerable mouse models about their findings. We take more comfort in that approach than Vice’s tongue-in-cheek recommendation: “To hell with those people who told you to buy a gaming mouse.”
​
But the whole thing leaves us – once again – shaking our heads while wondering aloud, “AI can do that?!” Because if it can, then before long, the sky’s the limit. We need robust policy to keep this burgeoning technology firmly grounded in the public interest. Otherwise, this technology is the Tower of Babel in reverse – making varied human communications too comprehensible.


The Feds Have Your Number… And Your Location… And a Lot More

10/6/2025

 

“A day-in-the-life profile of individuals based on mined social media data.”
​

- Ellie Quinlan Houghtaling, The New Republic

​You might think that where you go and with whom you meet is your private information. And it is. But now it’s also accessible to the government, with a federal agency purchasing software to track the location of your phone.

Joseph Cox of 404 Media reports that U.S. Immigration and Customs Enforcement (ICE) is buying an “all-in-one” surveillance tool from Penlink to “compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.”

That chilling quote is ICE’s own declaration. Apparently, acquiring Penlink’s proprietary tools is the only way to beat criminals at their own game.

ICE is not taking us down a slippery slope. It is going straight to the gully, discarding the Fourth Amendment’s prohibition against warrantless surveillance. From there, monitoring the movements of the general population is simply an act of political will. As with facial recognition software, notes the Independent’s Sean O’Grady, it is one more example of the “creeping ubiquity of various types of surveillance.”

Indeed, location is but one element of commercial telemetry data (CTD), the industry term for information acquired from cellphone networks, connected vehicles, websites, and more. PPSA readers know that banning the sale of CTD to government agencies is one goal of the bipartisan Fourth Amendment Is Not For Sale Act, which passed the House in the previous Congress.

Collecting and selling CTD is the shady business of the data broker industry, a practice the Federal Trade Commission once tried, meekly, to rein in. Indeed, for one brief shining moment, even ICE previously announced it would stop buying (but continue to use) CTD after the Department of Homeland Security’s own Inspector General found that DHS agencies weren’t giving privacy protections their due.

And yet here we are. As the Electronic Frontier Foundation’s Beryl Lipton recently put it in Forbes:

“This extension and expansion of ICE’s Penlink contract underlines the federal government’s enthusiasm for indiscriminate and warrantless data collection on as many people as possible. We’re still learning about the extent of the government’s growing surveillance apparatus, but tools like Penlink can absolutely assist ICE in turning law-abiding citizens and protestors into targets of the federal government.”
​
These tools are in the hands of ICE today, but they could be in the hands of the FBI, IRS, and other federal agencies in the blink of an eye. Congress should take note of this development when it debates reauthorization of a key surveillance authority – FISA Section 702 – next spring.


Heard on the Street? Our Voices, Apparently

10/6/2025

 

“Don’t eavesdrop on others – you may hear your servant curse you.”
​

- Ecclesiastes 7:21

Image via https://www.flocksafety.com/
​Flock Safety is a frequent PPSA subject (this is our tenth article on the company). But instead of the company’s license-plate reader cameras, today’s discussion was inspired by Flock’s listening device, Raven.

According to Ben Miller of Government Technology, Raven was developed to detect gunshots and other crime-related noises, then activate nearby Flock Falcon cameras and alert authorities. Flock began marketing the Raven-Falcon combo to schools in 2023. The camera integration is meant to be Raven’s primary selling point, giving law enforcement immediate alerts about gunshots, breaking glass, screeching tires, and whatever it's programmed to listen for.

Funny thing – it can also listen for human voices.

Matthew Guariglia of the Electronic Frontier Foundation (EFF) reports that Flock has been touting Raven’s ability to detect screaming and other forms of vocal distress. The obvious implication, of course, is that the product can “listen” to and record human speech. Raven competitor ShotSpotter proved it could be done when its system recorded the words of a dying man in 2014.

Critics, meanwhile, challenge the notion that technologies like Raven and ShotSpotter are good listeners – or even solid policing strategy. ShotSpotter published its own study claiming nearly 97 percent accuracy, though that level required six well-placed (and expensive) sensors in a given area.

Public research tells a different story. Chicago’s Inspector General was highly critical of the technology, finding that “alerts rarely produce evidence of a gun-related crime.” Instead, its use increased stop-and-frisk tactics due to officers’ changed perceptions of the areas where the sensors were deployed. It was deemed not to be worth the $33 million the city had paid for the contract.

Northwestern University’s MacArthur Justice Center published the most comprehensive set of findings to date – claiming that “on an average day, ShotSpotter sends police into these communities [mostly of color] more than 61 times looking for gunfire in vain.” Meanwhile, a National Institute of Justice report last year essentially concluded the technology brought little in terms of meaningful impacts on policing and crime reduction.

And now Raven is joining the audio sensor party – which, as parties go, is turning out to be a veritable Fyre Festival of public safety, judging by the combined testimony of multiple watchdog groups. In addition to those noted above, the list of audio sensor detractors includes the ACLU, the Surveillance Technology Oversight Project, and the Electronic Privacy Information Center. We also recommend EFF’s summary of the entire audio sensor industry.

Yet law enforcement continues to hail these too-good-to-be-true, quick-fix “solutions” to public safety challenges, potentially wasting millions of taxpayer dollars and eschewing much-needed transparency. The boosterism persists despite concerns raised by the very communities this technology purports to protect.
​
Audio-sensing tech capable of being deployed at scale nearly completes the mass surveillance infrastructure needed to destroy our privacy once and for all. After all, it is not a great leap for government to go from listening for screams to eavesdropping on private conversations.


AI Reinvents Surveillance, This Time Without Limits

10/1/2025

 

“We but teach bloody instructions, which, being taught, return to plague the inventor.” - Macbeth

Closed-circuit television (CCTV) has changed very little since its introduction in the 1960s – essentially passive systems that merely display whatever they’re aimed at. Without a human at the other end, no real surveillance takes place.

That was always the flaw in George Orwell’s 1984: it would take as many people to watch the screens as there are people to be watched. And the watchers would have to stay alert all day as they watched people eat breakfast, brush their teeth, and wash their dishes.

Then the ability to digitally store vast amounts of footage made the task of surveillance easier. But now that AI is here, it is proving to be the real game-changer.

The new generation of CCTV security cameras is capable of autonomous surveillance and action. “Watched by AI guards,” boasts ArcadianAI, whose Ranger line of products operates on its own, proactively identifying what it sees as threats and alerting authorities.

It’s largely thanks to recent “advances” in computer vision and vision language models, which speak of “objects,” a fiendishly clever euphemism for anything – bodies, body parts, events, contexts, movements, behaviors, colors, dimensions, distances, sounds, textures. In effect, anything that can be recognized and classified as its own distinct kind of pattern.
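To see how low the bar is, consider a hedged sketch built on an open-source vision-language model – the model, labels, and frame file below are illustrative assumptions, not anything Axis or ArcadianAI ships. The operator simply types whichever “objects” they want found:

```python
# Zero-shot labeling of a camera frame against arbitrary, operator-chosen
# categories. No training is needed: change the labels, change the watchlist.
from transformers import pipeline
from PIL import Image

classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")

frame = Image.open("cctv_frame.jpg")  # hypothetical still from a video feed
labels = ["person running", "person loitering",
          "red backpack", "two people shaking hands"]

for result in classifier(frame, candidate_labels=labels):
    print(f"{result['label']}: {result['score']:.2f}")
```

Swap in any label – a gait, a garment, a gesture – and the same few lines will score every frame for it. That arbitrariness is precisely the point.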

Thus updated, surveillance video now “thinks” about what it’s seeing. Case in point: an orchestral piece powered by AI video analysis – a bit of PR from Axis Communications making the point that its CCTV systems can detect whatever clients seek to find and, with that information, do previously unimaginable things.

This moment represents a threshold of sorts: defining, recognizing, and interpreting patterns without limit. Using such technology for musical composition is innocuous enough, but what about scanning a scene for skin color, hair style, facial features, gait, ethnicity, gender, age… or failing to applaud… or using a secret handshake?

Amid all the hype about AI’s possibilities, it’s important to step back and remember that there is nothing inherently moral about creativity – not in medicine, physics, management, or any human endeavor. Yet, here we are rushing headlong into a frenzied new era of possibility with no guardrails or ethical standards in sight.
Video: a post shared by IFLScience (@iflscience) on Instagram.


The Spy in Your Baby’s Bedroom

9/30/2025

 
“Made in China” products should carry the warning “Watching from China,” according to threat assessor Michael Lucci in an interview with Fox News. Nebraska Attorney General Mike Hilgers agrees and is suing the Chinese firm Lorex, accusing it of using technology the FCC banned in 2022. Lorex cameras are commonly sold by U.S. retailers ranging from Costco to Best Buy, Kohl’s, and Home Depot.

Nebraska’s complaint accuses Lorex of using tech from Dahua, one of the companies the FCC banned after accusing it of sharing American consumers’ data with the Chinese government. So far, Lorex and other companies have managed to get around the ban by employing a popular strategy known as “white labeling,” in which products are made generically by Company A but sold under Company B’s name.

India recently made a similar determination about such products, imposing stringent new security requirements on mostly Chinese-made CCTV cameras. As we wrote at the time, China’s rap sheet when it comes to using products to spy on other countries is a long one. Nowhere is this truer than in the United States, China’s largest trading partner and most persistent observer.

Lorex’s cameras are frequently sold for in-home surveillance of infants and small children. But what threat could a baby monitor pose? Who cares if every gurgle and burp is captured?

Consider: With video and audio monitoring, Beijing could listen in on the conversations of parents who work in the military or in intelligence agencies. Knowing when thousands of parents with such duties are being called in for a weekend or late night could, in an emergency, be priceless strategic intelligence. The device could also be within earshot of parents talking about work in a way that yields intelligence about commercial business plans or useful Washington gossip.

As always, China is playing a numbers game. The PRC hoovers up vast amounts of intelligence, then turns to AI and a vast army at the Ministry of State Security and its many consultants to winnow out the useful parts. That is why Attorney General Hilgers calls these baby monitors a “national security issue.” Even if all Beijing has access to is Mom asking Dad to go to the kitchen for a bottle of milk, the erosion of privacy is galling. No American couple signs up to let a foreign government into their baby’s bedroom.
​
If these concerns are accurate, then parents and families aren’t the only ones being watched. All of which also makes us queasy about the growing popularity of AI-powered children’s toys – or, perhaps, justifiably paranoid.


DOJ Actually Responds to a Freedom of Information Act Request!

9/29/2025

 
​That shouldn’t merit a headline, but it does. We’ve often reported on the Department of Justice’s responses to our Freedom of Information Act (FOIA) requests for internal policies concerning the use of cell-site simulators, commonly known as stingrays.

In the past, we’ve received non-response responses to our FOIA request, including one in which DOJ sent us 40 redacted pages from MISTER BLANK in the office of BLANK, with only this statement: “Hope that’s helpful.” We noted at the time that this could only be taken as a middle-finger salute to FOIA itself.

There now seems to be a more responsive spirit at DOJ. A new reply to our FOIA request arrived this month. True, it was still less than complete. But it was a response! And what it did reveal was encouraging. It showed a determination to abide by a 2015 DOJ memo requiring probable cause warrants before this technology can be used, except in emergency circumstances.

DOJ personnel were informed:

“The core of this new policy is to require search warrants for use of the devices, except in rare circumstances such as a threat to life and limb. It also requires transparency with the courts in the way that we apply for legal process, and it dictates what should be done with information about cell phones that are not related to the investigation.”

This leaves you wondering why some previous respondents at DOJ chose obfuscation and a rude brushoff instead of showcasing an internal determination to abide by the Fourth Amendment.

Stingrays are devices that mimic cell towers, pinging the phones of people within a geofenced area to reveal their location, movements, and potentially some contents within their phones. This technology can sweep up the personal information of hundreds of people in a given area. This actually happened when the Richmond, Virginia, police searched for a bank robber. Their sweep compromised the Fourth Amendment rights of diners in a Ruby Tuesday restaurant, guests at a Hampton Inn, residents of an apartment complex, and seniors in an assisted living facility.

This incident demonstrates a problem: while the Justice Department has a tight policy regarding the use of stingrays, different rules apply to a dozen other federal agencies and at least 75 state agencies around the country that also use this surveillance technology. The FBI instructs police to use stingrays to develop leads but to use other means to develop “primary evidence.” That sure sounds like a suggestion to engage in parallel construction – building a second, presentable trail of evidence to hide the first.

Shouldn’t defendants know if evidence used against them was taken from their phones?
​
Still, we are happy to take good news when we can get it. Here’s to encouraging the DOJ to continue abiding by its policy of applying a warrant requirement to stingrays.


Watching the Watchers: How Surveillance Reduces Humans to Data Points

9/22/2025

 
We’ve reported extensively on how high schools across the United States monitor student communications for the sake of “safety.” Now an anonymous teacher in the United Kingdom, after learning that a system called Senso secretly monitors whatever students or staff type, explains his concerns about privacy – and something much more.

From Unherd:

“At first, I thought my reaction was about privacy. Partly, it was. But what lingered – what I kept turning over – was something else. A kind of moral labour was being handed over to a machine: the quiet discipline of noticing, of staying with another person’s experience, of holding their reality in mind. And no one seemed to notice, or care. 

“What I was seeing – or rather, what was vanishing – was a form of attention. Not just focus or vigilance, but something older and more human. The effort to see someone in their full, contradictory reality – not as a data point, a red flag, or a procedural category …

“Tools like Senso make that trade easy – and invisible. They train us to scan for risk, not to remain with the person. Moral attention is the ground of judgement, the beginning of care. It is also a stance of active presence: an effort to refuse reducing the person in front of us to the signals a system is designed to detect. 

“As Simone Weil wrote: ‘Attention is the rarest and purest form of generosity.’ It is not just noticing – it is the effort to see someone else as they are, without turning away … 
​
“Sociologists have long recognised that moral life depends not only on individual decisions, but on shared structures. When those structures weaken – when proximity is replaced by process – something shifts. The moral weight of a situation is no longer felt; it is processed. As judgement is replaced by assessment, the capacity for care erodes.”


Clearview AI: Giving the US Government A Clear View of Its Citizens

9/18/2025

 
​Clearview AI is raking in the cash with its facial recognition software, signing lucrative contracts that make all Americans easier targets for government surveillance. The latest award is a $10 million deal with the Department of Homeland Security (DHS) to support Immigration and Customs Enforcement (ICE) operations.

Clearview was previously fined more than $30 million by Dutch regulators for privacy violations related to data collection. It also settled privacy violation charges in the U.S. for tens of millions more. But none of that has stopped the company from becoming a favorite of law enforcement and government intelligence agencies in the United States. In fact, we’ve written about the dangers of facial recognition more times than we can count. Its continued popularity only proves that the federal government cares more about purchasing facial recognition software than regulating its use. As a result, states have had to step in and fill the regulatory gap.

The new ICE contract means that Clearview will be used to help identify individuals accused of assaulting its officers – a commendable goal. But the accumulation of Americans’ faces into a single database is an immense temptation for abuse in many other domains, including surveillance for political reasons.

You may applaud or deplore ICE’s new aggressiveness. The larger issue is what the government, or Clearview itself, will do down the road with the mass collection of Americans’ facial data. Our faces, along with the rest of our biometric data – and our privacy in general – remain for sale. That is, of course, assuming the software actually recognizes us rather than mistaking us for someone else.

As spy tech goes, facial recognition can’t seem to win for losing.
​
It’s enough to make one yearn for the quaint times of Oscar Wilde, who once said, “I never forget a face, but in your case I will make an exception.”


The Wearable Revolution Will Be A Boon For Data Harvesters

9/15/2025

 

“There’s no federal law that is going to protect against these companies weaponizing this data.”

- Prof. Alicia Jessop
We recently reported that the popularity of wearables is eroding confidence in the idea that private, candid conversations will remain private. Now Charlie McGill reports in The American Prospect that HHS Secretary Robert F. Kennedy Jr. “wants a wearable on every American body.” The piece describes this announcement as “curious,” given that five years ago the Secretary himself blasted wearables and other smart devices as being about “surveillance, and harvesting data.”

That was then. A massive, government-funded pro-wearables ad campaign will soon promote Secretary Kennedy’s long-held view that eating right and exercising is superior to pharmaceutical remedies. He also wants HHS to popularize wearables: “You know the [sic] Ozempic is costing $1,300 a month, if you can achieve the same thing with an $80 wearable, it's a lot better for the American people.”

Persuading people to take better care of themselves is certainly a commendable goal for an HHS Secretary. But the security and privacy risks inherent to wearables are also a veritable bonanza for data brokers. On the Dark Web in 2021, healthcare data records were worth $250 each, compared to $5.40 for a payment card record. Just imagine what they’ll be worth in four years’ time if the HHS plan comes to fruition. Meanwhile, companies are lining up to cash in on the wearables boom that the department is promoting.

Companies that buy our data usually just want to target customers with ads and appeals. On a more sinister level, our health data derived from wearables – about as personal as information can be – will be sold by data brokers to about a dozen federal agencies, ranging from the FBI and the IRS to the Department of Homeland Security.

Health data from wearables will surely become part of a single federal database of Americans’ information. “Techno-utopianism,” observes Natalia Mehlman Petrzela, “assumes more sophisticated technology always yields a better future.” Without the requisite privacy guardrails for the data new technologies generate, quantifying ourselves on such an extreme scale may invite unwanted scrutiny.
​
Do we really want the FBI or the IRS to be able to warrantlessly access our deeply personal health issues? The wearables revolution, and the data it generates, is just another privacy violation that should prompt Congress to enforce the Fourth Amendment by forbidding the government from warrantlessly purchasing our most personal data.


How “Therapy” from Generative AI Powers the Great Surveillance State

9/12/2025

 

“The progress of science in furnishing the government with means of espionage is not likely to stop with wire-tapping.”

- Louis Brandeis, 1928
Protecting privacy in the Information Age was always going to be a tough proposition. Protecting privacy in the era of generative AI? Without the proper safeguards on your part, it is nigh unto impossible.

Every entry you make in ChatGPT could surface in public due to a subpoena or a warrant. So when ChatGPT asks you (cue the Viennese accent) “how do you feel about your mudder?” your response may well be read by an FBI agent or by a prosecutor in open court.

Yet this technology is being used by some in exactly that way – as a therapist.

Mostly hoping that no one would notice, ChatGPT parent OpenAI recently published a mea culpa of sorts, trying to “sorry/not-sorry” its way through the bad PR it’s received as a result of users harming themselves and others. Because “people using ChatGPT in the midst of acute crises” hasn’t gone well, OpenAI will now route to human reviewers any conversations in which ChatGPT users threaten harm to others (another privacy can of worms). OpenAI may ban such accounts, but it may also refer the matter to law enforcement.

Generative AI is not a therapist. It is not a counselor. It is not a parent, a minister, a rabbi, a teacher, or a school administrator. AI isn’t even anyone’s friend, much less a lover. It is a very bad substitute for all of these utterly human roles. We misuse it at our peril.

But generative AI is something else as well – a profitable branch of data science that corporations, educational institutions, governments, law enforcement agencies (and scammers!) are using to collect vastly more data about employees, customers, students, citizens, and future victims of criminal schemes.

To the extent that we use it at all, we should be exceedingly wary of what we share. It is not, nor has it ever been, private. Americans have never been more surveilled than we are at this moment. Before generative AI, the surveillance apparatus was expanding more or less linearly, like a twin-engine prop plane on a steadily rising course. Thanks to generative AI, that prop plane is now a supersonic jet.

“Safety” is one of the many traps the era of generative AI sets for privacy. When our fundamental right to be let alone (to quote Justice Brandeis) is traded away these days, it is most often done “in the name of” some noble-sounding cause – safety, national security, you name it.

Until the law catches up to reality, you would be well advised to be very careful with any private information you share with AI advisors like ChatGPT – especially if it is about your mother.


OPINION: Leverage the CLOUD Act To Protect Encryption

9/12/2025

 
PPSA Senior Policy Advisor & Former U.S. Congressman Bob Goodlatte. PHOTO CREDIT: Gage Skidmore
Our Senior Policy Advisor and former U.S. Congressman, Bob Goodlatte, explains in Tech Policy Press why it’s necessary to protect encryption, which ensures that emails, texts, and other communications are kept private between sender and receiver. The world currently faces numerous cybersecurity threats, and every piece of data from medical records to trade secrets is a potential target. Encryption not only protects industry, but it also protects journalists from malevolent governments and victims from their abusers.

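To make concrete what encryption buys us, here is a toy sketch in Python using the widely available cryptography package – an illustration of the principle only, and emphatically not how Apple or any real messaging system implements it. With the shared key, the message is readable; to everyone else on the wire, it is noise:

    # Toy illustration of the encryption guarantee, using Python's
    # "cryptography" package (pip install cryptography). Real end-to-end
    # systems are far more sophisticated; the principle is the same.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # secret shared by sender and receiver
    channel = Fernet(key)

    ciphertext = channel.encrypt(b"Draft story on public corruption")
    print(ciphertext)                    # what an eavesdropper sees: gibberish
    print(channel.decrypt(ciphertext))   # only a key holder recovers the text

A government-mandated wiretap capability amounts to putting a copy of that key in someone else’s hands – and a key that exists for investigators also exists for any criminal who manages to steal it.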
Yet the UK has chosen to pursue a disastrous policy of attacking encryption, and the privacy it enables, by requiring Apple to facilitate wiretapping of its users – even users who, like the company itself, are outside the UK. The US government shouldn't be complicit in this power grab by continuing to give the UK authority to enforce surveillance orders against US tech companies, as it currently does under the 2018 CLOUD Act.
READ HERE


Watching the Watchers: Pakistan’s Total Surveillance State – Can’t Happen Here, Right? Right?

9/9/2025

 
With help from vendors in the United States, Canada, Europe, and China, Pakistan has relied on a global supply chain to create comprehensive and sophisticated surveillance and censorship tools, according to a new report released by Amnesty International on Tuesday.

Agnès Callamard, Secretary General of Amnesty International, said:

“Pakistan’s Web Monitoring System and Lawful Intercept Management System operate like watchtowers, constantly snooping on the lives of ordinary citizens. In Pakistan, your texts, emails, calls and internet access are all under scrutiny. But people have no idea of this constant surveillance, and its incredible reach. This dystopian reality is extremely dangerous because it operates in the shadows, severely restricting freedom of expression and access to information.”

Amnesty International provides a real-life example: a journalist who is responding to constant surveillance with self-censorship. The journalist describes what happened after he published a story on public corruption:

“After the story, anyone I would speak to, even on WhatsApp, would come under scrutiny. [The authorities] would go to people and ask them, ‘Why did he call you?’ [The authorities] can go to these extreme lengths … Now I go months without speaking to my family [for fear they will be targeted].”

Keep in mind that virtually every bit of data that Pakistan extracts from its citizens relies on technologies sold by U.S. companies and used by our federal government. In our country, internal agency procedures, ethical commitments, and laws prevent such ready and rampant surveillance. But all the elements are in place. Through purchased data and FISA Section 702, the federal government can already view virtually anything it wants without a warrant.

Pakistan is a reminder of just how perilously close we are to an American surveillance state.


When Police Profit From Protection

9/8/2025

 

“Ethics is knowing the difference between what you have a right to do and what is right to do.”

- Justice Potter Stewart
Local police departments are spending billions of dollars on surveillance technology, from cameras to cell-site simulators to drones. Customers in blue range from the New York Police Department, which has invested $3 billion in surveillance in recent years, to small-town departments willing to fork out tens of thousands.

With so much money sloshing around, it is reasonable to wonder how careful local officials are in maintaining clear boundaries between customer and vendor. Events in Atlanta suggest that sometimes these boundaries are, at best, blurry.

Marshall Freeman is the Chief Administrative Officer of the Atlanta Police Department (APD) and a former leader at the non-profit Atlanta Police Foundation. Together, the Foundation and the APD devised Connect Atlanta, a camera network that makes Atlanta one of the most surveilled cities per capita in the United States.

The Atlanta Community Press Collective (ACPC) was combing through public records when they noticed Freeman’s name on a Conflict of Interest Disclosure Report. Citing “financial interest” in Axon, a law enforcement tech company, he recused himself from contract-related “matters and dealings” that could impact Axon financially. “I have interest in a company that is currently in talks with Axon around acquisition and investment,” he wrote, without specifics.

ACPC discerned that Freeman’s unnamed stake was in a company called Fusus, whose software fuels the Connect Atlanta surveillance system. Axon acquired it for $240 million barely a week after Freeman filed his disclosure. More red flags followed. Freeman was the only public official quoted in Axon’s press release announcing the acquisition: “I wholeheartedly encourage all agencies to embrace this cutting-edge technology and experience its transformative impact firsthand.”

Using open records requests, ACPC reports that it also found emails indicating that Freeman “boosted Fusus and Axon products to other agencies in Georgia and around the U.S.” on multiple occasions after his disclosure. When the reporting first surfaced, APD responded tersely: “The appropriate ethics filings were submitted.”

A few weeks later, though, the City of Atlanta Ethics Office begged to differ, announcing an investigation into Freeman’s post-recusal behavior. Fifteen months later, the body released an official report totaling 313 pages. The findings suggest that Freeman’s relationship with the camera-pushing Fusus dated back to his days at the Atlanta Police Foundation, a relationship he brought with him to APD and continued to nurture. According to The Guardian, he consulted for Fusus for at least a year after joining APD, “crisscrossing the country in person and by email while repping the company, including conversations with police departments in Florida, Hawaii, California, Arizona and Ohio.”

All told, the Ethics Office found 15 separate matters in which Freeman used his official position as an influencer for Axon and Fusus. For at least part of this time, he served on the board of two Fusus subsidiaries in Virginia and Florida – a fact he did not disclose to ethics investigators. 

Writing in The Intercept, Timothy Pratt and Andrew Free detail how Freeman’s impropriety (the “appearance” of which is the only thing he’s admitted to) is making all of us less free – taking the Great Atlanta Mass Surveillance Experiment and replicating it from sea to monitored sea: Seattle, Sacramento, New York City, Omaha, Birmingham, Springfield, Savannah, and counting.

Freeman may be an exception, or he may be the rule. Either way, it hardly matters, given the outsized influence even one public official can have over the proliferation of the police surveillance dragnet in the United States. And by the time robust surveillance systems reach smaller, heartland cities like Lawrence, Kansas, it may already be too late.

At the very least, police procurement processes would benefit from tighter rules, like those that govern Pentagon officials when they assess contracts.


Home Security and the Rise of Surveillance Art

9/4/2025

 
PHOTO: www.swiftcreatives.com
We often speak of surveillance technology. Now we have surveillance art, modernist sculptures that watch you back whenever you admire them.

We’re a bit more forgiving when the technology is used as a form of home security, since it is defensive in nature rather than invasive and mass in scale. But the melding of art and surveillance is a trend that ought to give anyone pause.

Alyn Griffiths of Dezeen reports on Sculptural Surveillance by the Danish studio Swift Creatives. Marketed to homeowners, the designs resemble bendable silly straws that can be customized into landscape art. Slender, looping, and brightly colored, they are meant to be noticed.

In the words of Swift Creatives co-founder Carsten Eriksen, “Our concept for this collection aims to challenge the conventional notions of home surveillance, transforming functional devices into objects of beauty that homeowners can proudly display.” They are, he says, “aimed to stand out.”

Because they are driven by residents and owners themselves, such approaches to home security are respectful – a far cry from the techniques high-tech burglars have been using. They also represent a far safer choice than Chinese-made junk products masquerading as security devices, which can be used to watch their owners instead of the other way around.

This has the feeling of the beginning of a trend. Perhaps the next time you get the creepy feeling that the eyes in a painting or a Rodin sculpture are following you, you might be right.


Note to Protestors: Turn Off Your Wi-Fi

9/4/2025

 
Philip K. Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now, from Australia, comes something simpler – and just as insidious. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This was after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral.

Despite the students’ misbehavior, Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the Privacy and Data Protection Act 2014. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports:

“Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience.

To its credit, the university cooperated with the investigation and promised reforms.

But let’s be fair: the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go.

Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof.

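To see why, consider a minimal sketch of our own (not anything from The Register’s reporting): with Python’s scapy library and a wireless card in monitor mode – the interface name wlan0mon below is hypothetical – a passive listener can log nearby devices simply because Wi-Fi radios announce themselves while hunting for networks:

    # Illustrative sketch only: log the hardware addresses that nearby
    # devices broadcast in Wi-Fi probe requests. Assumes Linux, root
    # privileges, and a wireless card already in monitor mode.
    from datetime import datetime

    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11ProbeReq

    def log_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            mac = pkt[Dot11].addr2  # the device announcing its presence
            print(f"{datetime.now():%H:%M:%S}  device seen: {mac}")

    sniff(iface="wlan0mon", prn=log_probe, store=False)

Modern phones randomize these addresses to blunt exactly this technique, but randomization is imperfect – and a network you actually join, like a university’s, ties a stable identifier to your account.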
What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home.

With the possible exception of pacemakers, of course.
