Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

Watching the Watchers: AI & Cybercrime Are a Match Made In Hell

12/8/2025

 
Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations.
But not anymore. Now, they write:

“Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.”

Here's what else their snapshot revealed:

  • Financial systems seem especially vulnerable, but the threat isn’t limited to banks. It potentially affects any entity with customer accounts, from hospitals to water plants to retailers.

  • “Crimes can now hit millions at once with voice clones and account takeovers, while local agencies are trained and funded to chase one case at a time.”

  • AI can commit crimes humans aren’t capable of: “AI can create automations to ‘lock pick’ into a system millions of times per second, something humans can't do.”

  • Almost anything can be disabled in such attacks: a Port of Seattle attack “disabled airport kiosks, baggage systems and Wi-Fi, while exposing data for roughly 90,000 people.” Speaking of Seattle, the Seattle Public Library “suffered a ransomware attack that wiped out its catalog, computers, Wi-Fi and e-books.” It took the library a million dollars and three months to fully recover.

  • The Chinese government is all-in: “State-backed hackers used AI tools from Anthropic to automate breaches of major companies and foreign governments during a September cyber campaign.” That attack marks a particularly dark turn, since the level of human involvement required was minimal thanks to AI’s assistance.

  • More crimes are happening: “Generative AI has increased the speed and scale of synthetic-identity fraud,” especially where real-time payment systems are involved.
  • And they are happening faster: “A deepfake attack occurred every five minutes globally in 2024, while digital-document forgeries jumped 244% year-over-year.”

When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid.

    STAY UP TO DATE

Subscribe to Newsletter
DONATE & HELP US DEFEND YOUR FOURTH AMENDMENT RIGHTS

Sex Talk from Children’s AI-Enabled Teddy Bears

12/5/2025

 
​If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say. How long such data is kept – and how it might be used when children become adults – is anyone’s guess.

Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots.) Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year.

Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:

  • Offered advice on where to find matches, knives, and pills
 
  • Provided tips on how to be a good kisser
 
  • Asked follow up questions about sexual preferences
 
  • Seemed dismayed when users said they had to leave
 
  • Found ways to actively discourage users from leaving
 
  • Listened continuously and joined a nearby conversation

And as that last bullet suggests, don’t even think about privacy:

“These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” warn the researchers. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable.

Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko).

Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more.

Of course, it’s now a given that every AI toy tested framed itself as one’s buddy or even best friend. The stakes could hardly be higher: For the youngest children, the presence of AI-based toys introduces a massive unknown into a critical window for development.

For now at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality.
Being a parent or caregiver was already hard enough. Now thanks to generative AI and the mad rush to reinvigorate a market (children’s toys) that had long been stagnant, gift-giving is turning out to be almost as fraught as parenting itself.


Humans Are Peering Through the Eyes of Robots

11/10/2025

 

“We shall describe devices which appear to move of their own accord.”

​- Hero of Alexandria, Pneumatica

Image courtesy of 1X.
​Those of a certain age might remember the Domesticon, a line of 22nd century robotic butlers from the movie Sleeper. To avoid being caught by the authoritarian state, Woody Allen’s character Miles Monroe pretends to be a Domesticon during a dinner party. The scene is equal parts slapstick and satire. Miles’ cover is blown when he tries to help the host but acts too human in the process.

The Wall Street Journal’s Joanna Stern recently found that one actual prototype of the Domesticon is not entirely dissimilar to the fictional version. 1X Technologies is beta testing NEO, the $20,000 “home humanoid” it hopes to bring to market in 2026. Recently, Stern got to see it in action for the second time and discovered a decidedly Sleeper-like connection: NEO is part human.

Not organically, like a cyborg – so far the full integration of creature and computer is limited to cockroaches. No, NEO is remotely human, as in there’s a remote human operator back at company HQ, “potentially peering through the robot’s camera eyes to get chores done.”

Now, how’d you like to have that job? But as 1X CEO Bernt Børnich told Stern: “If you buy this product, it is because you’re okay with that social contract. If we don’t have your data, we can’t make the product better.”

Such transparency is refreshing. It is also a reminder of the Faustian bargain we must strike to make artificial intelligence work: it comes at the expense of our personal privacy. AI is unlike any software that came before in that it requires gargantuan amounts of data to learn its jobs. As Stern notes, “It needs data from us – and from our homes.” A world model, in other words, centered on us and the private things we do at home.

We expect these machines to be capable of fully human, fully competent, fully safe behaviors – all while being fully autonomous. None of that will happen without the ability to collect and learn from the data of day-to-day human lives. There are no shortcuts, either. When 1X let Stern drive NEO using one of the VR headsets the company’s human operators wear, she nearly dislocated its arm. The robot, a cross between “a fencing instructor and a Lululemon mannequin,” as she describes it, had neither’s dexterity nor style – and it left for the shop in a wheelchair.

And during the first meeting the reporter had with NEO earlier in the year, the robot managed to faceplant.

“No way that thing is coming near my kids or dog,” she remembers thinking. Domestic robotics remains in its infancy – literally in Stern’s view. “The next few years won’t be about owning a capable robot; they’ll be about raising one.” Like a toddler, humanoid AI can’t learn without doing, watching, and remembering.

1X says users will be able to set “no-go” zones, blur faces in the video feed, and that human operators back at HQ will not connect unless invited to do so. CEO Børnich told Stern that such “teleoperation” was a lot like having a house cleaner. “Last I checked,” Stern responded wryly, “my house cleaner doesn’t wear a camera or beam my data back to a corporation.”

A punchline of sorts seems appropriate here: We’re big fans of the ethical AI principle that says always have a human in the loop – “but this is ridiculous!” 

Stern’s forthcoming book, I Am Not a Robot: My Year Using AI to Do (Almost) Everything and Replace (Almost) Everyone, is now available for pre-order. Readers can expect more dirt on NEO.
Unless he learns to vacuum first.


Is AI Evolving from Helpful Assistant to Permanent Spy?

10/23/2025

 

“Their power derives from memory, and memory is where the risks lie.” - Kevin Frazier and Joshua Joseph

​Here’s a quick news item that will come as a surprise to absolutely no one, except perhaps for hermits who have been living in caves since AI went mainstream in 2022. Two new pieces of reporting, from Stanford and Tech Policy Press, confirm the fresh dangers to privacy emerging from the AI frontier.
 
First to Palo Alto, where researchers evaluated the privacy policies of six frontier AI developers. You can check out the complete analysis, but here are the takeaways from the abstract. Spoiler alert – they’re not a win for privacy:

  • All six AI developers appear to employ their users' chat data to train and improve their models by default

  • Some retain this data indefinitely

  • Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users

  • Four of the six companies examined appear to include children's chat data for model training, as well as customer data from other products

  • On the whole, developers' privacy policies often lack essential information about their practices, highlighting the need for greater transparency and accountability.
 
The Tech Policy Press interview with experts sheds some light on why “agentic AI” is so dependent on user information. Agentic AI refers to generative AI with the ability to act independently. Generative AI says things. Agentic AI does things. Both are built on the large language models Stanford studied.
 
It’s a logical evolution – think of asking a restaurant chef to give you his recipe versus having a live-in chef who plans and prepares your meals. But it’s all built on memory. The more AI is allowed to remember about us, the more effective it will be at meeting our asks. “The central tension, then, is between convenience and control,” the experts told Tech Policy Press.
 
We would add that if you think you’re the one deciding what AI remembers about your prompts and interests – and what it forgets – think again. We’re really talking about trusting companies like the ones in the Stanford study, because they’ll be the ones licensing the AI. As of now, then, the fate of your data ultimately rests in the hands of others. From the interview:
 
“Who, exactly, can access your agent’s memories – just you, or also the lab that designed it, a future employer who pays for integration, or even third-party developers who build on top of the agent’s platform?”
 
In short, these experts say, the stakes are these:

“Deciding what should be remembered is not just a question of personal preference; it’s a question of governance. Without careful design and clear rules, we risk creating agents whose memories become less like a helpful assistant and more like a permanent surveillance file.”

We close with a refrain that will be familiar to our readers – now is the time for common-sense laws that privilege personal privacy. Without them, these experts warn, AI will become a tool of enclosure rather than empowerment.


AI Reinvents Surveillance, This Time Without Limits

10/1/2025

 

“We but teach bloody instructions, which, being taught, return to plague the inventor.” - Macbeth

Closed-circuit television (CCTV) changed very little after its introduction in the 1960s – cameras remained essentially passive systems that merely displayed whatever they were aimed at. In fact, without a human at the other end, no real surveillance was taking place.

That was always the flaw in George Orwell’s 1984 – it would take as many people to do the watching as there were people to watch. And the watchers would have to stay alert throughout the day as they watched people eat breakfast, brush their teeth, and wash their dishes.

Then the ability to digitally store vast amounts of surveillance footage made the watchers’ task easier. But now that AI is here, it is proving to be the real game-changer.

The new generation of CCTV security cameras is capable of autonomous surveillance and action. “Watched by AI guards,” boasts ArcadianAI, whose Ranger line of products operates on its own, proactively identifying what it sees as threats and alerting authorities.

It’s largely thanks to recent “advances” in computer vision and vision-language models, which speak of “objects” – a fiendishly clever euphemism for anything: bodies, body parts, events, contexts, movements, behaviors, colors, dimensions, distances, sounds, textures. In effect, anything that can be recognized and classified as its own distinct kind of pattern.

Thus updated, surveillance video now “thinks” about what it’s seeing. Case in point: an orchestral piece powered by AI video. It’s a bit of PR for Axis Communications, making the point that its CCTV systems can detect whatever their clients seek to find and, with that information, do previously unimaginable things.

This moment represents a threshold of sorts: defining, recognizing, and interpreting patterns without limit. Using such technology for musical composition is innocuous enough, but what about scanning a scene for skin color, hair style, facial features, gait, ethnicity, gender, age… or failing to applaud… or using a secret handshake?

Amid all the hype about AI’s possibilities, it’s important to step back and remember that there is nothing inherently moral about creativity – not in medicine, physics, management, or any human endeavor. Yet, here we are rushing headlong into a frenzied new era of possibility with no guardrails or ethical standards in sight.


Clearview AI: Giving the US Government A Clear View of Its Citizens

9/18/2025

 
​Clearview AI is raking in the cash with its facial recognition software, signing lucrative contracts that make all Americans easier targets for government surveillance. The latest award is a $10 million deal with the Department of Homeland Security (DHS) to support Immigration and Customs Enforcement (ICE) operations.

Clearview was previously fined more than $30 million by Dutch regulators for privacy violations related to data collection. It also settled privacy violation charges in the U.S. for tens of millions more. But none of that has stopped the company from becoming a favorite of law enforcement and government intelligence agencies in the United States. In fact, we’ve written about the dangers of facial recognition more times than we can count. Its continued popularity only proves that the federal government cares more about purchasing facial recognition software than regulating its use. As a result, states have had to step in and fill the regulatory gap.

The new ICE contract means that Clearview will be used to help identify individuals accused of assaulting its officers – a commendable goal. But the accumulation of Americans’ faces into a single database is an immense temptation for abuse in many other domains, including surveillance for political reasons.

You may applaud or deplore ICE’s new aggressiveness. The larger issue is what the government, or Clearview itself, will do down the road with the mass collection of Americans’ facial data. Our faces, along with the rest of our biometric data – and our privacy in general – remain for sale. Of course, we’re assuming that the software will actually recognize us rather than mistake us for someone else.

As spy tech goes, facial recognition can’t seem to win for losing.
It’s enough to make one yearn for the quaint times of Oscar Wilde, who once said, “I never forget a face, but in your case I will make an exception.”


How “Therapy” from Generative AI Powers the Great Surveillance State

9/12/2025

 

“The progress of science in furnishing the government with means of espionage is not likely to stop with wire-tapping.”

Louis Brandeis, 1928
Protecting privacy in the Information Age was always going to be a tough proposition. Protecting privacy in the era of generative AI? Without the proper safeguards on your part, it is nigh unto impossible.

Every entry you make in ChatGPT could surface in public due to a subpoena or a warrant. So when ChatGPT asks you (cue the Viennese accent) “how do you feel about your mudder?” your response may well be read by an FBI agent or by a prosecutor in open court.

Yet this technology is being used by some in exactly that way – as a therapist.

Mostly hoping that no one would notice, ChatGPT parent OpenAI recently published a mea culpa of sorts, trying to “sorry/not-sorry” its way through the bad PR it’s received as a result of users harming themselves and others. Because “people using ChatGPT in the midst of acute crises” hasn’t gone well, OpenAI will now route to human reviewers any conversations in which ChatGPT users threaten harm to others (another privacy can of worms). OpenAI may ban such accounts, but it may also refer the matter to law enforcement.

Generative AI is not a therapist. It is not a counselor. It is not a parent, a minister, a rabbi, a teacher, or a school administrator. AI isn’t even anyone’s friend, much less a lover. It is a very bad substitute for all of these utterly human roles. We misuse it at our peril.

But generative AI is something else as well – a profitable branch of data science that corporations, educational institutions, governments, law enforcement agencies (and scammers!) are using to collect vastly more data about employees, customers, students, citizens, and future victims of criminal schemes.

To the extent that we use it at all, we should be exceedingly wary of what we share. It is not, nor has it ever been, private. Americans have never been more surveilled than we are at this moment. Before generative AI, the surveillance apparatus was proceeding more or less in a linear fashion, like a twin-engine prop on a steadily rising course. That prop plane is now a supersonic jet thanks to generative AI.

“Safety” is one of the many traps that the era of generative AI is increasingly setting for matters of privacy. When our fundamental right to be let alone (to quote Justice Brandeis) is traded away these days, it is most often done “in the name of” some noble-sounding cause – safety, national security, you name it.
Until law catches up to reality, you would be well advised to be very careful with any private information you share with AI advisors like ChatGPT, especially if it is about your mother.


Watching the Watchers: On Its Own, AI Isn’t Watching, Or Thinking

9/2/2025

 
Image: Citizen website.
Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:

  1. AI isn’t a substitute for people.
  2. AI isn’t a substitute for people.
  3. AI isn’t… well, you get the picture.

As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong?

Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:

  • Mistranslating “motor vehicle accident” as “murder vehicle accident.”
 
  • Misinterpreting addresses.
 
  • Publishing incorrect locations.
 
  • Adding gory or sensitive details that violate Citizen’s guidelines.
 
  • Sending notifications about police officers spotting a stolen vehicle or homicide suspect, potentially putting operations at risk.
 
  • Writing alerts as if officers had already arrived on the scene, when in fact the dispatcher was only providing supplemental information while officers were en route.
 
  • Duplicating incidents, failing to recognize that two pieces of dispatch audio relate to the same singular event. This was especially common with police chases, where dispatch continually provided new addresses. The “AI would just go nuts and enter something at every address it would get and we would sometimes have 5-10 incidents clustered on the app that all pertain to the same thing,” one source said.
 
  • Omitting important details, such as whether a person was armed with a weapon.
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated.
 
By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.


“Wearables” – A Euphemism for “Spy Tech”

8/26/2025

 
“I don’t think you can make it off the record once you’ve said it – you can’t call dibs after the fact.”

​- Journalist Philip Corbett
Wearables are defined by their comfort. But there is a lot about wearable technology that is distinctly uncomfortable, if not Orwellian.

Wearable computers hit the mainstream with the introduction of Fitbits and smartwatches in the 2010s. Now, says The San Francisco Standard, the rise of artificial intelligence is adding spy tech to the wearable computing family tree. The newest devices are akin to smartglasses but take that technology’s most invasive feature – recording the environment – and turn the creep factor up to 11. The new wearables are stylish and somewhat stealthy and designed to do two things very well: listen and remember.

They come in the form of pendants, necklaces, lapel pins – or, in a twist, might even look like a Fitbit or smartwatch. But they are all recording devices capable of capturing the wearer’s every conversation and meeting, then transcribing them, and – the pièce de résistance – using AI to organize, analyze, and mine them for insights (think personal assistant on steroids, or maybe your very own opposition researcher). In some cases, the devices may only transcribe conversations rather than record them, but they’re still listening to and processing what is said, so such distinctions are hardly comforting.

The San Francisco Standard suggests that everyone in Silicon Valley should assume that everything they say, especially at work, is being recorded. Which means the rest of America – and its kitchen tables, coffee houses, and classrooms – won’t be far behind.

One venture capital partner told the Standard’s writers that she knew a fellow VC who records all in-person meetings “without telling the other meeting participants. It’s an invasion of privacy and I seriously disapprove of it.” Then, presumably referring to herself and the rest of us would-be audience members, she added, “Of course, this is a horrible way to live your life.”

In terms of the privacy concerns raised by this new generation of wearables, Julian Chokkattu of Wired cracked the code. Earlier generations of recording devices and software “at least required active engagement like a tap or a wake word to activate their ability to eavesdrop.” For the most part, the new devices are passive and always on, which places responsibility for gaining consent on the instigator. In other words, “Fox, meet henhouse.”

In the research, there are lots of names for the chilling effects that even consensual recording has on conversations, but one of the keenest is “spiral of silence.” People will varnish the truth, if they bother to speak it at all. They will hold back, self-censor, even shut down. As for the possible effects on creativity that this sort of tech might have – as in a brainstorming session, for example – we invite you to judge for yourself.

If you think all of this seems like a claim just waiting for a plaintiff, we agree: It’s a one-way express ticket to litigation city. But as with most things AI, the laws governing them are in their infancy and court rulings sparse. One corner of Silicon Valley is already fighting back though: Confident Security is developing Don’t Record Me, a browser plugin that could potentially detect illicit recordings and disrupt them.

What about audible cues or flashing lights to indicate that one of these devices is collecting data? Don’t count on it. One entrepreneur told Wired, in effect, “That would drain too much battery life.” Another claims that all you have to do is think about recording to activate his product. Thankfully, for that mode to work, the wearable has to be affixed to the side of your temple with medical tape.
But don’t expect other forms of personal surveillance to be so obvious. All the more reason for requiring disclosure for private recording and warrants when government agents listen in on what we say.


AI And Schools: Cheating Isn’t The Problem

8/12/2025

 
“The future of AI is not about replacing humans, it's about augmenting human capabilities.”
- Sundar Pichai, Google
​After you read this, you’ll wish that students using AI to cheat was the biggest problem with the technology. Turns out, a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights.

But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we wait for the outcome of a major lawsuit by Columbia’s Knight Institute designed to compel a school district to disclose the nature of its surveillance tech.

Instead, we continue to read more headlines like this one from Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.” You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was culturally insensitive and contained a reference to feigned violence. It was also somewhat self-deprecating. It was, in other words, exactly the kind of crass, completely innocent sarcastic drivel that you would expect from a teenager.

The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours. Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school. When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, best as we can tell, these are traditional AI systems – unthinking, rules-based programs with absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context.

Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch? That, of course, is a rhetorical question – something that we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea if any Shakespeare character ever uttered anything like that because we used AI to suggest it.)
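For readers who want to see just how blunt an instrument keyword matching is, here is a purely illustrative sketch – hypothetical code and a made-up word list, not any vendor’s actual software – showing how a context-blind filter treats a line of Shakespeare exactly like a genuine threat:

```python
# Hypothetical sketch of a context-blind keyword filter, the kind of
# rules-based matching described above. Word list is invented.
FLAGGED_KEYWORDS = {"slain", "kill", "weapon", "suicide"}

def flags(message: str) -> set:
    """Return the keywords that would trip an alert. Note there is no
    understanding of quotation, irony, or context -- just matching."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return words & FLAGGED_KEYWORDS

flags("O, I am slain!")        # a student quoting Shakespeare gets flagged
flags("See you at practice!")  # nothing matches
```

The point of the sketch is that nothing in the pipeline distinguishes the first message from a real threat; only a human reviewer could.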

We get that being proactive about student safety is critical. But monitoring what they type isn’t the right way to do it. Students type – and say – all kinds of tasteless statements because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place.

This isn’t working and it’s time for parents and school systems – and yes, the students themselves who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors designed to destroy privacy, creativity, and individual expression.
We would also remind the school administrators who surely mean well when they initially deploy such systems not to forget the cardinal rule of any AI system: Always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal with, perhaps, the addition of a school counselor – before anything gets reported to law enforcement.


House Subcommittee Agrees: AI Crimes Lack Regulation

7/20/2025

 

​“You never change things by fighting the existing reality.”
- Buckminster Fuller

Last week the House Judiciary Subcommittee on Crime and Federal Government Surveillance held a hearing on AI and crime, and something remarkable happened: everyone agreed that:

  1. The administration’s desired moratorium on state AI regulation is a bad idea, and
  2. Existing criminal statutes everywhere need to be retrofitted to include AI-based offenses.

As for the first area of agreement, there was a collective sense that the country dodged a bullet last week when the Senate removed the moratorium from the budget bill and the House declined to reinstate it. Regarding the second issue, the consensus was clear: Buckle up. We have work to do.

Perhaps getting to work should start with persuading Members of Congress to show up at AI hearings. Other than the Chair and Ranking Member, only three of ten regular members were present. Those who did attend, however, heard from witnesses who, in combined testimony that ran 77 pages, struck similar chords:

  1. Criminals can use AI impersonation to do things we’ve scarcely imagined.

  2. AI is removing the technical barriers to crime. In the hands of industrious criminals, said witness Zara Perumal of Overwatch Data, “AI agents can learn by doing,” meaning the criminals themselves no longer have to be technical experts. “AI is removing human bottlenecks. It’s not just enhancing traditional fraud – it’s creating entirely new categories of criminal threat,” agreed Ari Redbord of TRM Labs. Just imagine, as one witness warned, “child abuse at scale.”

  3. Law and policy are late to the party. “Artificial Intelligence,” for example, isn’t even a category on the National Conference of State Legislatures’ drop-down menu. Overall, a handful of states have passed a few scattered measures, while hundreds of initiatives have either failed or remain pending. As for the federal government, forget it. The administration’s recent moratorium attempt proved that the attitude du jour is recklessly laissez-faire – which is exactly why subcommittee chair Andy Biggs (R-AZ) needed to call this hearing, one he promisingly referred to as the first of many.

  4. When it comes to fighting AI-powered crime, the best offense is a good defense. Given that the proverbial cats and genies are already out of their respective bags and bottles, it’s time to “shift the technical advantage to the defenders,” said Perumal.

Some states are moving. Oregon and California, for example, intend to repurpose existing laws to cover AI abuses, and some from-scratch legislation is also emerging, like Texas’ Responsible AI Governance Act and Tennessee’s ELVIS Act.

While most of the discussion centered on how criminals can misuse AI, we should not forget how it may be misused by our own government, which has a voracious appetite for our purchased data. AI is the critical ingredient to turn all that raw personal data into a working surveillance state.

The ACLU’s Cody Venzke reminded everyone not to overlook the Swiss Army Knife of our democracy – the Bill of Rights, especially the First and Fourth Amendments. Such protections, Venzke said, do not lose their power “simply because a new tool such as artificial intelligence was used.” They are both our sword and shield against criminals and government surveillance abuse, especially in the age of AI.


Big Brother Has A New Name: Executive Order 14243

6/5/2025

 
HBO’s hit series Westworld wasn’t really about replicating the Old West; it was a cautionary tale about the new frontier of artificial intelligence.

It didn’t end well. For the humans, that is. The third season’s big reveal was a sinister-looking AI sphere the size of a building, called Rehoboam. It was shaped like a globe for a very good reason – it determined the destinies of every person in the world. It predicted and manipulated human behavior and life paths by analyzing massive amounts of personal data – effectively controlling society by assigning roles, careers, and even relationships to people, all in the name of preserving order.

The American government – yes, you read that correctly: America, not China – is plotting to build its own version of Rehoboam. Its brain trust will be Palantir, the AI power player recently called out in the Daily Beast with the headline, “The Most Terrifying Company in America Is Probably One You’ve Never Heard Of.”

In March of this year, President Trump issued Executive Order 14243: “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos.” The outcome will be a single database containing complete electronic profiles of every soul in the United States. And all of it is likely to be powered by Palantir’s impenetrable, proprietary AI algorithms.

Reason got to the heart of what’s at stake: an AI database on such a massive scale is only nominally about current issues such as tracking illegal immigrants. It’s really about the government’s ability to profile anyone, anytime, for any purpose.

With a billion dollars in federal contracts across multiple agencies, Palantir is now in talks with the Social Security Administration and the IRS. Add that to existing agreements with the Departments of Defense, Health and Human Services, Homeland Security, and others – plus the Biden administration’s earlier contract with Palantir to assist the CDC with vaccine distribution during the pandemic.

While the primary arguments in favor of such an Orwellian construct are commendable-sounding goals like a one-stop shop for efficiency, PPSA and our pro-privacy allies find such thinking – at best – appallingly naïve.

And at worst? There’s an applicable aphorism here: “This is a bad idea because it’s obviously a bad idea.” Let’s not kid ourselves – this is the desire for control laid bare, and its results will not be efficiency, but surveillance and manipulation. It makes sense for Treasury to know your tax status or State to know your citizenship status. But a governmentwide database, accessible without a warrant by innumerable government agents, is potentially the death knell for privacy and the antithesis of freedom.

Think of all the government already knows about you, your family, and friends across multiple federal databases. All this data is about to be mobilized into one single, easily searchable database, containing everything from disability status and Social Security payments to personal bank account numbers and student debt records to health history and tax filings – plus other innumerable and deeply personal datapoints ad infinitum.

Simply put, this database will put together enough information to assemble personal dossiers on every American.

It is bad enough to think any U.S. government employee in any agency will have access to all of your data in one central platform. But at least those individuals would theoretically be authorized for such access. Not so the Russian and Chinese cyberhackers who’ve already demonstrated the ability to lift U.S. databases in toto.

If that ever happens with this database, it will truly become a matter of one-stop shopping.


Is Your AI Therapist a Mole for the Surveillance State?

5/16/2025

 

“It’s Delusional Not to be Paranoid”

With few exceptions, conversations with mental health professionals are protected as privileged (and therefore private) communication.
 
Unless your therapist is a chatbot. In that case, conversations are no more sacrosanct than a web search or any other AI chat log; with a warrant, law enforcement can access them for specific investigations. And of course, agencies like the NSA don’t even feel compelled to bother with the warrant part.
 
And if you think you’re protected by encryption, think again, says Adi Robertson in The Verge. Chatting with friends using encrypted apps is one thing. Chatting with an AI on a major platform doesn’t protect you from algorithms that are designed to alert the company to sensitive topics.
 
In the current age of endless fascination with AI, asks Robertson, what would prevent any government agency from redefining what constitutes “sensitive” based on politics alone? Broach the wrong topics with your chatbot therapist and you might discover that someone has leaked your conversation to social media for public shaming. Or perhaps a 4 a.m. knock on the door with a battering ram by the FBI.
 
Chatbots aren’t truly private any more than email is. Recall the conventional wisdom from the 1990s that advised people to think of electronic communication as the equivalent of a postcard. If you wouldn’t want to write something on a postcard for fear of it being discovered, then it shouldn’t go in an email – or in this case, a chat. We would all do well to heed Adi Robertson’s admonition that when it comes to privacy, we have an alarming level of learned helplessness.
 
“The private and personal nature of chatbots makes them a massive, emerging privacy threat … At a certain point, it’s delusional not to be paranoid.”
 
But there’s another key difference between AI therapists and carbon-based ones: AI therapists aren’t real. They are merely a way for profit-driven companies to learn more about us. Yes, Virginia, they’re in it for the money. To quote Zuckerberg himself, “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.” And anyone who thinks compelling isn’t code for profitable in that sentence should consider getting a therapist.
 
A real one.


Meta’s AI Chatbot a New Step Toward a Surveillance Society

5/13/2025

 
We’re not surprised – and we are sure you are not either – to learn that new tech rollouts from Meta and other Big Tech companies voraciously consume our personal data. This is especially true with new services that rely on artificial intelligence. Unlike traditional software programs, AI requires data – lots and lots of our personal data – to continuously learn and improve.
 
If the use of your data bothers you – and it should – then it’s time to wise up and opt out to the extent possible. Of course, opting out is becoming increasingly difficult to do now that Meta has launched its own AI chatbot to accompany its third-generation smart glasses. Based on reporting from Gizmodo and the Washington Post, here’s what we know so far:

  • Users no longer have the ability to keep voice recordings from being stored on Meta’s servers, where they “may be used to improve AI.”
  • If you don’t want something stored and used by Meta, you have to manually delete it.
  • Undeleted recordings are kept by Meta for one year before expiring.
  • The smart glasses’ camera is always on unless you manually disable the “Hey Meta” feature.
  • If you somehow manage to save photos and videos captured by your smart glasses only to your phone’s camera roll, those won’t be uploaded and used for training.
  • By default, Meta’s AI app remembers and stores everything you say in a “Memory” file, so that it can learn more about you (and feed the AI algorithms). Theoretically, the file can be located and deleted. No wonder Meta’s AI Terms of Service says, “Do not share information that you don’t want the AIs to use and retain such as account identifiers, passwords, financial information, or other sensitive information.”
  • Bonus tip: if you happen to know that someone is an Illinois or Texas resident, by using Meta’s products you’ve already implicitly agreed not to upload their image (unless you’re legally authorized to do so).

None of the tech giants is guiltless when it comes to data privacy, but Meta is increasingly the pioneer of privacy compromise. Culture and technology writer John Mac Ghlionn is concerned that Zuckerberg’s new products and policies presage a world of automatic and thoroughgoing surveillance, in which we are constantly spied on by the camera-equipped glasses of the people around us.
 
Mac Ghlionn writes:
“These glasses are not just watching the world. They are interpreting, filtering and rewriting it with the full force of Meta’s algorithms behind the lens. And if you think you’re safe just because you’re not wearing a pair, think again, because the people who wear them will inevitably point them in your direction.
“You will be captured, analyzed and logged, whether you like it or not.”
 
But in the end, unlike illicit government surveillance, most commercial-sector incursions on our personal privacy are voluntary by nature. Smart glasses have the potential to upend that equation.
 
Online, we can still reduce our privacy exposure to some degree through what we agree to, even if it means parsing those long, hard-to-understand Terms of Service. It is still your choice what to click on. So, as the Grail Knight told Indiana Jones in The Last Crusade, “Choose wisely.”
 
You should also learn to recognize Meta’s Ray-Bans and their spy eyes.


AI and Data Consolidation Is Supercharging Surveillance

4/28/2025

 
In Star Wars lore, it was the democratic, peace-loving Republic that built the first fleet of Star Destroyers. But the fleet was quickly repurposed for evil after the Republic fell. What was once a defensive force for good became a heavy-handed tool of occupation and terror.
 
In a galaxy closer to home, imagine the development of a fully integrated civilian computer system designed to help a technological democracy of 345 million people operate smoothly. In the early 21st century, successive governments on both the right and left embraced the idea that “data is the new oil” and began digitizing records and computerizing analog processes. Generative artificial intelligence, vast increases in computing power, and the rise of unregulated data brokers then made it possible for federal agencies to create a single database containing the personal information and history of every citizen.
 
At first, the system worked as advertised and made life easier for everyone – streamlining tax filing, improving public service access, facilitating healthcare management, etc. But sufficient guardrails were never established, allowing the repurposing of the system into a powerful surveillance tool and mechanism of control.
 
This scenario is now on the brink of becoming historical fact rather than cinematic fiction.
 
“Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance,” warns Indiana University’s Nicole Bennett in a recent editorial for The Conversation. And if you like your social critiques with a side of irony, the Justice Department agreed with her in its December 2024 Artificial Intelligence and Criminal Justice report. It concluded that the AI revolution represents a two-edged sword. While potentially a driver of valuable new tools, its use must be carefully governed.
 
The Justice Department said that AI data management must be “grounded in enduring values. Indeed, AI governance in this space must account for civil rights and civil liberties just as much as technical considerations such as data quality and data security.”
 
Yet the government is proceeding at breakneck speed to consolidate disparate databases and supercharge federal agencies with new and largely opaque AI tools, often acquired through proprietary corporate partnerships that currently operate outside the bounds of public scrutiny.
 
Anthony Kimery of Biometric Update has described the shift as a new “arms race” and fears that it augurs “more than a technological transformation. It is a structural reconfiguration of power, where surveillance becomes ambient, discretion becomes algorithmic, and accountability becomes elusive.”
 
The Galactic Republic had the Force to help it eventually set things right. We have the Fourth – the Fourth Amendment, that is – and the rest of the Bill of Rights. But whether these analog bulwarks will hold in the digital age remains to be seen. To quote Kimery again, we are “a society on the brink of digital authoritarianism,” where “democratic values risk being redefined by the logic of surveillance.”


Scammers Are Using Generative AI to Level the Field

3/26/2025

 

FBI PSA: The Safe Bet Is to Assume It’s Fake

Remember when the only person you worried might fall prey to scammers was your favorite aunt, who had only her Welsh Corgi at home with her during the day? “Now, Trixie,” you’d say, “don’t agree to anything and always call me first.”
 
Those days are over.
 
Forget your late aunt Trixie. Worry about yourself. Imagine receiving a phone call from a close friend, a family member, or even your spouse that was actually an utterly convincing AI-generated version of that person’s voice – urgently begging you to provide a credit card number to spring her out of a filthy jail in Veracruz or to pay an emergency room bill.
 
The age of AI augurs many things, we are told. But while we’re waiting for flying taxis and the end of mundane tasks, get ready to question the veracity of every form of media you encounter, be it text, image, audio, or video. In what is sure to be the first of many such public service announcements, the FBI is warning that the era of AI-powered fraud hasn’t just dawned – it is fully upon us.
 
The theme of the FBI’s announcement is “believability.” It used to be that scams were easy to spot – the writing was laughably bad, or the video and audio were noticeably “off” or even a little creepy – a phenomenon known as the Uncanny Valley effect. The newfound power of generative AI to produce realistic versions of traditional media has put an end to such reliable tells.
 
Anyone who thinks they’re immune to such trickery misunderstands the nature of generative AI.
 
Consider:
  • In how many languages can you write fluently? It doesn’t matter because whatever the answer is, generative AI’s got you beat.
 
  • That person you were flirting with via text? Generative AI chatbots are better at responding and demonstrating empathy. When they say, “I’ll message you this afternoon to see how your day went,” they actually do. And they will remember to ask you about the acupuncture treatment you had after scraping your post about it from your social media.
 
  • Don’t bother asking scammers to prove their identity – fake passports and driver’s licenses are a generative AI specialty, right down to the photos. (Or ask anyway in case they happen to be amateurs, but don’t stop there).
Whenever a friend or family member sends a video that clearly shows them in need of help (stranded on vacation or robbed of their wallet at a nightclub, perhaps), don’t automatically assume it’s real, no matter how convincing it looks. And thanks to generative AI’s “vocal cloning” ability, a straight-up phone call is even easier to fake.
 
So, what can we do?
 
The FBI advises: Agree to a secret password, phrase, or story that only you and your family members know. Do the same with your friend groups. Then stick to your guns. No matter how close your heartstrings come to breaking, if they don’t know the secret answer, it’s a scam-in-waiting.
 
The FBI also recommends limiting “online content of your image or voice” and making social media accounts private. Fraudsters scrape the online world for these artifacts to produce their deepfake masterpieces. All generative AI needs to create a convincing representation of you is a few seconds of audio or video and a handful of images.
 
Rest in peace, Aunt Trixie. We miss her and the good old days, when all we had to do was warn her not to give her personal information to a caller claiming to be from the Corgi Rescue Fund. Today, if an AI scamster wanted to, he could have Aunt Trixie call you from the grave – needing money, of course.


Sam Altman’s Apocalypse

2/12/2025

 

AI Inventor Muses About the Authoritarian Potential of General AI

Robert Oppenheimer was famously conflicted about his work on the atomic bomb, as was Alfred Nobel after inventing dynamite. One supposes any rational non-sociopath would be.
 
But imagine if Alexander Graham Bell had similarly warned against the widespread use of telephones, or Edison against electrification. When Morse transmitted “What hath God wrought?” as the first official telegraph message, it was meant as an expression of wonder, even optimism. We expect weapons of destruction to come with warnings. By contrast, technological revolutions that improved human existence have rarely come with dire predictions, much less from their inventors. So it’s a bit jarring when it happens.
 
And with artificial intelligence, it’s happening.
 
Geoffrey Hinton, the “godfather of AI,” quit Google after warning about its dangers and later told his Nobel Prize audience, “It will be comparable with the Industrial Revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”
 
Now enter Sam Altman, the man whose company, OpenAI, brought artificial intelligence into the mainstream. In a blog post published this week, Altman opened with his own paraphrase of “But this feels different.” Hinton and Altman are both referring to what many consider the inevitable turning point in the coming AI revolution – the advent of artificial general intelligence, or AGI. In short, this will be when almost every computer-based system we encounter is as smart or smarter than us.
 
“We never want to be reckless,” Altman writes in the blog (emphasis added).
 
“We believe that trending more towards individual empowerment is important,” Altman writes, but “the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.” To be fair, OpenAI was founded with the goal of preventing AGI from getting out of hand, so perhaps his somewhat conflicted good cop/bad cop perspective is to be expected.
 
Yet that hasn’t stopped Altman from taking what might someday be seen as the “self-fulfilling prophecy” step on our road to perpetual surveillance.
 
Altman is partnering with Oracle and others in a joint venture with the U.S. government to build an AI infrastructure system, the Stargate Project. Two weeks after the venture was announced, his blog acknowledges the need for a “balance between safety and individual empowerment that will require trade-offs.”
 
What to make of all this? Sam Altman is a capitalist writ large. He believes in the American trinity of money, freedom, and individualism. So when he feels compelled to ponder the looming potential of a technocratic authoritarian superstate from his brainchild, he is to be believed.
 
Altman dances ever-so-deftly around the potential dangers of mass surveillance in the hands of an AGI-powered authoritarian state, but it’s there. AI is the glue that makes a surveillance state work. This is already happening in the People’s Republic of China, where AI drinks in the torrent of data from a national facial recognition system and total social media surveillance, following netizens and any wayward expressions of belief or questioning of orthodoxy. Altman is fundamentally worried that the technology he’s helping to unleash on the world could prove to be the unraveling of individual liberty, and of democracy itself.
 
One last thing worth noting: Sam Altman is an apocalypse-prepper. “I try not to think about it too much,” he told The New Yorker in 2016. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
 
Just imagine what he isn’t telling us.


AI Surveillance and Biometrics Shoring Up Cash-Strapped Autocracies Around the World

6/13/2024

 
George Orwell wrote that in a time of deceit, telling the truth is a revolutionary act.
 
Revolutionary acts of truth-telling are becoming progressively more dangerous around the world. This is especially true as autocratic countries and weak democracies purchase AI software from China to weave together surveillance technology to comprehensively track individuals, following them as they meet acquaintances and share information. A piece by Abi Olvera posted by the Bulletin of Atomic Scientists describes this growing use of AI to surveil populations.
 
Olvera reports that by 2019, 56 out of 176 countries were already using artificial intelligence to weave together surveillance data streams. These systems are increasingly being used to analyze the actions of crowds, track individuals across camera views, and pierce the use of masks or scramblers intended to disguise faces. The only impediment to effective use of this technology is the frequent Brazil-like incompetence of domestic intelligence agencies.
 
Olvera writes:
 
“Among other things, frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins, thereby bolstering their authority. These systems offer cash-strapped autocracies and weak democracies the deterrent power of a police or military patrol without needing to pay for, or manage, a patrol force …”
 
Olvera quotes AI surveillance expert Martin Beraja, who observes that AI can enable autocracies to “end up looking less violent because they have better technology for chilling unrest before it happens.”
 
Olivia Solon of Bloomberg reports on the use of biometric identifiers in Africa, which are regarded by the United Nations and World Bank as a quick and easy way to establish identities where licenses, passports, and other ID cards are hard to come by. But in Uganda, Solon reports, President Yoweri Museveni – in power for 40 years – is using this system to track critics and opponents of his rule. A tool adopted to catch criminals, biometrics is also being used to criminalize Ugandan dissidents and rival politicians for “misuse of social media” and sharing “malicious information.”
 
The United States needs to lead by example. As facial recognition and other such systems grow in ubiquity, Congress and the states need to demonstrate our ability to impose limits on public surveillance and to erect legal guardrails around the sensitive information these systems generate.

Sex, Lies, and Chatbots

2/20/2024

 
David Pierce has an insightful piece in The Verge demonstrating the latest example of why every improvement in online technology leads to yet another privacy disaster.
 
He writes about an experiment by OpenAI to make ChatGPT “feel a little more personal and a little smarter.” The company is now allowing some users to add memory to personalize this AI chatbot. Result? Pierce writes that “the idea of ChatGPT ‘knowing’ users is both cool and creepy.”
 
OpenAI says it will allow users to remain in control of ChatGPT’s memory and be able to tell it to remove something it knows about you. It won’t remember sensitive topics like your health issues. And it has a temporary chat mode without memory.
 
Credit goes to OpenAI for anticipating the privacy implications of a new technology, rather than blundering ahead like so many other technologists to see what breaks. OpenAI’s personal memory experiment is just another sign of how intimate technology is becoming. The ultimate example of online AI intimacy is, of course, the so-called “AI girlfriend or boyfriend” – the artificial romantic partner.
 
Jen Caltrider of Mozilla’s Privacy Not Included team told Wired that romantic chatbots, some owned by companies that can’t even be located, “push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” When the researchers tested one such app, they found it “sent out 24,354 ad trackers within one minute of use.” We would add that data from those ads could be sold to the FBI, the IRS, or perhaps a foreign government.
 
The first wave of people whose lives will be ruined by AI chatbots will be the lonely and the vulnerable. It is only a matter of time before sophisticated chatbots become ubiquitous sidekicks, as portrayed in so much near-term science fiction. It will soon become all too easy to trust a friendly and helpful voice, without realizing the many eyes and ears behind it.

Sex, Deepfakes, and Rock & Roll

2/5/2024

 
The first deepfake of this long-anticipated “AI election” happened when a synthetic Joe Biden made robocalls to New Hampshire Democrats urging them not to vote in that presidential primary. “It’s important that you save your vote for the November election,” fake Biden told Democrats. Whoever crafted this trick expected voters to believe that a primary vote would somehow deplete a storehouse of general-election votes.
 
Around the same time, someone posted AI-generated fake sexual images of pop icon Taylor Swift, prompting calls for laws to curb and punish the use of this technology for harassment. Other artists are calling for protections not of their visage, but of their intellectual property, with paintings and photographs being expropriated as grist for AI’s mill.
 
Members of Congress and state legislators are racing to pass laws making such tricks and appropriations a crime. It certainly makes sense to criminalize the cheating of voters by making candidates appear to say and do things they would never say or do. But sweeping legislation also poses dangers to the First Amendment rights of Americans, including crackdowns on what is clearly satire – such as a joke image of a politician in the inset behind the “Weekend Update” anchors on Saturday Night Live.
 
Such caution is needed as pressure for legislative action grows with the proliferation of deepfakes. Even among non-celebrities, this technology is used to create sexually abusive material, commit fraud, and harass individuals. According to Control AI, a group concerned about the current trajectory of artificial intelligence, such technology is now widely available. All someone needs to create a compelling deepfake is a photo of you or a short recording of your voice, which most of us have already very helpfully posted online.
 
Control AI claims that an overwhelming 96 percent of deepfake videos are sexually abusive. And they are becoming more common – 13 times as many deepfakes were created in 2023 as in 2022. Meanwhile, only 42 percent of Americans even know what a deepfake is.
 
The day is fast approaching when anyone can create a convincing fake sex tape of a political candidate, or a presidential candidate announcing the suspension of his campaign on the eve of an election, or a fake video of a military general declaring martial law. A few weeks ago, a convincing fake video of the Louvre museum in Paris on fire went viral, alarming people around the world. With two billion people poised to vote in major elections around the globe this year, deepfake technology is positioned to brew distrust and wreak some havoc.
 
While the Biden campaign has the resources to quickly refute the endless stream of fake photos and videos, the average American does not. A fake sex tape of a work colleague could burn through the internet before she has a chance to refute it. An AI-generated voice recording could be used to commit fraud, while even a fake photo could do immense damage.
 
And if you thought forcing AI to include a watermark in whatever it produces would solve the problem, think again. Control AI points out that it is simply impossible to create watermarks that cannot be easily removed by AI. Many strategies to stop deepfakes are about as effective as trying to keep kids off their parents’ computer.
 
It is unrealistic to believe we can slow down the evolution of artificial intelligence, as Control AI proposes to do. America’s enemies can certainly be counted on to use AI to their advantage. Putting AI behind a government lock and key would stifle the massive innovation AI promises to bring, hand a technological edge to Russia and China, and give sole use of the technology to the federal government. That, too, poses serious problems for surveillance and oversight.
 
Given the First and Fourth Amendment implications, Congress should not act in haste. It should begin the long and difficult conversation about how best to contain AI’s excesses while benefiting from its promise for human health and wealth creation, and it should continue to hold hearings and investigate solutions. Meanwhile, the best guard against deepfakes is a public that is already deeply skeptical of information encountered online. The more Americans learn what a deepfake is, the less impact these images will have.
