Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

Watching the Watchers: $12 Trillion in CyberCrime This Year Alone

6/17/2025

 

New FBI Warning Highlights Latest Ways Cyber Thieves Steal Your Identity and Money – and How You Can Stop Them

The FBI is issuing a new warning that cybercriminals are now focusing on impersonating employee self-service websites – such as payroll services, unemployment programs, and health savings accounts – with the goal of stealing your money through fraudulent wire payments or by redirecting your payments.
 
You might see what looks like your service’s website in an ad, an email, or a link, without noticing the slight difference in the URL that marks it as a digital clone. Such a scam site will ask you to enter your credentials. A self-described representative from a bank or some other service may then call you to “confirm” your one-time passcode.
 
Don’t fall for it. The FBI recommends that you take the following precautions:
 
  • Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious URL may closely resemble the legitimate one, with only minor, hard-to-spot typos, and a malicious ad may redirect you to a phishing website that appears identical to the legitimate site (see the sketch after this list).
 
  • Better still, type a business’s URL directly into your browser’s address bar to reach the official website rather than finding it through a search engine.
 
  • Or use “Bookmarks” or “Favorites” to navigate to login pages rather than clicking on internet search results or advertisements. Multi-factor authentication will not protect you if you land on a fraudulent login page.
 
  • Use an ad-blocking extension in your browser when performing internet searches. These extensions can be toggled on and off to permit advertisements on certain websites while blocking ads on others.
 
  • If your account requires multi-factor authentication, be aware that cybercriminals may use social engineering to get around it – including calling you while pretending to be a bank employee or technical support in order to obtain your one-time passcode.
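None of this checking needs to be done by eye alone. Purely as an illustration of the first tip – spotting a near-miss URL – here is a minimal Python sketch; the domain list, the two-character threshold, and the function names are our own assumptions, not anything the FBI publishes:

```python
# Minimal sketch, not FBI-provided code: flag look-alike domains before you click.
# KNOWN_GOOD, the 2-character threshold, and the function names are illustrative.
from urllib.parse import urlparse

KNOWN_GOOD = {"adp.com", "paychex.com", "healthequity.com"}  # hypothetical examples

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance; fine for short domain strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def looks_suspicious(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    domain = ".".join(host.split(".")[-2:])  # crude guess at the registrable domain
    if domain in KNOWN_GOOD:
        return False
    # A near miss (one or two characters off) is the classic typo-squat tell.
    return any(0 < edit_distance(domain, good) <= 2 for good in KNOWN_GOOD)

print(looks_suspicious("https://portal.paychex.com/login"))   # False: the real thing
print(looks_suspicious("https://portal.paychecx.com/login"))  # True: one letter off
```

Password managers do a more reliable version of this same check automatically: they refuse to autofill credentials on a domain that doesn’t match the one you saved, which is one more reason to lean on them.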
 
Skip Sanzeri, a strategic advisor at iValt, surveys in Forbes all the reasons that you are probably insufficiently paranoid about being cleaned out by a cyber thief.
 
“Thanks to ever-increasing online access and connectivity, AI, and quantum computing, it is increasingly difficult for legitimate businesses and sites to know the true identity of users accessing their systems. Think in terms of deepfakes, where video and audio can be created to mimic the real user. And since our daily activities, thoughts and preferences are tracked and stored, data is available everywhere on all of us. Any person or system from anywhere in the world can access nearly any information on government or corporate systems due to our pervasive use of the Internet, leading to predictions from groups like Forrester that cybercrime could cost up to $12 trillion this year alone.”
 
Sanzeri concludes that the current system, which relies on passwords, logins, two-factor authentication, and even tokens, is not enough. He suggests a deeper reliance on biometrics, machine ID (using mobile phones and other devices for authentication), geofencing of your location, and “time-bounding,” in which you limit access to, say, a payroll or brokerage account to a specific time window, every time.
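Of the measures Sanzeri describes, “time-bounding” is the easiest to picture in code. Here is a minimal sketch under our own assumptions – the window, the account, and the function name are purely illustrative:

```python
from datetime import datetime, time

# Hypothetical example: the account owner pre-registered a narrow window
# (8:00-8:15 a.m.) as the only time payroll access is permitted.
ALLOWED_WINDOW = (time(8, 0), time(8, 15))

def payroll_access_allowed(moment: datetime) -> bool:
    """Deny access to the account outside the owner's pre-registered window."""
    start, end = ALLOWED_WINDOW
    return start <= moment.time() <= end

# A login attempt at 2:37 a.m. - a favorite hour for fraud - is simply refused.
print(payroll_access_allowed(datetime(2025, 6, 17, 2, 37)))  # False
print(payroll_access_allowed(datetime(2025, 6, 17, 8, 5)))   # True
```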
 
All of these practices add one more data point for cybercriminals to have to know in order to be a convincing impersonator. Of course, biometrics and geofencing come at a cost to your privacy. And with advances in computing, it won’t be long before cybercriminals learn to use those as well.
 
The dispiriting reality is that there is no way to seal off all possibility of fraud. This is a never-ending footrace between consumers and cybercriminals. But if you take every precaution, the odds are you will not be the next mark.


Newly Released FBI Documents Reveal the National Extent of the Bureau’s Targeting of “Radical Traditionalist Catholics”

6/9/2025

 
In the intelligence business, “tradecraft” is the professional use of techniques, methods, and technologies to evaluate a purported threat. When an official finding is made that a threat assessment memo lacks tradecraft standards, that is a hard knock on the substance of the memo and the agent who wrote it.
 
Thanks to the efforts of Sen. Chuck Grassley (R-IA) and the forthcoming response from FBI Director Kash Patel, we now know that the infamous memo from the Richmond, Virginia, field office targeting “radical traditional Catholics” was riddled with conceptual errors and sloppy assumptions. In the FBI’s own judgment, it showed poor tradecraft. Worse, the assessment of traditional Catholics was rooted in smears from the Southern Poverty Law Center (SPLC), which Sen. Grassley correctly calls “thoroughly discredited and biased.”
 
Contrary to dismissive statements from former FBI Director Christopher Wray, this memo wasn’t the product of one field office. In its preparation, the Richmond field office consulted with Bureau offices in Louisville, Portland, and Milwaukee to paint Catholics who adhere to “conservative family values/roles” as being as dangerous as Islamist jihadists. The documents Sen. Grassley released also show that there were similar efforts in recent years in Los Angeles and Indianapolis.
 
This memo was not a mere thought experiment. It was a predicate for surveillance. Among the activities we know about that resulted from this memo were attempts to develop a priest and a choir director into FBI informants on parishioners.
 
Sen. Grassley also produced a memo from Tonya Ugoretz, FBI Assistant Director, Directorate of Intelligence, acknowledging that the SPLC’s list of hate groups – and its lack of explanation for the threshold it uses in slapping such a label on organizations and people – went unexamined by the Richmond memo. Yet that original memo treated the SPLC as a trustworthy source in asserting that there would be a “likely increase” in threats from “radical traditional Catholics” in combination with “racially and ethnically-motivated violent extremism.”
 
Another memo produced by Sen. Grassley reveals the conclusion of the FBI’s Directorate of Intelligence: “The SPLC has a history of having to issue apologies and retract groups and individuals they have identified as being extremist or hate groups.” Now Sen. Grassley and Sen. James Lankford (R-OK) are appealing to the FBI to direct field offices not to rely on the characterizations of the SPLC.
 
This whole episode should serve as a reminder that merely opening an investigation of a religious group for its First Amendment-protected speech is a punishment in itself, at best violating practitioners’ privacy; at worst, incurring huge legal costs and anxiety.
 
Sen. Grassley deserves the gratitude of the surveillance-reform community for bringing to light the extent to which the FBI allowed America’s culture wars to become a predicate for suspicion of law-abiding Americans.


PPSA Files Only Amicus in William Case v. State of Montana

6/3/2025

 

How Police “Emergency” Entries into Homes Will Lead to “Emergency” Entry into Phones

​The U.S. Supreme Court this week granted a petition for review in what will be the first case that the Court has agreed to hear addressing the scope of the Fourth Amendment’s warrant requirement since 2021. The case seeks clarity on whether the so-called “emergency-aid” exception to the Fourth Amendment requires police to have probable cause that an emergency is ongoing.

After police officers learned that William Case of Montana had threatened suicide, they entered his home without a warrant and seized evidence later used to convict him of a felony. Because the officers “were going in to assist him,” they felt unrestrained by the Fourth Amendment’s warrant requirement – even though they did not actually believe he was in any immediate danger, since on their account he was attempting to commit suicide at the hands of the police.

The Court has not reaffirmed the sanctity of the home since Caniglia v. Strom (2021), which found that allowing warrantless entry into the home for community caretaking – duties beyond law enforcement or keeping the peace – would be completely at odds with the privacy expectations and demands of the Framers.

PPSA, which filed the only amicus brief in William Case v. State of Montana, informed the Court that if such warrantless intrusion is upheld now, it will inevitably lead to warrantless inspection of the very personal information on Americans’ smartphones and other digital devices.

In our brief, PPSA warned the Court of the “diluting effect such a low bar for emergency aid searches” would cause in other contexts – especially regarding digital devices. PPSA told the Court:

“Such devices hold vast amounts of personal information that, historically, would only have been found in the home. Lowering the burden of proof required to justify the warrantless search of the place the Constitution protects most robustly would lead law enforcement and the courts to dilute protections for other, less historically safeguarded areas, such as electronic devices, which would be devastating to the privacy of Americans …

“If the government may enter the home without a warrant based only on a reasonable belief, far short of probable cause, that an emergency exists, the government may treat electronic sources of information the same way, posing an even greater threat to privacy and the ultimate integrity of the Fourth Amendment. The insidious branding almost writes itself: ‘Big Brother’ may be ‘watching you,’ but it’s for your own good!”

PPSA’s brief also made clear the long history of elevated protection of the home in both American law and English common law. By the 17th century it was established law that agents of the Crown were permitted to intrude on the home only in a narrow set of extreme circumstances, and only when supported by strong evidence of an emergency amounting to at least probable cause. PPSA wrote that if the new emergency standard is allowed:

“Seemingly benevolent searches would then become an engine for criminal prosecutions even though no warrant was ever obtained, and no probable cause ever existed. The emergency-aid exception would thus become a license for the government to discover criminal activity that – in all other circumstances – would only have been discoverable through a warrant supported by probable cause.”
​

In Caniglia, the Court unanimously restricted the community care exception to the Fourth Amendment. PPSA will report back when the Court holds oral arguments on the emergency-aid exception in Case v. Montana.


The Intelligence Community Plan to Make It Easier to Buy All Your Data

6/2/2025

 
As we’ve written many times before, the commercially available information (CAI) of American citizens should not be for sale. It’s one of the few things Republicans and Democrats agree on. Unfortunately, the Office of the Director of National Intelligence (ODNI) not only wants to ensure that our personal data remains for sale, but also wants to see to it that the government’s intelligence community gets “the best data at the best price.” Quite the reversal from the stark warning about the purchase of CAI presented to the ODNI just two years ago.

And so it was that on a quiet Tuesday in April the DNI fast-tracked a request for proposals for what it calls the Intelligence Community Data Consortium, or ICDC – a centralized clearinghouse where the legions of unruly CAI data vendors would be forced to get their act together, making it even easier for the government to violate our Fourth Amendment rights. We suppose calling this initiative the “Ministry of Truth” would have seemed too baroque and “One Database to Watch Them All” too obvious. So ICDC it is.

The RFP is looking for a vendor to help the DNI and the IC eliminate “problems” with our private information like:

  • Paying twice for the same data
 
  • Buying data that contains duplicate records
 
  • Overpaying for our data (because, how much is your privacy worth, really?)
 
  • Having to piece together fragmented data from walled-off siloes
 
  • Not having a single, searchable, user-friendly, cloud-based interface
 
  • Unused, “exorbitantly-priced” contracts with individual data vendors
 
  • Limited data sharing

Decentralized, fragmented, duplicative, siloed, overpriced, limited – literally everything you might hope your personal data would actually be. But no, the ODNI insists on being able to “access and interact with this commercial data in one place.” The intelligence community apparently complained and, lo, the ODNI heard its cries.

And the voice of American citizens in all of this – the rightful owners of all that data? The main RFP mentions civil liberties and Americans’ privacy exactly one time, and then only in passing.
Make no mistake: This change is a quantum leap in the wrong direction.

The Intercept quotes the Brennan Center’s Emile Ayoub and EPIC’s Calli Schroeder to make the point that the DNI doesn’t even have a specific use in mind for this data – it just wants it, and it doesn’t want to answer to privacy statutes or constitutional protections.

This dance has gone on for years, prolonged and encouraged by a lax regulatory environment and a commercial sector whose lack of scruples would make Jabba the Hutt repent and join the Jedi priesthood. Given that it now seems here to stay, we’ve decided it’s time to give the dance a name – the Constitutional Sidestep.

What else should you know about the ICDC and the Constitutional Sidestep? According to The Intercept and other sources, plenty:

  • 18 federal IC agencies and offices will be invited to the party
  • Other, unspecified non-IC agencies might also join
  • The highest tier of CAI data – “sensitive” – will be on offer
  • Analysts who use the consortium won’t be identified by agency – hence the identity of those who misuse our data will be protected
  • Once the consortium is operational, deeply-flawed AI tools like “sentiment analysis” will be set loose upon our data

Speaking of AI, get ready for one more dance – the Reidentification Rumba. Because when AI gets hold of these previously fragmented pieces of data, it will be easy to re-identify personal information that was previously anonymized: location histories, identities, associations, ideologies, habits, medical history – shall we go on? And here it must be noted that AI safeguards have been rescinded.
​
The remedy for all this is, of course, the Fourth Amendment Is Not for Sale Act, which would require a warrant before Americans’ personal information can be acquired and accessed. That bill passed the House in 2024. This news ought to provide fresh momentum for the measure to finally become law.


The Fifth Circuit Is Calling: They’d Like You to Have Some of Your Privacy Back

6/2/2025

 
​When you seal and mail a letter, the fact that you’re sending something via letter is not private – the addresses, the stamp, etc. Those are all visible and meant to be seen. You mailed a sealed letter. Everybody knows it. You can’t walk into FedEx or the Post Office screaming at strangers, “Don’t you dare look at me while I’m mailing this letter!”
 
Ah, but the contents of your sealed letter? Now that’s private. No one is entitled to know what’s inside except you – and anyone you give permission to, like the recipient.
 
And so it is with electronic storage services like Dropbox. The fact that you have a Dropbox account is not private, but what you store there is.
 
And that’s a big deal, because believe it or not, it hasn’t been entirely clear if electronic communications (including files stored in the cloud) are protected by the Fourth Amendment from unlawful search and seizure by the government.
 
But now we know. The Fifth Circuit Court of Appeals wrote in an opinion issued just last week: “The Fourth Amendment protects the content of stored electronic communications.”
 
If you didn’t intend for something to be public and made a reasonable effort to keep it private (such as password-protecting it in the cloud), you’re entitled to privacy. The government doesn’t have the right to access it without a warrant and probable cause.  
 
In the case at hand, Texas officials used a disgruntled ex-employee of a contractor to spy on the contractor by searching its Dropbox files. To quote the Fifth Circuit, “This was not a good-faith act.”
 
File (pardon the pun) all of this under “reasonable expectation of privacy.” Brought to you by the Fourth Amendment to the United States Constitution. Proudly serving Americans since 1791.  


Montana Leads the Way with Two New Data Privacy Bills

5/15/2025

 
​If you want to see what leadership looks like when it comes to protecting data privacy, head to Big Sky Country. Montana Gov. Greg Gianforte just signed a bill limiting the state’s use of personal electronic data. That makes Montana the first state to pass a version of the federal bill known as the Fourth Amendment Is Not for Sale Act.
 
The chief provisions of the new Montana law include:
​
  • Government entities are prohibited from purchasing personal data without a warrant or subpoena issued by a court.

  • Authorities may not access data from personal electronic devices unless the owner consents, a court agrees that there is probable cause, or the situation is a legitimate emergency.

  • Courts must hold as inadmissible improperly obtained personal data.

  • Service providers cannot be forced to disclose their customers’ personal data unless a court has granted permission.

There must be something in Montana’s clean, libertarian air these days, because the governor is expected to sign another pro-privacy bill soon. That bill bolsters the state’s existing Montana Consumer Data Privacy Act (MTCDPA) in several ways:

  • Consumers must be given obvious and straightforward ways to choose whether their personal data may be sold or used for targeted advertising.

  • The number of organizations subject to the MTCDPA is greatly increased.

  • The state’s Attorney General can now respond immediately to privacy act violators – no more 60-day waiting period.

  • The new law also makes transparency more transparent: privacy notices, for example, must be clearly hyperlinked from websites or within apps.

We hear Montana is beautiful this time of year. If you go, take a moment to appreciate that your data is safer there than anywhere else in the country. Let’s hope that what happened in Montana last week will inspire federal lawmakers to follow suit and pass the Fourth Amendment Is Not for Sale Act.


Ipsos Poll: Americans Agree that Government Collects Too Much of Our Data

5/15/2025

 
​Privacy is a big theme around here (it’s even in our name), so we are alert to reputable polling on the subject. The latest such poll from Ipsos confirms that privacy is a concern for Americans of all political stripes.
 
Among the survey’s key findings? When shown the statement, “The government collects too much data about me,” majorities in every group agreed:
 
  • Republicans: 61%
  • Democrats: 70%
  • Independents: 68%
 
That question covers collecting our data. When Ipsos asked if it is acceptable for government agencies to share that data with private companies, landslide majorities declared it was not okay:
 
  • Republicans: 85%
  • Democrats: 89%
  • Independents: 94%
 
For good measure, the pollsters flipped the question and asked whether it is acceptable for private companies to share our data with government agencies. Ipsos found nearly identical levels of scorn. This is not a theoretical concern – a dozen agencies, ranging from the IRS to the FBI, the Department of Homeland Security, and the Pentagon, routinely purchase and access Americans’ personal data from third-party data brokers. Americans are coming to appreciate this, which is why a YouGov poll showed that four out of five Americans favor a warrant requirement before federal agencies can inspect our private information.
                                                                                            
But the real gem of the Ipsos survey is found in a follow-up question. Having established that Americans care about their data, the pollsters asked respondents which types of data they consider the most personal. Out of 12 possible categories, the top four of greatest concern were:
 
  • Financial: 60%
  • Health: 37%
  • Credit card usage: 32%
  • Biometric identifiers: 32%
 
If there’s a surprise to be found anywhere in the results, it may be in this crosstab nugget: Americans ages 18-34 ranked biometrics and location-based data even higher than health information. It seems the digital generation is keenly aware of how they’re being tracked by their faces, irises, and fingerprints. It may be the case that older Americans tend to think of data and privacy in terms of records, whereas the youngest know it’s about much more than that.
 
The Ipsos findings are probably no surprise to the private companies and government agencies that covet our personal data. But we hope Members of Congress will pay attention to these results and respond to the passionate concerns of their constituents.


Meta’s AI Chatbot a New Step Toward a Surveillance Society

5/13/2025

 
​We’re not surprised – and we are sure you are not either – to learn that new tech rollouts from Meta and other Big Tech companies voraciously consume our personal data. This is especially true with new services that rely on artificial intelligence. Unlike traditional software programs, AI requires data – lots and lots of our personal data – to continuously learn and improve.
 
If the use of your data bothers you – and it should – then it’s time to wise up and opt out to the extent possible. Of course, opting out is becoming increasingly difficult to do now that Meta has launched its own AI chatbot to accompany its third-generation smart glasses. Based on reporting from Gizmodo and the Washington Post, here’s what we know so far:

  • Users no longer have the ability to keep voice recordings from being stored on Meta’s servers, where they “may be used to improve AI.”
  • If you don’t want something stored and used by Meta, you have to manually delete it.
  • Undeleted recordings are kept by Meta for one year before expiring.
  • The smartglasses camera is always on unless you manually disable the “Hey Meta” feature.
  • If you somehow manage to save photos and videos captured by your smartglasses only on your phone’s camera roll, then those won’t be uploaded and used for training.
  • By default, Meta’s AI app remembers and stores everything you say in a “Memory” file, so that it can learn more about you (and feed the AI algorithms). Theoretically, the file can be located and deleted. No wonder Meta’s AI Terms of Service says, “Do not share information that you don’t want the AIs to use and retain such as account identifiers, passwords, financial information, or other sensitive information.”
  • Bonus tip: if you happen to know that someone is an Illinois or Texas resident, by using Meta’s products you’ve already implicitly agreed not to upload their image (unless you’re legally authorized to do so).

None of the tech giants is guiltless when it comes to data privacy, but Meta is increasingly the pioneer of privacy compromise. Culture and technology writer John Mac Ghlionn is concerned that Zuckerberg’s new products and policies presage a world of automatic and thoroughgoing surveillance, in which we are constantly spied on by the people around us wearing camera-equipped smart glasses.
 
Mac Ghlionn writes:
​
“These glasses are not just watching the world. They are interpreting, filtering and rewriting it with the full force of Meta’s algorithms behind the lens. And if you think you’re safe just because you’re not wearing a pair, think again, because the people who wear them will inevitably point them in your direction.
“You will be captured, analyzed and logged, whether you like it or not.”
 
But in the end, unlike illicit government surveillance, most commercial-sector incursions on our personal privacy are voluntary by nature. Camera-equipped smart glasses have the potential to upend that equation.
 
Online, we can still to some degree reduce our privacy exposure through what we agree to, even if it means parsing those long, hard-to-understand Terms of Service. It is still your choice what to click on. So, as the Grail Knight told Indiana Jones in The Last Crusade, “Choose wisely.”
 
You should also learn to recognize Meta’s Ray-Bans and their spy eyes.


It’s Time to Enforce the TikTok Ban

5/12/2025

 
​Ireland’s Data Protection Commission, acting in its official capacity as an EU privacy guardian, recently fined TikTok $600 million (€530 million) for breaching its data privacy rules. This punishment was meted out after the conclusion of a four-year investigation, so it’s a decision that was not made lightly.
 
None of this surprises us. We have previously reported on the surveillance issues related to TikTok as well as other Chinese-owned concerns. It’s naïve to think that any software of Chinese provenance isn’t being used as a data collection scheme, and equally naïve to believe that said data isn’t being shared with the Chinese government.
 
A year ago, Congress passed a law mandating that ByteDance, the Chinese parent of TikTok, divest its ownership or be banned in the United States. ByteDance could be rich beyond all the dreams of avarice if it chose to sell. That it hasn’t done so simply reinforces everyone’s suspicions that the service’s real owner is primarily interested in something other than profits.
 
The bill that President Biden signed had passed the House 360-58 and the Senate 79-18. TikTok sued but the Supreme Court upheld the law in a unanimous ruling in January. It’s an astonishingly bipartisan issue in a deeply divided time. Yet in a mystifying turn of events, the current administration has twice extended the original divestment deadline (now set for June 19). “Perhaps I shouldn’t say this,” President Trump told NBC’s Kristen Welker, “but I have a little warm spot in my heart for TikTok.” Quite the switch for someone who rightly attempted to ban the service during his first term.
 
After the latest show of bad faith by TikTok revealed by Irish regulators, President Trump should now enforce this sale – after all, it is a law, not a suggestion – and protect our citizens. It is the president’s constitutional duty to carry out the laws the American people pass through the voice of their representatives. A show of seriousness about enforcing this law would probably allow TikTok to survive in some form. Moreover, it would protect tens of millions of Americans from Chinese government surveillance.


PPSA Brief to SCOTUS: Clarify What Third-Party Disclosure Means in the Modern Era

5/8/2025

 
​The U.S. First Circuit Court of Appeals in 2024 held that the IRS did not violate the Fourth Amendment when it scooped up the financial records of one James Harper through a broad dragnet of the Coinbase cryptocurrency exchange. The court based this finding on a sweeping interpretation of the “third-party doctrine,” which “stems from the notion that an individual has a reduced expectation of privacy in information knowingly shared with another.”
 
Given the terabytes of personal data that technology forces us to hand over to third-party companies, including our most intimate data – personal communications, online searches, health issues, and yes, financial holdings – does this mean that, as the First Circuit and other lower courts have ruled, there is essentially “no legitimate expectation of privacy” in that data?
 
Consider that the U.S. Supreme Court has repeatedly held that the Fourth Amendment protects “that degree of privacy against government that existed when [it] was adopted.” Times change and technology evolves. Any inquiry into reasonableness should require a periodic recontextualizing of what the Founders intended. That’s not anti-originalism; it’s just a common-sense application of original intent with new technology and capabilities.
 
The Supreme Court did just that in Carpenter v. United States, holding that the warrantless seizure of cell phone records constitutes a Fourth Amendment violation. In this case, at least, the high Court held that a reasonable expectation of privacy exists even when information is held by a third party.
 
As the Court wrote, “when an individual ‘seeks to preserve something as private,’ and his expectation of privacy is ‘one that society is prepared to recognize as reasonable,’ official intrusion into that sphere generally qualifies as a search and requires a warrant supported by probable cause.”
 
That goes not only for cell phone records but for any data that is supposed to be private.
 
In the brief PPSA filed with the Court, we explain:
 
“Despite Carpenter’s clear warning against allowing the third-party doctrine to degrade privacy via a ‘mechanical interpretation of the Fourth Amendment’ … lower courts have generally failed to heed that warning. Rather, they mechanically first ask if the information was disclosed to a third party and then treat this disclosure as a complete carveout from Fourth Amendment protections unless the circumstances closely or identically match Carpenter’s narrow facts.”
 
In this era of breakneck technological change and cloud computing, much of our personal information is disclosed to third parties – even information of the most sensitive kind. An interpretation that third-party disclosure automatically nullifies your right to privacy is a flawed approach in the 21st century. 
 
As we demonstrated in our brief, the Supreme Court must act to “prevent a contrary understanding of Carpenter from continuing to erode Americans’ privacy as third-party storage becomes ubiquitous and artificial intelligence becomes powerful enough to piece together intimate information from seemingly innocuous details about a target’s life.”  
 
Technology is evolving too robustly and too rapidly for the third-party doctrine to remain stuck in the era of paper bills. The First Circuit’s extreme interpretation of the third-party doctrine is a quaint vestige of a prior age, no longer equal to technologies that the Supreme Court ruled contain all “the privacies of life,” and it would make the Fourth Amendment a mere piece of ink on parchment rather than a true safeguard of Founding-era levels of privacy.


Biden Administration Kept “Disinformation” Dossiers on Americans

5/3/2025

 
The Biden administration’s State Department kept dossiers on Americans accused of acting as “vectors of disinformation.”

This was a side activity of the now-defunct State Department Global Engagement Center (GEC). It secretly funded a London-based NGO that pressured advertisers to adhere to a blacklist of conservative publications, including The American Spectator, Newsmax, the Federalist, the American Conservative, One America News, the Blaze, Daily Wire, RealClearPolitics, Reason, and The New York Post.

Now we know that the blacklisting went beyond publications to include prominent individuals. At least one of them, Secretary Rubio said, was a Trump official in the Cabinet room when the secretary made this announcement.
​
“The Department of State of the United States had set up an office to monitor the social media posts and commentary of American citizens, to identify them as ‘vectors of disinformation,’” Rubio said on Wednesday. “When we know that the best way to combat disinformation is freedom of speech and transparency.”


How AI Can Leak Your Data to the World

4/21/2025

 
​As we labor to protect our personal and business information from governments and private actors, it helps to think of our data as running through pipes the way water does. Just like water, data rushes from place to place, but is prone to leak along the way. Now, as the AI revolution churns on, workplaces are getting complete overhauls of their data’s plumbing. Some information leaks are thus almost inevitable. So, just as you would do under a sink with a wrench, you should be careful where you poke around.
 
A major new source of leakage is conversational AI tools, which are built on language in all its forms – words and sentences, but also financial information, transcripts, personal records, documents, reports, memos, manuals, books, articles, you name it. When an organization builds a conversational AI tool, many of these source items are proprietary, confidential, or sensitive in some way. Same with any new information you give the tool or ask it to analyze. It absorbs everything into its big, electronic, language-filled brain. (Technically, these are called “large language models,” or LLMs, but we still prefer “big, electronic, language-filled brains.”)
 
So be careful where you poke around.
 
As Help Net’s Mirko Zorz reminds us, companies should give employees clear guidelines about safely using generative AI tools. Here is our topline advice for using AI at work.

  • If it’s a public tool like ChatGPT, never enter confidential personal or business information that isn’t already publicly available (see the sketch after this list). Just ask Samsung about its misadventure.
 
  • Even internal, private company tools carry risk. Make sure you’re authorized to access the confidential information your system contains. And don’t add any additional sensitive information either (documents, computer code, legal contracts, etc.) unless you’re cleared to do so.
 
  • Like a person, LLMs can be “tricked” into disclosing all manner of sensitive information, so don’t give your credentials to anyone who does not have the same authorization as you. Those new employees from Sector 7G? Sure they seem nice and perfectly harmless, but they could be corporate spies (or more likely, just untrained). Don’t trust them until they’re vetted.
 
  • Any company that isn’t educating its employees on how to use AI tools acceptably is asking for trouble. If your company isn’t training you or at least providing basic guidelines, demand both. Vigilant employees are the last line of defense in any organization that doesn't bring its “A” game to AI. And “A” really is the operative letter here (we’re not just being cute). Authorization and Authentication are the bywords of any IT organization worth its salt in the AI space.
 
  • Just because an approved software program you’ve been using at work for years has suddenly added an AI feature does NOT mean it’s safe to use. Consult with your IT team before trying the new feature. And until they give you the all-clear, be sure to avoid inputting any sensitive or otherwise restricted information.
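To make the first rule concrete, here is a minimal sketch of the kind of guardrail a company might place in front of a public chatbot. The patterns and names are our own illustrative assumptions and are nowhere near exhaustive – real data-loss-prevention tools go much further:

```python
import re

# Illustrative patterns only - a real deployment needs far broader coverage
# (names, account numbers, internal code names, source code, and so on).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive strings before the text leaves the building."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Summarize this: jane.doe@example.com reports SSN 123-45-6789 was exposed."))
# Summarize this: [REDACTED EMAIL] reports SSN [REDACTED SSN] was exposed.
```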
​
Finally, leave everything work-related at work (wherever work is). When elsewhere, don’t use your work email to sign into any of the tens of thousands of publicly available AI applications. And never upload or provide any personal or private information that you don’t want absorbed into all those big, electronic, language-filled brains out there.
 
Because leaks are nearly inevitable.


How to Guard Against Smishing Scams from China

4/21/2025

 
​Like millions of other Americans, we are receiving text messages telling us that someone at a company’s HR department has noticed our very impressive resume and would like to discuss a job offer, call before the job’s filled! – or, we have an unpaid highway toll and must pay quickly to avoid a fine! – or, our package delivery has hit a snafu and we need to deal with it post haste, or it might get lost forever!
 
The FBI advises us to delete such texts and to never – as in NEVER!!! – click through them. These messages aim to persuade you to add to the hundreds of millions of dollars Americans lose every year to text scams run by sophisticated gangs in China. As Americans become wary of these smishing scams (a portmanteau of “SMS,” for short message service texts, and “phishing”), criminals are growing more sophisticated, often impersonating a credible brand or agency to make you think you must provide your credentials, account numbers, or Social Security number, or make a payment, in order to avoid a severe penalty.
 
And if you do click through, you may also expose your phone to a malware infection that will endanger you long after the text is forgotten.
 
One telltale sign of a smishing scam is a link that points to a suspicious look-alike domain – “com-track” and “com-toll” are common giveaways. But China’s smishing gangs are getting good at embedding links in actual “.com” addresses for real brands and agencies. So always assume it is a scam.
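For the curious, here is what “check the link before you tap” can look like in code. A minimal sketch, with the brand-to-domain mapping as our own illustrative assumption rather than any official list:

```python
from urllib.parse import urlparse

# Illustrative assumptions: the brand a text claims to be from, mapped to the
# domain(s) that brand actually uses. Expand with the brands you deal with.
OFFICIAL_DOMAINS = {"usps": {"usps.com"}}

SUSPICIOUS_FRAGMENTS = ("com-track", "com-toll")  # the giveaways cited above

def smells_like_smishing(claimed_brand: str, link: str) -> bool:
    host = (urlparse(link).hostname or "").lower()
    if any(fragment in host for fragment in SUSPICIOUS_FRAGMENTS):
        return True
    official = OFFICIAL_DOMAINS.get(claimed_brand.lower(), set())
    # The host must be, or end with, one of the brand's real domains.
    return not any(host == d or host.endswith("." + d) for d in official)

print(smells_like_smishing("USPS", "https://usps.com-track.vip/pay"))   # True
print(smells_like_smishing("USPS", "https://tools.usps.com/go/Track"))  # False
```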
 
What should you do if you receive such a suspicious text?
 
The FBI advises: “STOP! Take a moment to breathe deeply in and out.”
 
Again, NEVER!!! open the text.
 
Write down the issue on paper and delete the text.
 
And if you still have a tingle of doubt, go online and look up the main website and customer service number of the bank, delivery company, toll authority, or whatever, and ask them.
 
But you do have an impressive resume, by the way. Click here to learn more.


Frankenstein Needs a New Pair of Shoes

4/14/2025

 

And He May Steal Part of Your Identity to Buy Them

​There’s a relatively new twist in identity theft – synthetic identity theft, meaning the individual elements of the fake identity are either stolen from multiple victims or fabricated. Because none of the pieces are from the same victim, it’s like building a new person out of the spare parts of others – hence Frankenstein.
 
What’s the appeal? From the fraudster’s perspective, the Frankenstein approach offers numerous advantages over traditional identity theft (where a single, real person’s whole identity is stolen). The two biggest advantages are:

  1. It’s much harder to detect: Because the identity doesn’t belong to a single real person, there’s no one to notice suspicious activity right away. Alerting tools may miss it, too, especially if some of the data used is associated with someone whose credit file is inactive (a child, an elderly person, even a homeless person – all people who are highly unlikely to check the status of their credit).
    ​
  2. Frankensteins build their own credit: Fraudsters using these accounts may apply for small lines of credit and pay on time, slowly raising the profile’s creditworthiness before eventually going after the bigger prizes. Along the way, no one’s complaining to the bank or credit bureaus about being denied because when the theft is so fragmented, no one notices. In the end, it’s about patience and playing the long game. Scammers taking that approach are far more likely to succeed.
 
CNET’s Neal O’Farrell says the way to watch out for this kind of identity theft is to keep an eye on your Social Security Number. Phone numbers and addresses can change; SSNs are static. So if Frankenstein’s SSN happens to be yours, well, you get the picture. O’Farrell specifically recommends these steps:

  1. Freeze your credit reports. It’s both free and easy to do. And if you need to temporarily unfreeze your credit reports for a legitimate reason, that’s even easier.

  2. Monitor your SSN. The best way to do that is to create a “my Social Security” account, a service launched in 2012 and recently beefed up from a credentials standpoint. Set a reminder on your calendar and check in once a month. The account isn’t proactive, so it won’t alert you, but checking it regularly lets you see whether the activity associated with your SSN looks normal. And creating an account yourself prevents anyone else from doing it, so there’s that.
    ​
  3. Check your credit reports regularly. Weekly would be nice. Note that we’re not talking about checking your credit SCORE, we’re talking about checking your credit REPORTS. There are multiple ways to go about this. CNET explains the options.
 
Finally, consider the Federal Reserve Toolkit devoted to this subject, specifically, the Fed’s Synthetic Identity Fraud Mitigation Toolkit. Aimed primarily at businesses and the payment industry, it contains plenty of information of value to any audience, including individuals and families. We asked them to rename it the “Frankenstein Identity Fraud Mitigation Toolkit,” but you can imagine how that went.
 
File all of the above under the folder named, “Reality, New.” We agree that it’s something of a pain, but ultimately it’s just about forming a few new habits.


Traveling Abroad This Summer? Think Twice about Bringing Devices

4/14/2025

 

The ACLU’s Updated Travel Advice with Privacy in Mind

​Traveling with electronic devices this summer? Of course you are.
 
Would you like those devices searched by federal agents? Of course not.
 
Think the Fourth Amendment protects you from such searches? Think again, says the ACLU.
 
As we’ve written previously, U.S. ports of entry are twilight zones where the Fourth Amendment is more of a suggestion than a right. Having monitored this issue for years, the ACLU recently updated its advice for travelers. Here’s a summary version:

  • Limit devices and data: Don’t take them if you don’t need them, or consider travel-only devices that don’t contain sensitive information.
  • Encrypt & ship: Instead of packing them, ship them, but realize that Customs and Border Protection (CBP) reserves the right to search international packages – so encrypt them. A forensic search could get around some encryption, but it’s still good to play defense.
  • Encryption is the new reality: While we’re on the subject, adjust your thinking and embrace encryption as something you need to do, not just something for the techie-crypto crowd (see the sketch after this list). Here’s a guide recommended by the ACLU. There are many others, of course.
  • Is this thing on? Active devices are suspicious devices. If you insist on traveling with them (see above), turn them off when crossing. If they must stay on (not recommended), then airplane mode only.
  • Leave it in the cloud: Get out of the habit of storing sensitive or private information on your devices. Use end-to-end encrypted cloud-storage accounts instead. Then disable those apps for traveling and delete the caches. CBP claims it is against policy for border agents to search cloud-stored data on electronic devices. Fingers crossed.
  • About those photos: It’s cloud time again. Digital cameras don’t offer encrypted storage, so upload and delete if you’re worried. And, yes, digital cameras are considered electronic devices.
  • “Privileged” travelers? We mean actual privilege, as in the attorney-client kind. Do not volunteer this information, but if agents announce they’re going to search, first let them know the device contains privileged material. In such cases, CBP is supposed to follow special procedures that set a higher bar. Again, fingers crossed.
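As promised in the encryption item above, here is a minimal sketch of how little effort file-level encryption takes, assuming the third-party Python cryptography package and a hypothetical file name. It illustrates the idea only; for travel you want the full-disk and device encryption covered in the guides the ACLU recommends:

```python
# pip install cryptography   (a third-party package; this is an illustration only)
from cryptography.fernet import Fernet

# Generate the key once and keep it somewhere that is NOT traveling with you.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("tax-records.pdf", "rb") as f:            # hypothetical file name
    ciphertext = fernet.encrypt(f.read())

with open("tax-records.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the key retrieved from safekeeping:
with open("tax-records.pdf.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```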
​
CBP agents can’t force you to do anything (surrender a password, for example), but if you lock horns, be prepared to stay at the airport awhile – or to say goodbye to your electronic devices for weeks or even months.
 
This is all a pain. But the better strategy is to plan ahead.


Monitoring Students for Safety or Mining Students for Data?

4/13/2025

 

Columbia’s Knight Institute Goes to Court to Find Out

As we’ve noted, a veritable gaggle of organizations (including a service called Gaggle) are helping schools to monitor student activity on district-issued devices – tracking every website, every keystroke (and potentially snapping pictures of students’ private lives). These arrangements lack transparency. Parents are only told it’s necessary to ensure “public safety” or some version of “safeguarding student mental health.” In the meantime, school districts and taxpayers are shelling out millions to the ed tech industry.
 
And all that collected data? Surveillance companies like GoGuardian and Gaggle have signed a Student Privacy Pledge that they will not sell students’ personally identifying information. Despite pledges from school districts and tech companies, more clarity is needed about who can access students’ information and why.

This inscrutable practice of student monitoring is about to get a little more attention – in the form of a lawsuit aimed at unearthing the facts. Attorney Jennifer Jones of the Knight First Amendment Institute describes the student surveillance industry in detail, and makes the legal case against it, in Teen Vogue.

The Knight Institute’s lawsuit isn’t the first of its kind, but its timing amid the cultural chaos of artificial intelligence suggests it could be a tipping point for transparency. This lawsuit is also not about specific privacy violations alleged by individuals, so it won’t be settled for damages as some previous cases have been.
​
On paper, student surveillance systems sound great: The monitoring is designed to prevent self-harm, cyberbullying, and violence. And yet, as Jones points out, the standard list of related keywords and websites the software provides can be customized – making it capable of going far beyond universal safety concerns to serve the political or cultural agenda du jour. What happens if a student tries to access a banned book, for example? Should that be reported? This is all just one search word away from a dystopian episode of the Twilight Zone.
 
As has been reported from multiple quarters, there is scant and merely anecdotal evidence that any of these systems accomplish what they purport to – but plenty of evidence of misfires. Moreover, the law on which this burgeoning surveillance apparatus is based, the Children’s Internet Protection Act of 2000, requires no measures beyond basic obscenity filters. The ed tech industry has pulled a bait and switch on well-intentioned school administrators who are desperate to solve some of the most heartbreaking problems of our time.
 
It would be nice if AI-powered surveillance were the quick fix, but it’s not. It is a blunt-force instrument with chilling implications up and down the Bill of Rights. We don’t need to normalize an educational-corporate-juridical surveillance state. The answers to the problems of school violence and self-harm are not easy, and they won’t be solved by technology alone. They must be mitigated through connection and relationships: talking, not stalking.
 
So it’s time for a reckoning, and a conversation that brings all of us to the table. We hope the Knight First Amendment Institute’s lawsuit makes that candid and open conversation happen.
 
Here’s some suggested further reading:
  • Associated Press
  • New York Times
  • The74
  • Wired
  • Center for Democracy & Technology


FBI Warning: Those “Free” Online Tools Have a Cost

3/31/2025

 

Would You Like a Side of Malware with Your PDF Conversion?

The threat landscape is growing again. This time, reports Forbes contributor Zak Doffman, the FBI is warning Americans about online utility sites, especially those offering free online document converter tools or a tool for downloading audio or video files (MP3, MP4, etc.).

Basically, if it’s “online” and “free” and purports to do something you really need done – just say no. It’s not worth it.

Yes, these sites “work” in that you may well get your converted file or the downloading program you need. But you’re also likely to get your sensitive information stolen and malware or ransomware installed on your device.

And while there are legitimate utility sites out there, the scam sites will try to mimic their URLs. So, unless you know the site from prior experience and can trust it, or unless the site has been vetted by your tech team or the cyber gurus in your life, then don’t engage.
​
Better yet, don’t enter “free online document converter” into your search bar in the first place. It’s worth investing in official tools for such tasks, because not having your information stolen or your computer invaded is a bargain at any price.


Scammers Are Using Generative AI to Level the Field

3/26/2025

 

FBI PSA: The Safe Bet Is to Assume It’s Fake

​Remember when the only person you worried might fall prey to scammers was your favorite aunt, who had only her Welsh Corgi at home with her during the day? “Now, Trixie,” you’d say, “don’t agree to anything and always call me first.”
 
Those days are over.
 
Forget your late aunt Trixie. Worry about yourself. Imagine receiving a phone call from a close friend, a family member, even your spouse that is actually an utterly convincing AI-generated version of that person’s voice – urgently begging you to provide a credit card number to spring her out of a filthy jail in Veracruz or to pay an emergency room bill.
 
The age of AI augurs many things, we are told. But while we’re waiting for flying taxis and the end of mundane tasks, get ready to question the veracity of every form of media you encounter, be it text, image, audio, or video. In what is sure to be the first of many such public service announcements, the FBI is warning that the era of AI-powered fraud hasn’t just dawned – it is fully upon us.
 
The theme of the FBI’s announcement is “believability.” It used to be that scams were easy to spot – the writing was laughably bad, or the video and audio were noticeably “off” or even a little creepy – a phenomenon known as the Uncanny Valley effect. The newfound power of generative AI to produce realistic versions of traditional media has put an end to such reliable tells.
 
Anyone who thinks they’re immune to such trickery misunderstands the nature of generative AI.
 
Consider:
  • In how many languages can you write fluently? It doesn’t matter because whatever the answer is, generative AI’s got you beat.
 
  • That person you were flirting with via text? Generative AI chatbots are better at responding and demonstrating empathy. When they say, “I’ll message you this afternoon to see how your day went,” they actually do. And they will remember to ask you about the acupuncture treatment you had after scraping your post about it from your social media.
 
  • Don’t bother asking scammers to prove their identity – fake passports and driver’s licenses are a generative AI specialty, right down to the photos. (Or ask anyway in case they happen to be amateurs, but don’t stop there).
​
Whenever a friend or family member sends a video that clearly shows him or her in need of help (stranded on vacation or having their wallet stolen at a nightclub perhaps), don’t automatically assume it’s real no matter how convincing it looks. And thanks to generative AI’s “vocal cloning” ability, a straight-up phone call is even easier to fake.
 
So, what can we do?
 
The FBI advises: Agree to a secret password, phrase, or story that only you and your family members know. Do the same with your friend groups. Then stick to your guns. No matter how close your heartstrings come to breaking, if they don’t know the secret answer, it’s a scam-in-waiting.
 
The FBI also recommends limiting “online content of your image or voice” and making social media accounts private. Fraudsters scrape the online world for these artifacts to produce their deepfake masterpieces. All generative AI needs to create a convincing representation of you is a few seconds of audio or video and a handful of images.
 
Rest in peace, Aunt Trixie. We miss her and the good old days when all we had to do was warn her not to give her personal information to a caller who said he was from the Corgi Rescue Fund. Today, if an AI scamster wanted to, he could now have Aunt Trixie call you from the grave, needing money, of course.


How to Delete Your 23andMe Genetic Profile

3/25/2025

 
​Your genetic blueprint is your most unique identifier, packed with deeply personal information.

How might it be used? Your DNA could be subpoenaed by law enforcement to connect you to an investigation. It could be used to predict your predisposition to a disease, prompting an insurance company to raise your premiums. It can also compromise the privacy of your children and other relatives up, down, and across your family tree.

Seven million 23andMe customers learned this the hard way in 2023 when hackers gained access to their family trees, birth years, and geographic locations.

If you’ve ever sent in a saliva test for a 23andMe genetic profile, you should seriously consider having it and your data destroyed NOW. This is because 23andMe is going into voluntary Chapter 11 restructuring and could be sold – and with it, all your supremely private information the company holds.
​
Here are instructions from California Attorney General Rob Bonta on how to destroy your sample and delete your genetic data with 23andMe. Other DNA home-testing sites also offer delete functions in their account settings.


School Laptops Spied on Teens in Their Bedrooms

3/25/2025

 
One day in 2010, Blake Robbins, 15, a high school sophomore, was relaxing in his bedroom popping Mike and Ike candy, “fruity, chewy candy … bursting with five fun flavors.” He was soon called to the principal’s office at Harriton High School, in a community west of Philadelphia, and accused of selling drugs.
 
Blake, along with 2,000 other students, had received a laptop computer from the school district that he was allowed to take home. What parents were not told was that the laptops’ cameras would activate and transmit an image every 15 minutes – capturing teenagers in their bedrooms, along with any family members who happened to cross the path of the ever-watchful eye.
 
Keron Williams, an African-American honors student, says images were used to profile him and to support a false accusation that he had been stealing. In all, it is alleged that 56,000 webcam images of students and their families were captured through the school-issued laptops.
 
Keep an eye out for more on this story in Spy High, a documentary produced by Mark Wahlberg that will stream on Amazon April 8. (Check out the Spy High trailer on People.com.)
 
You might dismiss this as an old story – and one that was well reported in the local media. It was also adjudicated in the courts. The Robbins family received a $610,000 settlement from the school district. But this story remains startlingly relevant, in two ways.
 
First, the incidents behind Spy High were not outliers but omens of things to come. As we reported last year, Gaggle safety software now reviews student messages and flags issues of concern. In one Kansas high school, students in an art class were called in to defend the contents of their art portfolios after software flagged digital files of their work for “nudity.”
 
A report compiled by the Center for Democracy & Technology found that over 88 percent of schools use some form of student device monitoring, 33 percent use facial recognition, and 38 percent share student data with law enforcement.
 
Second, this story is relevant because it warns us that there are wide swaths of American officialdom that are either dismissive of or blithely unaware of the Fourth Amendment and its warrant requirement. To be fair, there are plenty of dysfunctions and dangers in the modern American high school that administrators need to anticipate and counter. But placing spyware over all student messages and content is overkill. The price we pay is that the next generation of Americans is learning to accept life in a total surveillance state.


Vision Language Models Are Steroids for the Surveillance State

3/23/2025

 
​Imagine a law enforcement agent – an FBI agent, or a detective in a large police department – who wants to track people passing out leaflets.
 
Current technology might use facial recognition to search for specific people who are known activists, prone to such activity. Or the agent could try not to fall asleep while watching hours of surveillance video to pick out leaflet-passers. Or, with enough time and money, the agent could task an AI system to analyze endless hours of crowds and human behavior and to eventually train it to recognize the act of leaflet passing, probably with mixed results.
 
A new technology, the Vision Language Model (VLM), is a game-changer for AI surveillance: what a modern fighter jet is to a biplane. In our thought experiment, all the agent would have to do is instruct a VLM system, “target people passing out leaflets.” Then she could go get a cup of coffee while it compiled the results.
 
Jay Stanley, ACLU Senior Policy Analyst, in a must-read piece, says that a VLM – even if it had never been trained to spot a zebra – could leverage its “world knowledge” (that a zebra is like a horse with stripes). As this technology becomes cheaper and commercialized, Stanley writes, you could simply tell it to look out for kids stepping on your lawn, or to “text me if the dog jumps on the couch.”
 
“VLMs are able to recognize an enormous variety of objects, events, and contexts without being specifically trained on each of them,” Stanley writes. “VLMs also appear to be much better at contextual and holistic understandings of scenes.”
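 
To make the leap concrete, here is a minimal sketch, in Python, of what natural-language tasking of a vision model can look like. This is our own illustration, not anything described in Stanley’s piece: it uses an off-the-shelf visual-question-answering model from the open-source Hugging Face transformers library as a modest stand-in for the far more capable VLMs he describes, and the model choice, frame files, and confidence threshold are assumptions made purely for the example.
 
from transformers import pipeline

# A minimal sketch, not any agency's actual system. The model, frame files,
# and threshold below are illustrative assumptions for this post.
ask = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

frames = ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"]  # hypothetical video stills
prompt = "Is a person handing out leaflets?"  # the operator's plain-English instruction

for frame in frames:
    answers = ask(image=frame, question=prompt)  # candidate answers with confidence scores
    top = answers[0]                             # highest-scoring answer
    if top["answer"].lower() == "yes" and top["score"] > 0.8:  # arbitrary confidence cutoff
        print(f"{frame}: flagged for human review (score {top['score']:.2f})")
 
The particular model hardly matters. The point is the workflow: an operator types a sentence, and the software decides which frames deserve a human’s attention.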
 
They are not perfect. Like facial recognition technology, VLMs can produce false results. Does anyone doubt, however, that this new technology will only become more accurate and precise with time?
 
The technical flaw in Orwell’s 1984 is that every surveillance camera trained on a target required another human at the other end to watch that person eat, floss, and sleep, all while trying not to nod off himself. VLMs remove that bottleneck: they make the ever-watching cameras watch for the right things on their own.
 
In 1984, George Orwell’s Winston Smith ruminated:
 
“It was terribly dangerous to let your thoughts wander when you were in a public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide." 
 
Thanks to AI – and now to VLMs – the day is coming when a government official can instruct a system, “show me anyone who is doing anything suspicious.”
 
Coming soon, to a surveillance state near you …


Federal Court in Mississippi Rejects Search Warrants for Cell-Tower Data

3/21/2025

 

Can the Government Access “An Entire Haystack Because It May Contain a Needle?”

​The drafters of the U.S. Constitution could not have imagined Google, Apple, and cell-site technologies that can vacuum up the recorded movements of thousands of people. Still smarting from the British colonial practice of ransacking rows of homes and warehouses with “general warrants,” the founders wrote the Fourth Amendment to require that warrants must “particularly” describe “the place to be searched, and the persons or things to be seized.”
 
Courts are still grappling with this issue of “particularity” in geofence warrants, which comb through mass location data to winnow out suspects. Now a federal court in Mississippi has come down decisively against non-particular searches of location- and time-based cell-tower data.
 
To reach this conclusion, Judge Andrew S. Harris had to grapple with a Grand Canyon of circuit splits on this question. His opinion is a concise and clear dissection of divergent precedents from two higher circuit courts.
 
Harris begins with the Fourth Circuit Court of Appeals in Virginia, which held in United States v. Chatrie (2024) that because people know tech companies collect and store location information, a defendant has no reasonable expectation of privacy in that data. The Fourth Circuit reached its decision, in part, because Google users must “opt in to Location History” to enable Google to track their locations.
 
The Fifth Circuit Court of Appeals in New Orleans took the Fourth Circuit’s reasoning and chopped it up for jambalaya. The Fifth drew heavily on the U.S. Supreme Court’s 2018 Carpenter v. United States opinion, which held that the government’s warrantless acquisition of seven days’ worth of location records from a man’s wireless carrier constituted an unconstitutional search.
 
This data, the Supreme Court reasoned, deserves protection because it provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” Despite a long string of cases holding that people have no legitimate expectation of privacy when they voluntarily turn over personal information to third parties, the U.S. Supreme Court held that a warrant was needed in this case.
 
The Fifth followed up on Carpenter’s logic with a fine distinction in United States v. Smith (2024): “As anyone with a smartphone can attest, electronic opt-in processes are hardly informed and, in many instances, may not even be voluntary.” That court concluded that the government’s acquisition of Google data must conform to the Fourth Amendment.
 
The Fifth thus declared that geofence warrants are modern-day versions of general warrants and are therefore inherently unconstitutional. That finding surely rattled windows in every FBI, DEA, and local law enforcement agency in the United States.
 
Judge Harris worked from these precedents when he was asked to review four search-warrant applications for location information from a cell-tower data dump. The purpose of the request was not trivial. An FBI Special Agent wanted to see if he could track members of a violent street gang implicated in a number of violent crimes, including homicide. The government wanted the court to order four cell-service providers to produce 14 hours of data for every targeted device.
 
Judge Harris wrote that the government “is essentially asking the Court to allow it access to an entire haystack because it may contain a needle. But the Government lacks probable cause both as to the needle’s identifying characteristics and as to the many other flakes of hay in the stack … the haystack here could involve the location data of thousands of cell phone users in various urban and suburban areas.”
 
So Judge Harris denied the warrant applications.
 
Another court in another circuit may well have come to the opposite conclusion. Such a deep split on a core constitutional issue will continue to produce contradictory rulings until it is resolved by the U.S. Supreme Court. In the meantime, Judge Harris – a graduate of the University of Mississippi Law School – brings to mind the words of another Mississippian, William Faulkner: “We must be free not because we claim freedom, but because we practice it.”


Rep. Emmer Takes the Lead Against a Surveillance Currency

3/12/2025

 
Photo: House Creative Services / MGN
​Americans value privacy in the marketplace when we vote with our dollars no less than when we go behind the curtains of a polling booth.
 
Now imagine if every dollar in our possession came with an RFID chip, like those used for highway toll tags or employee identification, telling the government who had that dollar in their hands, how that consumer spent it, and who acquired it next.
 
That would be the practical consequence of a policy proposal being promoted now in Washington, D.C., to enact a Central Bank Digital Currency (CBDC). Some have recently asked Congress to attach such a currency to the Bank Secrecy Act, to enable surveillance of every transaction in America.
 
Such a measure would end all financial privacy, whether a donation to a cause, or money to a friend. “If not designed to be open, permissionless, and private – resembling cash – a government-issued CBDC is nothing more than an Orwellian surveillance tool that would be used to erode the American way of life,” said Rep. Tom Emmer (R-MN).
 
This would happen because a CBDC is a digital currency issued on a digital ledger under government control. It would give the government the ability to surveil Americans’ transactions and, in the words of Rep. Emmer, “choke out politically unpopular activity.”
 
The good news is that President Trump is alert to the dangers posed by a CBDC. One of his first acts in his second term was to issue an executive order forbidding federal agencies from exploring a CBDC.
 
But the hunger for close surveillance of Americans’ daily business by the bureaucracy in Washington, D.C., is near constant. There is no telling what future administrations might do. Rep. Emmer reintroduced his Anti-Surveillance State Act to prevent the Fed from issuing a CBDC, either directly or indirectly through an intermediary. Rep. Emmer’s bill also would prevent the Federal Reserve Board from using any form of CBDC as a tool to implement monetary policy. The bill ensures that the Treasury Department cannot direct the Federal Reserve Bank to design, build, develop, or issue a CBDC.
 
Prospects for this bill are good. Rep. Emmer’s bill passed the House in the previous Congress. It doesn’t hurt that Rep. Emmer is the House Majority Whip and that this bill neatly fits President Trump’s agenda.
 
So there is plenty of reason to be hopeful Americans will be permanently protected from a surveillance currency. But well-crafted legislation alone won’t prevent the federal bureaucracy from expanding financial surveillance, as it has done on many fronts. PPSA urges civil liberties groups and Hill champions of surveillance reform, of all political stripes and both parties, to unite behind this bill.


One Billion Faces: Meta Reminds Us What Facebook Is About – Collecting Our Data

3/9/2025

 
​We’re not sure which is most disconcerting: that Meta has a division named Global Threat Disruption, that their idea of said global threats includes deepfake celebrity endorsements, or that this has become their excuse to reactivate the controversial facial recognition software they shelved just three years earlier (so much for the “Delete” key).

Meta has relaunched DeepFace to defend against celebrity deepfakes in South Korea, Britain, and even the European Union. “Celeb-baiting,” as it’s known, is where scammers populate their social media posts with images or AI-generated video of public figures. Convinced that they’re real – that Whoopi Goldberg really is endorsing a revolutionary weight loss system, for example – unwitting victims fork over their data and money with just a few clicks. All of which, according to Meta, “is bad for people that use our products.”

Celeb-baiting is a legitimate problem, to be sure. We’re no fans of social media scammers. What’s more, we know full well that “buyer beware” is meaningless in a world where it is increasingly difficult to spot digital fakes. But in reviving their facial recognition software, Meta may be rolling out a cannon to kill a mosquito. The potential for collateral damage inherent in this move is, in a word, staggering. Just ask the Uighurs in Xi’s China.

Meta began tracking the faces of one billion users in 2015. Initially, it didn’t bother to tell people the technology was active, so users couldn’t opt out. Citing Meta’s sleight of hand, as well as its own strict privacy laws, the EU cried foul and banned DeepFace from being implemented.

But that was years ago … and how times have changed. The privacy-minded Europeans are now letting Meta test DeepFace to help public figures guard against their likenesses being misused. But can regular users be far behind? Meta could rebuild its billion-face database in no time.

For its part, the U.K. is courting artificial intelligence like never before, declaring that it will help unleash a “decade of national renewal.” Even for a country that never met a facial recognition system it didn’t love, this feels like a bridge too far.

We have written about the dangers, both real and looming, of a world in which facial recognition technology has become ubiquitous. When DeepFace was shelved in 2021, it represented an almost unheard-of reversal, in effect putting the genie (Mark Z, not Jafar) back in the bottle.
​
That incredibly lucky bit of history is unlikely to repeat itself. Genies never go back in their bottles a second time.


Director Tulsi Gabbard Stands Up to UK “Snoopers’ Charter” to Defend “the Constitutional Rights of U.S. Citizens”

2/27/2025

 
​A letter from Tulsi Gabbard, the new director of national intelligence, in response to a recent letter from Sen. Ron Wyden (D-OR) and Rep. Andy Biggs (R-AZ), is a good sign that the new boss is not the same as the old boss.
 
What is most remarkable about Director Gabbard’s letter is that it exists at all, and that it came promptly. Many letters from Members of Congress in the past seemed to disappear into interstellar space. Or, when the government did deign to answer them, it was often with overcautious double-speak that avoids promises, commitments, or even judgments.
 
Gabbard’s reply to Sen. Wyden and Rep. Biggs is prompt, direct, and actually responsive to the concerns of these two critics of surveillance abuse. She speaks directly about the secret order issued by the UK Home Secretary instructing Apple to create a back door capability in its iCloud feature that would allow the British government to access the personal data of any customer in the world.
 
Gabbard writes that the UK government did not inform her office of this order, which seems like an astonishing breach of protocol for a “Five Eyes” ally with which the United States shares mutual intelligence. Gabbard refers to the UK’s Investigatory Powers Act of 2016, also known as the “Snoopers’ Charter,” which allowed London to gag Apple from voicing its concerns, even secretly with the U.S. government.
 
As a result of the UK’s pressure on Apple, Gabbard says she has:
  • Assigned a senior intelligence community officer to work with the director’s Office of Civil Liberties, Privacy, and Transparency, as well as the office that deals with external partners, to discover all the implications of the UK’s secret surveillance order.
 
  • Dispatched government lawyers to investigate the secret order in light of the bilateral CLOUD Act agreement. That agreement, Gabbard writes, forbids the United Kingdom from issuing “demands for data of any U.S. citizens, nationals, or lawful permanent residents (‘U.S. persons’), nor is it authorized to demand the data of persons located inside the United States.”
Gabbard further writes:
 
“Any information sharing between a government – any government – and private companies must be done in a manner that respects and protects the U.S. law and the Constitutional rights of U.S. citizens.”
 
She closes her letter by referring to obligations to protect “both the security of our country and the God-given rights of the American people enshrined in the U.S. Constitution.”
 
Missing from Director Gabbard’s letter is the oblique and lawyerly tone of past administrations. We applaud Gabbard for her responsiveness and encourage her to continue to break with her predecessors in a new spirit of openness and a real concern for Americans’ constitutional rights.


