“The progress of science in furnishing the government with means of espionage is not likely to stop with wire-tapping.” - Louis Brandeis, 1928

Protecting privacy in the Information Age was always going to be a tough proposition. Protecting privacy in the era of generative AI? Without the proper safeguards on your part, it is nigh unto impossible. Every entry you make in ChatGPT could surface in public due to a subpoena or a warrant. So when ChatGPT asks you (cue the Viennese accent) “how do you feel about your mudder?” your response may well be read by an FBI agent or by a prosecutor in open court. Yet this technology is being used by some in exactly that way – as a therapist.

Mostly hoping that no one would notice, ChatGPT parent OpenAI recently published a mea culpa of sorts, trying to “sorry/not-sorry” its way through the bad PR it’s received as a result of users harming themselves and others. Because “people using ChatGPT in the midst of acute crises” hasn’t gone well, OpenAI will now route to human reviewers any conversations in which ChatGPT users threaten harm to others (another privacy can of worms). OpenAI may ban such accounts, but it may also refer the matter to law enforcement.

Generative AI is not a therapist. It is not a counselor. It is not a parent, a minister, a rabbi, a teacher, or a school administrator. AI isn’t even anyone’s friend, much less a lover. It is a very bad substitute for all of these utterly human roles. We misuse it at our peril.

But generative AI is something else as well – a profitable branch of data science that corporations, educational institutions, governments, law enforcement agencies (and scammers!) are using to collect vastly more data about employees, customers, students, citizens, and future victims of criminal schemes. To the extent that we use it at all, we should be exceedingly wary of what we share. It is not, nor has it ever been, private. Americans have never been more surveilled than we are at this moment.
Before generative AI, the surveillance apparatus was proceeding more or less in a linear fashion, like a twin-engine prop on a steadily rising course. That prop plane is now a supersonic jet thanks to generative AI. “Safety” is one of the many traps that the era of generative AI is increasingly setting for matters of privacy. When our fundamental right to be let alone (to quote Justice Brandeis) is traded away these days, it is most often done “in the name of” some noble-sounding cause – safety, national security, you name it. Until law catches up to reality, you would be well advised to be very careful with any private information you share with AI advisors like ChatGPT, especially if it is about your mother.

Our Senior Policy Advisor and former U.S. Congressman, Bob Goodlatte, explains in Tech Policy Press why it’s necessary to protect encryption, which ensures that emails, texts, and other communications are kept private between sender and receiver. The world currently faces numerous cybersecurity threats, and every piece of data from medical records to trade secrets is a potential target. Encryption not only protects industry, but it also protects journalists from malevolent governments and victims from their abusers. Yet the UK has chosen to pursue a disastrous policy of attacking encryption and the privacy it enables by requiring Apple to facilitate wiretapping of its users - even those that are, like the company, outside of the UK. The US government shouldn't be complicit in this power grab by continuing to give the UK authority to enforce surveillance orders against US tech companies, as it currently does under the 2018 CLOUD Act.

Watching the Watchers: Pakistan’s Total Surveillance State – Can’t Happen Here, Right? Right?
9/9/2025
With help from vendors in the United States, Canada, Europe, and China, Pakistan has relied on a global supply chain to create comprehensive and sophisticated surveillance and censorship tools, according to a new report released by Amnesty International on Tuesday. Agnès Callamard, Secretary General of Amnesty International, said: “Pakistan’s Web Monitoring System and Lawful Intercept Management System operate like watchtowers, constantly snooping on the lives of ordinary citizens. In Pakistan, your texts, emails, calls and internet access are all under scrutiny. But people have no idea of this constant surveillance, and its incredible reach. This dystopian reality is extremely dangerous because it operates in the shadows, severely restricting freedom of expression and access to information.”

Amnesty International provides a real-life example: a journalist who is responding to constant surveillance with self-censorship. The journalist describes what happened after he published a story on public corruption: “After the story, anyone I would speak to, even on WhatsApp, would come under scrutiny. [The authorities] would go to people and ask them, ‘Why did he call you?’ [The authorities] can go to these extreme lengths … Now I go months without speaking to my family [for fear they will be targeted].”

Keep in mind that virtually every bit of data that Pakistan extracts from its citizens is gathered with technologies sold by U.S. companies and used by our federal government. In our country, internal agency procedures, ethical commitments, and laws prevent such ready and rampant surveillance. But all the elements are in place. Through purchased data and FISA Section 702, the federal government can already view virtually anything it wants without a warrant. Pakistan is a reminder of just how perilously close we are to an American surveillance state.
“Ethics is knowing the difference between what you have a right to do and what is right to do.” - Justice Potter Stewart

Local police departments are spending billions of dollars on surveillance technology, from cameras, to cell-site simulators, to drones. Customers in blue range from the New York Police Department, which has invested $3 billion in surveillance in recent years, to small-town departments willing to fork out tens of thousands. With so much money sloshing around, it is reasonable to wonder how careful local officials are in maintaining clear boundaries between customer and vendor. Events in Atlanta suggest that sometimes these boundaries are, at best, blurry.

Marshall Freeman is the Chief Administrative Officer of the Atlanta Police Department (APD) and a former leader at the non-profit Atlanta Police Foundation. Together, the Foundation and the APD devised Connect Atlanta, a camera network that makes Atlanta one of the most surveilled cities per capita in the United States.

The Atlanta Community Press Collective (ACPC) was combing through public records when they noticed Freeman’s name on a Conflict of Interest Disclosure Report. Citing “financial interest” in Axon, a law enforcement tech company, he recused himself from contract-related “matters and dealings” that could impact Axon financially. “I have interest in a company that is currently in talks with Axon around acquisition and investment,” he wrote, without specifics. ACPC discerned that Freeman’s unnamed stake was in a company called Fusus, whose software fuels the Connect Atlanta surveillance system. Axon acquired it for $240 million barely a week after Freeman filed his disclosure. More red flags followed.
Freeman was the only public official quoted in Axon’s press release announcing the acquisition: “I wholeheartedly encourage all agencies to embrace this cutting-edge technology and experience its transformative impact firsthand.” Using open records requests, ACPC reports it also found emails indicating that Freeman “boosted Fusus and Axon products to other agencies in Georgia and around the U.S.” on multiple occasions post-disclosure.

When the reporting first surfaced, APD responded tersely: “The appropriate ethics filings were submitted.” A few weeks later, though, the City of Atlanta Ethics Office begged to differ, announcing an investigation into Freeman’s post-recusal behavior. Fifteen months later, the body released an official report totaling 313 pages. The findings suggest that Freeman’s relationship with the camera-pushing Fusus dated back to his days at the Atlanta Police Foundation, a relationship he brought with him to APD and continued to nurture. According to The Guardian, he consulted for Fusus for at least a year after joining APD, “crisscrossing the country in person and by email while repping the company, including conversations with police departments in Florida, Hawaii, California, Arizona and Ohio.” All told, the Ethics Office found 15 separate matters in which Freeman used his official position as an influencer for Axon and Fusus. For at least part of this time, he served on the board of two Fusus subsidiaries in Virginia and Florida – a fact he did not disclose to ethics investigators.

Writing in The Intercept, Timothy Pratt and Andrew Free detail how Freeman’s impropriety (the “appearance” of which is the only thing he’s admitted to) is making all of us less free – taking the Great Atlanta Mass Surveillance Experiment and replicating it from sea to monitored sea: Seattle, Sacramento, New York City, Omaha, Birmingham, Springfield, Savannah, and counting. Freeman may be an exception. But he might be the rule.
It doesn’t matter, given the outsized influence even one public official can have when it comes to the proliferation of the police surveillance dragnet in the United States. Then again, by the time robust surveillance systems get to smaller, heartland cities like Lawrence, Kansas, it may already be too late. At the very least, police procurement processes would benefit from tighter rules, like those that govern Pentagon officials when they assess contracts.

We often speak of surveillance technology. Now we have surveillance art, modernist sculptures that watch you back whenever you admire them. We’re a bit forgiving when the technology is used as a form of home security, since it is defensive in nature rather than invasive (and mass in scale). But the melding of art and surveillance is a trend that ought to give anyone pause.

Alyn Griffiths of Dezeen reports on Sculptural Surveillance by Danish studio Swift Creatives. Marketed to homeowners, these designs are bendable silly straws that you can customize into landscape art. Slender, looping, and brightly colored, they are meant to be noticed. In the words of Swift Creatives co-founder Carsten Eriksen, “Our concept for this collection aims to challenge the conventional notions of home surveillance, transforming functional devices into objects of beauty that homeowners can proudly display.” They are, he says, “aimed to stand out.”

Driven by residents/owners themselves, such approaches to home security are respectful and a far cry from the techniques high-tech burglars have been using. They also represent a far safer choice than Chinese-made junk products masquerading as security devices – which can be used to watch their owners instead of the other way around. This has the feeling of the beginning of a trend. Perhaps the next time you get the creepy feeling that the eyes in a painting or a Rodin sculpture are following you, you might be right. Philip K.
Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now comes something simpler – and just as insidious, from Australia. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This was after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral. Despite the students’ misbehavior, the state of Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the 2014 Privacy and Data Protection Act. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports: “Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience. To its credit, the university cooperated with the investigation and promised reforms.
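This kind of tracking requires no exotic technology. Campus Wi-Fi controllers already log which device associated with which access point and when, and each access point’s physical location is known to the network operator. Cross-referencing those logs with a time and place of interest takes only a few lines of code. A minimal, purely illustrative sketch in Python – the log format, MAC addresses, and access point names here are invented, not drawn from the university’s actual systems:

```python
from datetime import datetime

# Hypothetical Wi-Fi association log: (device MAC, access point, timestamp).
# Real controllers retain logs like this for routine network management.
ASSOC_LOG = [
    ("aa:aa:aa:aa:aa:01", "AP-Library-3F", datetime(2024, 7, 15, 14, 5)),
    ("aa:aa:aa:aa:aa:02", "AP-Quad-North", datetime(2024, 7, 15, 14, 10)),
    ("aa:aa:aa:aa:aa:03", "AP-Quad-North", datetime(2024, 7, 15, 14, 20)),
    ("aa:aa:aa:aa:aa:02", "AP-Gym-1F",     datetime(2024, 7, 15, 16, 0)),
]

def devices_present(log, ap, start, end):
    """Return the set of device MACs seen on access point `ap` in [start, end]."""
    return {mac for mac, seen_ap, ts in log
            if seen_ap == ap and start <= ts <= end}

# "Who was near the quad during the sit-in?"
suspects = devices_present(
    ASSOC_LOG, "AP-Quad-North",
    datetime(2024, 7, 15, 14, 0), datetime(2024, 7, 15, 15, 0),
)
print(sorted(suspects))  # → ['aa:aa:aa:aa:aa:02', 'aa:aa:aa:aa:aa:03']
```

The hard part is not the lookup but the linkage: institutional Wi-Fi logins are tied to named accounts, so each MAC address resolves to a person in one more join – which is exactly why undisclosed secondary uses of such logs matter.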
But let’s be fair, the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go. Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof. What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home. With the possible exception of pacemakers, of course.

Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:
As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong? Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated. By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.

Will Congress Follow Montana by Closing the Data Broker Loophole?

Twenty states have enacted major consumer data privacy laws. When will Washington, D.C., wake up and restrict the open season on Americans’ personal information at the federal level? California lit the fuse in 2018, passing laws that set limits on how businesses collect and sell consumers’ data. This year, new privacy laws have taken effect, or soon will, in New Hampshire, Delaware, Iowa, Nebraska, New Jersey, Tennessee, Minnesota, and Maryland.

Montana may offer the best model for federal action. The Montana Consumer Data Privacy Act, which went into effect late last year, mirrors many other state laws, while giving strong, clear rights. In Montana, consumers have the right:
Like many other states, the Montana law also adds special protections for minors, requiring consent for data sales and targeted ads to children aged 13 to 16. But where Montana truly shines is by closing the notorious “data broker loophole.” That loophole lets government agencies dodge the Fourth Amendment’s warrant requirement by simply buying consumers’ data. Montana now flatly bars law enforcement from purchasing sensitive electronic data – such as electronic communications metadata and precise geolocation information – without a warrant.

The federal government has no such restraint. Agencies from the FBI and IRS to the Department of Homeland Security, and the Department of Defense, routinely buy and access Americans’ sensitive personal data. Government lawyers insist this is fine because we all “agree” through terms of service – though almost no one reads them, and they never warn consumers that third-party data brokers might be selling their data to the FBI. As more states pioneer privacy laws, the pressure builds. An intense debate on the data-broker loophole in Congress is inevitable. Lawmakers would do well to take a cue from one of Montana’s favorite sons, Gary Cooper, who said: “One nice thing about silence is that it can’t be repeated.”

“I don’t think you can make it off the record once you’ve said it – you can’t call dibs after the fact.” - Journalist Philip Corbett

Wearables are defined by their comfort. But there is a lot about wearable technology that is distinctly uncomfortable, if not Orwellian. Wearable computers hit the mainstream with the introduction of Fitbits and smartwatches in the 2010s. Now, says The San Francisco Standard, the rise of artificial intelligence is adding spy tech to the wearable computing family tree. The newest devices are akin to smartglasses but take that technology’s most invasive feature – recording the environment – and turn the creep factor up to 11.
The new wearables are stylish and somewhat stealthy and designed to do two things very well: listen and remember. They come in the form of pendants, necklaces, lapel pins – or, in a twist, might even look like a Fitbit or smartwatch. But they are all recording devices capable of capturing the wearer’s every conversation and meeting, then transcribing them, and – the pièce de résistance – using AI to organize, analyze, and mine them for insights (think personal assistant on steroids, or maybe your very own opposition researcher). In some cases, the devices may only transcribe conversations rather than record them, but they’re still listening and processing conversations, so such distinctions are hardly comforting.

The San Francisco Standard suggests that everyone in Silicon Valley should assume that everything they say, especially at work, is being recorded. Which means the rest of America – and its kitchen tables, coffee houses, and classrooms – won’t be far behind. One venture capital partner told the Standard’s writers that she knew a fellow VC who records all in-person meetings “without telling the other meeting participants. It's an invasion of privacy and I seriously disapprove of it." Then, presumably referring to herself and the rest of us would-be audience members, she added, “Of course, this is a horrible way to live your life.”

In terms of the privacy concerns raised by this new generation of wearables, Julian Chokkattu of Wired cracked the code. Earlier generations of recording devices and software “at least required active engagement like a tap or a wake word to activate their ability to eavesdrop.” For the most part, the new devices are passive and always on, which places responsibility for gaining consent on the instigator.
In other words, “Fox, meet henhouse.” In the research, there are lots of names for the chilling effects that even consensual recording has on conversations, but one of the keenest is “spiral of silence.” People will varnish the truth, if they bother to speak it at all. They will hold back, self-censor, even shut down. As for the possible effects on creativity that this sort of tech might have – as in a brainstorming session, for example – we invite you to judge for yourself. If you think all of this seems like a claim just waiting for a plaintiff, we agree: It’s a one-way express ticket to litigation city. But as with most things AI, the laws governing them are in their infancy and court rulings sparse.

One corner of Silicon Valley is already fighting back, though: Confident Security is developing Don’t Record Me, a browser plugin that could potentially detect illicit recordings and disrupt them. What about audible cues or flashing lights to indicate that one of these devices is collecting data? Don’t count on it. One entrepreneur told Wired, in effect, “That would drain too much battery life.” Another claims that all you have to do is think about recording to activate his product. Thankfully, for that mode to work, the wearable has to be affixed to the side of your temple with medical tape. But don’t expect other forms of personal surveillance to be so obvious. All the more reason for requiring disclosure for private recording and warrants when government agents listen in on what we say.

If you don’t like the feeling of being followed, we recommend avoiding Stockholm, Dubai, Almaty, and – this just in – Mexico City. All are major destinations under constant and growing surveillance by public cameras.
Izabelė Pukėnaitė at Cybernews reports that Mexico’s capital is now launching a mass surveillance CCTV plan with the suitably creepy name of “Eyes That Look After You.” Let’s break that down: 30,000 new cameras, 15,200 new poles, a $19 million budget, and a whole lot of connectivity. Each pole will have two cameras, one fixed and one capable of tilting/zooming. All of this comes as Mexico’s ruling Morena party moves to eliminate numerous independent regulatory and oversight agencies. One of those was a body that functioned as an ombudsman for the population, with the power to force government departments to hand over information citizens had filed requests for – a sort of Mexican version of the Freedom of Information Act. As is always the case when such moves are enacted, the powers that be resort to doublespeak. “There will be more transparency,” declared Mexican President Claudia Sheinbaum, adding, “the public will be able to easily review the functioning, the spending, and everything the Mexican government does.” An equally disturbing maneuver is the Morena party’s radical overhaul of the country’s judicial system that critics say could easily lead to unabashed one-party rule. Color us skeptical, but we’re having a hard time seeing how a party that is voraciously concentrating its own power is going to use a new mass surveillance system to somehow make people freer – especially a camera system that the cartels have already used to target and kill informants. These “eyes” aren’t designed to “look after” anyone. “Look for” is more like it, which, thanks to new legislation mandating a single biometric ID for all Mexican citizens, will soon be easier to do than ever. It is not hard to imagine these systems being used by Morena-controlled officials for political surveillance. You might be tempted to think at least that could never happen here. It already is. 
Washington, D.C., beats out Mexico City as the global city with the most government-controlled cameras per capita. Oh well, Ojos que no ven, corazón que no siente – What the eye doesn’t see, the heart doesn’t grieve.

“I think the very word stalking implies that you're not supposed to like it. Otherwise, it would be called 'fluffy harmless observation time'.” - Author Molly Harper

TikTok was already a privacy nightmare:
To this troubling list we can now add the following: In violation of the platform’s own policies, sellers are using TikTok to market GPS trackers to stalkers, reports Rosie Thomas of 404 Media. “Unlike AirTags,” one vendor boasts, “this thing doesn’t make a sound, doesn’t send alerts, she will never know it’s there.” In the comments section of a similar ad, one user bragged, “I bought some and put it on cars of girls I find attractive at the gym.” Lest there be any doubt, Thomas’ report quotes Eva Galperin at the Electronic Frontier Foundation: “This is absolutely being framed as a tool of abuse.” Galperin, co-founder of a non-profit that keeps tabs on such products, categorizes them broadly as “stalkerware.”

The central legal and moral issue underlying stalking, as with all violations of privacy, is consent. Expert Market’s page summarizing GPS tracking laws by state underscores the point: The word “consent” appears in these laws 115 times. When asked about the viral proliferation of ads for these tracker tools, TikTok told 404 Media that they “prohibit the sale of concealed video or audio recording devices on our platform.” And yet, Thomas and her colleagues continued to find such ads every time they looked. Which, of course, should come as a surprise to absolutely no one. This is just one more good reason why President Trump should cease suspending the law requiring TikTok to be sold or shuttered.

America’s enemies aren’t storming our shores with tanks and planes – they’re breaking into our email, phone, and data systems. And right now, we’re making their job too easy. The U.S. Senate can toughen up America’s defenses by passing the Lummis-Wyden amendment (S. Amdt. 3186) to the 2026 National Defense Authorization Act. This bipartisan fix would finally force the Pentagon to use secure, encrypted communications – and end its costly dependence on a handful of Big Tech vendors.
The Scale of Attacks

In 2023, Chinese hackers broke into Microsoft-hosted government email accounts, stealing 60,000 messages from the State Department alone. A year later, another Beijing-backed group hacked into AT&T and Verizon, tapping phones of Americans who included presidential candidate Donald Trump and then-Sen. J.D. Vance. But Vance’s conversations were kept safe. How? He relied on Signal, the end-to-end encrypted app that even the hackers couldn’t crack. The obvious takeaway is that without end-to-end encryption, our most sensitive communications are one hack away from the front page of Beijing’s intelligence briefings.

The Lummis-Wyden Fixes
Why It Matters

Our military today is stuck in walled gardens built by giant tech firms that all too often proved eminently hackable. That’s bad for taxpayers and disastrous for national security. Hackers don’t need to break into every office at the Pentagon – they just need to knock down the door of one weak provider. The Lummis-Wyden amendment puts a lock on those doors.

Congress Must Choose Security

Congress can keep letting foreign spies read Cabinet-level emails and tap presidential phone calls, or it can finally demand that the Pentagon use the best tools available. This amendment is a wake-up call that we can’t defend the country with outdated software. Encryption and competition would at least give our country a fighting chance to keep China and other bad actors out of our business. PPSA calls on the Senate to pass the Lummis-Wyden Amendment to stop giving hackers the upper hand. This measure will better protect our service members, the American homeland, and the private deliberations of our leaders.

Where you drive is personal. So is what you click on and who you communicate with. Combine the two, and suddenly a revealing picture emerges of your political, romantic, financial, and religious beliefs and activities – in short, a comprehensive dossier of your private life. That appears to be what is happening with Flock, which is mashing up its camera surveillance of millions of drivers in 5,000 communities across the United States with digital information gathered on us by data brokers. According to 404 Media, the good news is that after internal deliberations, Flock told its employees in May it would not merge stolen dark web data with information from its network of license plate readers (LPRs). Joseph Cox of 404 Media reported that in a meeting, a Flock supervisor told employees that after a “policy review process,” the company’s new search tool Nova would not incorporate hacked data from the dark web. So far, so good.
Dealing in stolen merchandise is never a good look for a company. Flock, however, announced that it will combine “public records data, Open Source intelligence, and license plate reader data” for law enforcement and other customers. This marks a policy shift. Flock has long insisted that its license plate readers do not collect personally identifiable information, claiming they merely provide law enforcement with a way to track cars tied to crimes. But Jay Stanley of the ACLU reports that the company now plans to plug its systems into commercial data brokers offering “people lookup” services.

ACLU’s Stanley writes: “In the 1970s, after some government agencies were found to be building dossiers on people who aren’t suspected of involvement in crime like the East German Stasi, Congress enacted the Privacy Act banning agencies from such recordkeeping. Yet the ethically shady and frequently inaccurate data broker industry does basically the same thing, and when law enforcement becomes a customer of those data brokers, it represents an end-run around the law. By tying its LPR data together with data brokers, Flock is effectively automating and scaling the end run around our checks and balances that law enforcement data broker purchases represent …

“Imagine that a police officer stood on your street writing detailed notes about you every time you drove or walked by them. All the details about what your car looks like (make, model, color, distinguishing characteristics, bumper stickers, etc.), as well as details about visible occupants and pedestrians – how many, at what time, their activities, demographic data, what they are wearing, attributes they may have such as a beard, hat, tattoo, or T-shirt, and what that hat, T-shirt, or tattoo might say.
Now imagine that there is an army of police officers doing this on every block.” Thus, algorithms can now seek patterns in vehicle movements to identify and alert law enforcement to drivers who are “suspect.” Stanley pinpoints why this approach clashes with both the letter and the spirit of the Fourth Amendment. He writes that there is a big difference between “providing tools for officials to use in investigating suspicion” and “generating suspicion.” The fusion of your purchased data with your movements could do exactly that. One day, something as ordinary as making a right on red or a casual U-turn could transform you from a routine driver into a suspect.

Larry Niven, the acclaimed science-fiction writer, once drolly observed, “I do suspect that privacy was a passing fad.” It certainly seems so today, with networked Ring cameras on every door linked to public and private CCTV, license plate readers, and government agencies buying up our digital lives from data brokers… all of it potentially connected to AI and facial recognition software. Even inside our home, drones can look through our windows. Thermal imaging cameras in the hands of police can penetrate walls to watch us move around in our living rooms and bedrooms.

But at least there is one place where surveillance cannot penetrate, one last refuge of absolute, inviolable privacy – the inside of our skulls. We are free to think any thought, sacred or profane, sublime or silly, without fear of detection or punishment by any human authority. But maybe not for much longer. The science journal Cell reports that a computer system has been trained to decode brain waves from people who silently move their mouths while mentally sounding the words to themselves. The signal from the brain is then translated into speech in real time on a computer screen with an error rate of 26 percent to 54 percent.
Annika Inampudi in Science reports that this technology, as it is refined, will be a godsend to speech-impaired people paralyzed by strokes or neurological conditions such as amyotrophic lateral sclerosis (ALS). To protect test subjects from blurting out private, inner speech, users can be given unique, nonsense phrases like “chitty chitty bang bang” to cue the device to read their thoughts only when they want it to.

It is this latter development that gives us pause. The fact that a safeword is needed to defend against unwanted exposure of thought is concerning. Also concerning is that scientists have had significant success decoding thought even when the subject is not silently mouthing the words he or she is thinking about. The system at times can read mere inner thoughts.

At a time when digital technology evolves on fast-forward, it is not too early to be concerned about how this technology might be abused. After all, a few years ago AI couldn’t pass the Turing test. Now ChatGPT is regularly writing entertaining short stories, poems with striking imagery, and student papers that get A’s from naïve professors. The same progression could enable mind-reading technology to rapidly allow authorities to dip into people’s skulls against their will. Imagine, for example, how this technology might be used in interrogations. In this country, at least, the Fifth Amendment prohibition against self-incrimination should make results from such mind-reading inadmissible. But in professions in which polygraphs are routine, from law enforcement to intelligence and some retail positions, it is easy to imagine how such technology could be abused.

Overall, speech-decoding technology is a boon for handicapped people who are desperate to communicate. It is a heartening and praiseworthy development that scientists – often caricatured as amoral agents of progress – are diligently thinking of procedures to compartmentalize the reading of thoughts only when subjects permit it.
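In software terms, the cueing mechanism the researchers describe is a gate on the decoder’s output: nothing is transcribed until the unlock phrase appears in the decoded word stream. A toy sketch in Python – the phrase comes from the article, but the word-stream interface is invented for illustration (the real system operates on neural signals, not strings):

```python
UNLOCK = ("chitty", "chitty", "bang", "bang")  # cue phrase cited in the article

def unlocked_transcript(words, unlock=UNLOCK):
    """Return only the decoded words spoken after the unlock phrase.

    If the cue phrase never appears, nothing is transcribed at all --
    the user's inner speech stays private by default.
    """
    n = len(unlock)
    for i in range(len(words) - n + 1):
        if tuple(words[i:i + n]) == unlock:
            return words[i + n:]  # the cue itself is swallowed
    return []

decoded = ["private", "thought", "chitty", "chitty", "bang", "bang",
           "turn", "on", "the", "lights"]
print(unlocked_transcript(decoded))        # → ['turn', 'on', 'the', 'lights']
print(unlocked_transcript(["no", "cue"]))  # → []
```

The privacy-relevant design choice is the default: the gate stays closed unless the user deliberately opens it – the opposite of the always-on wearables discussed above.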
Still, this story should give us pause. Something to think about… “Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.” – Ian Malcolm, Jurassic Park

Just a quick update about the ever-expanding toolkit of the technocratic mass surveillance state: The new kid on the block is GeoSpy, which can examine a photograph and extrapolate your location in seconds. It claims to accomplish this by using only visual data in the image rather than metadata. From a purely technical perspective, that’s a big achievement. From a privacy standpoint, it’s a nightmare. According to an account first reported by 404 Media and summarized by Alex Hively of SlashGear, the original open-source version of GeoSpy was quickly removed when it became clear that it could be used to stalk people. Company founder Daniel Heinen later admonished Joe Rogan and guests in a tweet reminding them that GeoSpy is “only for Law Enforcement and Government” use (which, at the time of the tweet, had recently become true).

That GeoSpy is now “only for” law enforcement and government use is cold comfort. It seems an all-too-familiar narrative, reminding us of Clearview AI’s similarly reckless approach to the ethics of identification technology. By the end of 2021, the facial recognition startup had scraped ten billion images from the web and social media, providing agencies with a powerful new tool to instantly identify us and aid in the quick construction of dossiers of our beliefs, activities, and relationships. And now, thanks to breakneck developments in technology, the government can both identify us and locate us.

Consider this statement from GeoSpy founder Heinen: “My job as a leader in my space is to build the best technology that customers are asking for. It's not my job to play the ethics game because our elected officials will eventually figure that out.
I have full faith in the American people to decide who to elect and what to vote on.” (If this were a video, here is where we’d cut away to a dark screen and the sound of crickets.) We won’t belabor the point, as our readers know full well where all of this is likely to lead. But we will quote ourselves from a related article decrying the surveillance capabilities of drones and satellites: “What is cutting-edge technology today will be standard tomorrow. This is just one more way in which the velocity of technology is outpacing our ability to adjust.” With the rise of GeoSpy, we now have one more reason for Congress and the states to hit pause and reassert the privacy guarantees inherent in the Fourth Amendment.

One last thing: Don’t assume you’re safe just because the picture you posted was taken indoors. It appears GeoSpy has cracked that nut too, having discovered that its visual model can learn “regional architectural cues.” Silly us, we thought all apartment kitchens looked the same. As Malwarebytes advises, “It’s just become even more important to be conscious about the pictures we post online.”

In Washington, D.C., they call it the “data broker loophole.” This is the legal maneuver by which a dozen federal agencies, ranging from the IRS to the FBI, Department of Homeland Security, and the Pentagon, purchase records of Americans’ personal digital activity from third-party data brokers. What is this loophole? With a straight face, the government claims that while the Fourth Amendment forbids “unreasonable searches and seizures” of our personal effects, nowhere does the Constitution forbid the government from opening its wallet and simply buying our data. And to be fair, we all routinely click the “agree” boxes that allow these transfers when scanning social media platforms’ long and hard-to-read terms of service. This is still disingenuous at best.
The digital trails we leave online – our communications, the identities of our friends and associates, our personal financial, romantic, and health secrets, not to mention our search histories – reveal information that can be more intimate than a diary. Americans are noticing this violation of their privacy. A recent Ipsos poll finds that roughly 90 percent of Americans say it is not acceptable for private data brokers to sell our personal data to the government. Congress is certain to soon turn to legislation that will require the government to obtain, as the Constitution requires, a probable cause warrant before inspecting our data. In the meantime, if you want more background on the nature, extent, and abuses of the data broker loophole, here are some useful resources:

1. “What Are Data Brokers, and How Do They Work?” (Proton), June 20, 2025. A detailed primer on data brokers and the risks posed to consumers, including the sale of such data to government agencies without warrants.

2. “Anyone Can Buy Data Tracking U.S. Soldiers and Spies to Nuclear Vaults and Brothels in Germany” (Wired), Nov. 19, 2024. Despite what has to be the most clickable headline in recent history, Wired presents a deep and substantive investigative report that reveals the extent to which the sale of personal data collected by our devices is putting Americans in uniform and national security at risk.
3. “A Continuing Pattern of Government Surveillance of U.S. Citizens” (Americans for Prosperity, James Czerniawski), April 8, 2025; see p. 4. Eighty percent of Americans agree that the government should “obtain warrants before purchasing location information, internet records, and other sensitive data about people in the United States from data brokers.” And yet federal agencies routinely buy our data, threatening our most basic constitutional rights.

4. “The Intelligence Community Plan to Make It Easier to Buy All Your Data” (Project for Privacy and Surveillance Accountability), June 2, 2025. The Office of the Director of National Intelligence has instituted a plan to make sure Americans’ private data is no longer decentralized, fragmented, siloed, overpriced, and limited – literally everything you might hope your personal data would actually be.

5. “Montana Becomes First State to Close the Law Enforcement Data Broker Loophole” (EFF), May 14, 2025. Montana is the first state to close the data broker loophole, preventing law enforcement from buying personal digital data – like location, communications, and biometrics – without a warrant. Under SB 282, such data can only be accessed with a warrant, user consent, or an investigative subpoena. The law goes into effect Oct. 1, 2025.

6. “Federal Government Circumventing Fourth Amendment by Buying Data From Data Brokers” (Criminal Legal News), April 15, 2025. Summarizing earlier reporting from Reason and the Wall Street Journal, the article focuses on efforts to dodge Carpenter v. United States (2018). Federal agencies routinely purchase commercial cellphone data – which tracks individuals’ movements – without warrants, skirting Carpenter, which requires a warrant for such data.
7. “FISA and the Second Amendment: Gunowners Beware” (Cato Institute), Feb. 1, 2024. If you’re a gun owner and use Apple products, you should be deeply concerned about the ability of federal law enforcement agencies to get a lot of data on you – without ever having to get a warrant.

8. “EPIC White Paper Finds Gaps in State and Federal Privacy Law Coverage of Data Brokers” (EPIC), July 29, 2025. This report argues that data brokers exploit legal loopholes in the Fair Credit Reporting Act (FCRA) and Gramm-Leach-Bliley Act (GLBA) to avoid compliance with modern privacy laws.
9. “Government Purchases of Private Data” (Wake Forest Law Review, 59:1), April 2024. This paper questions the widespread assumption that the Fourth Amendment can never apply to commercial purchases – an assumption grounded in the general rule that police officers can purchase an item available to the public without constitutional restriction.
10. “Federal Acquisition of Commercially Available Information” (POGO), Dec. 16, 2024. The Project On Government Oversight (POGO) warns that federal agencies' unchecked use of commercially available information (CAI), including sensitive personal data purchased from brokers, circumvents Fourth Amendment protections. POGO urges the Office of Management and Budget to end warrantless surveillance practices, increase transparency, and implement strong regulations. The comment highlights risks to privacy and civil liberties, especially for marginalized communities, and documents past abuses by agencies like DHS, ICE, and the FBI.

Finally, if you are interested in solutions, start with the Fourth Amendment Is Not For Sale Act, which passed the House of Representatives last year. If enacted, this measure would require the government to obtain a warrant before buying Americans’ personal information. PPSA looks forward to this or some similar legislation being introduced in the 119th Congress.

Director George A. Romero said of his horror masterpiece, Night of the Living Dead, that “if it doesn’t scare you, you’re already dead.” Section 215 of the PATRIOT Act – the “business records” provision – should at least concern you. This surveillance authority sunsetted on March 15, 2020, after Congress failed to renew it. And yet, somehow, it continues to roam the landscape. As it does, significant questions about how this oddly enduring authority is being used deserve answers.
Section 215 was the legal authority under which federal intelligence agencies obtained secret orders from the Foreign Intelligence Surveillance Court to review personal information from “tangible things.” This broad category could include location data, medical records, travel records, and more, in paper form or from electronic communications relating to any transaction. The FBI in the past used Section 215 authority to collect phone logs cataloging calls and texts, internet logs revealing the identities of people who visited particular web pages, and other sensitive data. After Congress prohibited bulk acquisition of records in 2015, Section 215 required agencies to use a “specific selection term” to narrow the scope of their investigations. The government uses “unique identifiers,” such as email addresses, to target individuals within collected data. Congress chose to let Section 215 expire in 2020, shutting it down entirely and requiring the government to use a more narrowly tailored authority called pen register/trap and trace orders. But five years after Section 215’s expiration, the program continues to operate as a zombie authority.
There is evidence that Section 215 is enjoying a robust afterlife. According to the most recent ODNI Statistical Transparency Report:
The many unanswered questions about Section 215’s afterlife activities cry out for oversight. Congress should require the government to answer:
The answers to these questions may be innocuous. But when a legal authority continues to produce such large and unexplained numbers five years after its expiration, Congress needs to start asking questions.

“Moral bankruptcy is common in this industry, but I rarely see a company so proud of it.” – Callie Schroeder, Electronic Privacy Information Center

Farnsworth Intelligence sells highly personal data on the cheap. Its business plan is as revolutionary as it is mercenary and brazen: positioning itself as a legitimate business while selling data previously brought to market in the twilight corners of the dark web. The realm in which this company operates is euphemistically known as “open-source intelligence,” or OSINT. Once upon a time, OSINT was primarily composed of publicly available data. But don’t be fooled. To quote PC World writer Michael Crider, “This is information apparently sourced directly from data breaches, stolen from companies and services in ways that just about every country considers a crime.” And it’s all repackaged to sell at various price points. To wit, 404 Media, whose Joseph Cox broke the story, bought a tiny slice of Farnsworth’s data wares for a mere $50 – all it needed, the outlet reports, to eventually mine the addresses of numerous identity theft victims. Perhaps that’s why, as journalists report, the company’s website says customers can find up-to-date addresses for debtors. Need data for your multi-million-dollar divorce case? Farnsworth can do that too. As for potential trade secret violations, corporate espionage, and the general use of stolen data in court, EPIC’s Callie Schroeder says stay tuned. There are likely statutes that apply to what Farnsworth is doing, but as with all things digital, judicial rulings have been inconsistent to date.
And don’t even get us started on the surreptitious value government agencies of all stripes will place on this kind of dark web data – it could be a warrantless surveillance extravaganza. But there is no denying that shamelessness sells. After the story broke, Farnsworth issued a “404MEDIA” promo code on LinkedIn to celebrate the fact that it “has been getting a lot of attention.”

“The future of AI is not about replacing humans, it's about augmenting human capabilities.” – Sundar Pichai, Google

After you read this, you’ll wish that students using AI to cheat was the biggest problem with the technology. Turns out, a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights. But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we wait for the outcome of a major lawsuit by Columbia’s Knight Institute designed to compel a school district to disclose the nature of its surveillance tech. Instead, we continue to read more headlines like this one by Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.” You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was culturally insensitive, contained a reference to feigned violence, and was also somewhat self-deprecating. It was, in other words, exactly the kind of crass but completely innocent sarcastic drivel you would expect from a teenager. The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours.
Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school. When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, as best we can tell, these are traditional AI systems – unthinking, rules-based programs that have absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context. Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch? That, of course, is a rhetorical question – something that we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea if any Shakespeare character ever uttered anything like that because we used AI to suggest it.)

We get that being proactive about student safety is critical. But monitoring what students type isn’t the right way to do it. Students type – and say – all kinds of tasteless statements because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place. This isn’t working, and it’s time for parents and school systems – and yes, the students themselves who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors designed to destroy privacy, creativity, and individual expression.
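The context-blindness of keyword matching is easy to see for yourself. The snippet below is a purely hypothetical sketch of how a naive keyword filter behaves – the keyword list and messages are our own illustrations, not the actual (and non-public) implementation of Gaggle or any other vendor:

```python
# A minimal, hypothetical sketch of a context-blind keyword filter.
# The keyword list is illustrative only, not any vendor's real rule set.
FLAGGED_KEYWORDS = {"slain", "kill", "die"}

def flag_message(text: str) -> list[str]:
    """Return flagged keywords found in a message, ignoring all context."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return sorted(words & FLAGGED_KEYWORDS)

# A student quoting Shakespeare trips the same wire as a genuine threat:
print(flag_message("O, I am slain!"))              # ['slain']
print(flag_message("This homework will kill me"))  # ['kill']
print(flag_message("See you at practice"))         # []
```

Nothing in this program distinguishes Polonius from a threat of violence; that judgment is exactly what a human in the loop has to supply.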
We would also remind the school administrators who surely mean well when they initially deploy such systems not to forget the cardinal rule of any AI system: Always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal with, perhaps, the addition of a school counselor – before anything gets reported to law enforcement.

Case v. Montana

In June, the U.S. Supreme Court granted the petition to hear Case v. Montana after PPSA filed the only brief supporting the Court’s review of a decision of the Montana Supreme Court. PPSA has now filed its brief on the merits of the dispute. We made it clear that Case v. Montana is a precious opportunity to restore the Framers’ original vision of sharp limits on exceptions to the Fourth Amendment. The Framers jealously guarded privacy. Exceptions to the warrant requirement – exigent circumstances like chasing a bank robber into his home – had to be so pressing (and so obvious) that not granting them would be unreasonable. We've since veered off course. A new doctrine introduced in the mid-20th century, “emergency aid,” has threatened to grow into a catch-all category, a Trojan Horse by which the Fourth Amendment is thoroughly subverted. The temptation for law enforcement (and the courts) to treat everything as an “emergency” has never been greater than in this always-connected, instant-gratification digital age. We therefore ask the Court to remind our institutions to take two deep breaths before brushing aside the Fourth Amendment. We told the Court:
Then as now, exigencies that permit warrantless searches of persons, homes, and property must be defined narrowly, specifically, and in ways that preserve the Court’s respect for what it has called the “privacies of life.” Lowering the standard for warrantless “home” entry lowers it for everything. Just because our effects are vastly more digital (and diffuse) today, we have no less a right to be secure in our personal effects and our very lives. We all know how Troy fell. It is time for the Court to take a good look inside the doctrine of exceptions to the Constitution.

Watching the Watchers: Former NSA Employee on Flock Cameras: “Real and Palpable Damage to Citizenry” – 8/5/2025
On April 8, the Board of Scarsdale Village, New York, approved a $2.1 million contract with Flock Safety to bring mass camera surveillance to its community. Residents of Scarsdale, the wealthiest suburb in the United States, were disturbed by the apparent contravention of the Board’s rules: it gave no advance notice and allowed no public comment before voting 6-1 to approve the contract. Many are deeply troubled by the implications of Flock Safety camera surveillance, which enables AI-powered license plate readers to follow residents in their daily travels. Jessica Burbank followed this story on Drop Site News, writing: “Flock is a $7.5 billion surveillance technology company, operating in over 5,000 communities across 49 states. Flock has a proven playbook to expand through securing local government contracts, often behind closed doors.” Burbank reports on the public comments of Scarsdale resident Charles Seife, a former employee of the National Security Agency, who said: “The system that Scarsdale wishes to implement is extremely dangerous … The records are kept for several weeks. At the very least, they allow retroactive surveillance. These systems are immensely popular with politicians and law enforcement, even though they do real and palpable damage to the citizenry … “We're creating that database so that we can always do that for anyone, that you're constantly tracking people's movements. You have that system in place so that you don't need to articulate the suspicion before you're gathering that on someone, before you're actually trying to tag someone with wrongdoing. When you have that system there, all someone has to do is say, I don't like that person.
And then you've got that surveillance already established.” Seife later told Drop Site News: “Freedoms don't come back and privacy doesn't come back, and we are taking these irreversible steps so blithely for no real reason.” Another Scarsdale resident, Josh Frankel, said: “The way I see it, it is not a matter of if this data will be abused and misused, only a matter of when and by whom.”

“The real danger is the gradual erosion of individual liberties through the automation, integration, and interconnection of many small, separate record-keeping systems, each of which alone may seem innocuous, even benevolent, and wholly justifiable.” – U.S. Privacy Protection Study Commission, 1977

To try to find people in the United States illegally, the Department of Homeland Security (DHS) directed the Centers for Medicare and Medicaid Services (CMS) to comply with its request to sift through the health data of 79 million Medicaid recipients. This includes giving DHS access to sensitive personal information, including addresses, birthdates, ethnicity, IP addresses, banking data, immigration status, and Social Security numbers. Twenty states have sued in response, arguing that giving DHS access to such personal data violates privacy protections under multiple federal laws, including the Administrative Procedure Act, the Social Security Act, HIPAA, and, of course, the Privacy Act. Several civil liberties groups, including the Electronic Frontier Foundation, EPIC, and Protect Democracy Project, have filed an amicus brief in that case.
Our own take is that this is the weaponization of data, a characterization articulated by many others. The thing about weapons, of course, is that they can be pointed in any direction. Today it’s illegal aliens. Next time, it could just as easily be wealthy taxpayers, political dissenters, or those who engage in unpopular speech. Orange County official Jose Serrano told the L.A. Times that such targeting is dangerous because “the information is being used against people.” In other words, it could be used for any reason by this or a future administration against you.

Since 2008, it’s been illegal to discriminate against someone based on their genetic information, thanks to the Genetic Information Nondiscrimination Act (GINA). But when the personal genomics company 23andMe filed for bankruptcy in March, sharp new issues in genetic privacy emerged. The 23andMe debacle raised the specter that the genetic data of millions of people would be sold to the highest bidder – with almost no oversight from a privacy-rights standpoint. The company did the right thing, asking a judge to approve the sale of its data to a nonprofit research institute that will only access the profiles of customers who explicitly allow their data to be used in de-identified form for medical research. Still, as Colin Loyd wrote in the Minnesota Journal of Law, Science & Technology, 23andMe’s bankruptcy and potential sale “reveals deep flaws in the current regulatory system governing genetic data privacy.” There are astonishingly few constraints on the sale and use of genetic material, other than in a handful of state statutes. That makes Rep. Ben Cline’s (R-VA) “Don’t Sell My DNA Act” a big step toward protecting privacy. The bill, co-sponsored in the House by Rep. Zoe Lofgren (D-CA), has a Senate companion introduced by Sen. John Cornyn (R-TX). The bill updates the Bankruptcy Code to:
All of these measures are a direct response to the privacy threats raised by the potential post-bankruptcy sale of 23andMe. They are focused on bankruptcy scenarios. What Americans need next is legislation that builds robust privacy guardrails around genetic information itself, agnostic to specific scenarios. In other words, we need what ASU law school professor Laura Coordes calls “baseline protections.” Stopping this data from being bought, sold, and potentially misused as a result of bankruptcies is good policy, but it essentially lumps our most deeply personal information into the category of furniture, real estate, and other assets. What about situations that don’t involve bankruptcy? For example, should data brokers be able to sell genetic information at any time, without affirmative consent, to anyone, including government agencies and foreign entities? In a very real sense, genetic information represents the future of data privacy, which means the time to enact sweeping legislation is now – whether it’s expanding HIPAA compliance to include genomics companies or reviving legislation such as Sen. Bill Cassidy’s Genomic Data Protection Act. In only a few years’ time, no data will be as valuable as genetic information – irresistible prey that buyers of all kinds would love to sink their algorithms into. In the meantime, if you’ve already sent a DNA sample to 23andMe, as tens of millions have, consider deleting it – here’s how. And before sharing it in the future, ask to see privacy and consent policies in advance. With any luck, you’ll be supported by robust federal statutes, built, we hope, on important foundations like Rep. Cline’s “Don’t Sell My DNA Act.”

Do you get a creepy feeling, a tingling on the back of your neck, when you think of a camera behind or above you illicitly watching your every move? Now, imagine if that camera has legs – and it’s crawling up your pants.
Yes, good old-fashioned German ingenuity has resulted in the Kakerlake – the humble cockroach – being transformed into a robust surveillance platform. These roaches, the actual insects, are fitted with tiny AI backpacks that use neural stimulators under wireless control to steer the little spies to their targets, while they carry miniature cameras and sensors on their tiny, slimy carapaces. In case you are wondering, German scientists at Swarm Biotactics are using the Madagascar hissing variety, one of nature’s largest, and surely among your all-time favorite cockroaches. These cyborg bugs will do more than provide real-time video reconnaissance. They will also detect toxic gas, radiation, and heat. And the technology’s neural interface can coordinate a large number of cyber-roaches to converge on a target as a swarm. Even better! These technologies, reports The Times of India, allow “remote control and autonomous swarming in tight or inaccessible environments.” Germany has invested more than $15 million to perfect this surveillance army, no doubt with Russia’s military threat in mind. Germany’s invention appears to be an advance over U.S. and Chinese military projects to develop tiny, mosquito-like drones to carry out surveillance. But those tiny robots are limited by range and battery life. A roach lives off the land and can scuttle for months to a year. They make the perfect sleeper agents and require no fake beards or forged passports.

We can certainly see the military utility of this project. We also can’t help but note that exotic technologies developed for warfare have a way of migrating – or, perhaps in this case, scuttling – from military to civilian uses. It is perhaps inevitable that similar off-the-shelf AI and sensors will make their way into commercial and law enforcement uses. It’s bad enough to see an ordinary Kakerlake on the floor when you turn on the kitchen light.
It would be even worse if someone recorded you shrieking before you smashed it with your shoe.

“The houses have eyes now.”