Watching the Watchers: Amazon’s Ring Super Bowl Commercial Demonstrates “Terrifying” Surveillance
2/10/2026
Watch Amazon’s Super Bowl ad and tell us what you see: a heartwarming story of a family reunited with a lost dog, or another element in America’s comprehensive surveillance state. As the ad shows, Amazon’s free “Search Party” function connects cameras in a whole neighborhood to look out for a lost dog. Amazon’s AI, trained on tens of thousands of dog videos, can recognize different breeds, fur patterns, shapes and sizes to spot the lost puppy. That is not a bad thing at all.

But many viewers found the ad “terrifying,” not heartwarming, according to Kelly Kazek of al.com. One commenter on X wrote: “Ring just casually outing themselves as literal spyware that can be accessed by anyone on the network. This is insane.” Another wrote: “Amazon owns Ring and they want to use all these devices to make a mesh network for Amazon sidewalk … The American consumer just got a Trojan horse packaged as home security.”

As EFF’s Matthew Guariglia reported last year: “Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also reintroducing a new feature that would allow police to request live-stream access to people’s home security devices … “This is a grave threat to civil liberties in the United States. After all, police have used Ring footage to spy on protestors, and obtained footage without a warrant or consent of the user.”

The Search Party AI function greatly amplifies Ring’s surveillance capability. This default feature of Amazon Ring that can identify Fido can also identify you, where you go, and people you visit. At the very least, Amazon should announce limits on how this technology can be trained to follow Americans in our daily movements.

It seems like such a good idea: You lose your dog Ziggy, and you might – but likely won’t – find him by nailing flyers to telephone poles and making social media posts.
But with a massive national database of dog photos and a search image function powered by AI, you can save the day.

Another technology to find individual dogs comes from “snout recognition,” the canine version of facial recognition. This tech has dubious origins in blacklisted Chinese AI giant Megvii, which has been developing canine facial-recognition technology (for snouts of all shapes) since 2019.

A more common technology links poop to pups through DNA analysis of dog waste. One innovative company, PooPrints, caters to landlords and HOAs desperate to sniff out dog owners who don’t pick up after their pets. No joke: If you want to live at a swanky condominium along the Hudson in New Jersey, for example, you may be required to have your dog’s DNA swabbed and put on file. (If it can happen in Italy, it can happen here.)

But there’s a flip side to these otherwise noble uses of detection/recognition technology – this isn’t really just about our pets. Though well-intentioned, these methodologies can be leveraged as yet another way to bypass our privacy expectations.

At least one published study recounts how canine DNA was used to convict four men of murder. All it took was a crime tip from a caller and some residual dog poop from the scene found on one of the perpetrators’ shoes. All other evidence was inconclusive, but the DNA analysis showed the odds that the sample came from a dog other than the one at the scene were 1 in 1.16 billion.

We’re all for analyzing DNA and snouts to solve such criminal cases, as the Fourth Amendment clearly permits. What’s concerning is the cavalier way in which something as deeply and uniquely ours as DNA – and now that of our pets – can be gathered, stored indefinitely, and misused without permission or legitimate purpose. Just add human and canine DNA to the thousands of other data points already purchased and warrantlessly accessed by federal agencies and stolen by bad actors.
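Match statistics like the 1-in-1.16-billion figure above are typically computed with the forensic “product rule”: assuming the tested genetic loci are independent, the chance that an unrelated animal shares the full profile is the product of the population frequencies of the matching genotype at each locus. A minimal sketch, with invented frequencies rather than the actual case data:

```python
# Sketch of how "1 in N" forensic match statistics are derived under the
# standard product rule: the random-match probability is the product of
# the genotype frequencies at each independently inherited locus.
# The per-locus frequencies below are invented for illustration only.

from math import prod

def random_match_probability(locus_frequencies):
    """Probability that an unrelated animal shares the profile by chance."""
    return prod(locus_frequencies)

# Hypothetical frequencies of the matching genotype at six tested loci.
freqs = [0.05, 0.10, 0.08, 0.12, 0.06, 0.09]

rmp = random_match_probability(freqs)
odds = round(1 / rmp)  # expressed in "1 in N" form; here roughly 1 in 3.9 million
```

Real canine (and human) panels test more loci, which is how the denominator climbs into the billions: each additional locus multiplies the improbability.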
It’s just one more knot in the ever-tightening surveillance net that surrounds us. Remember that the next time you wonder whether to pick up Ziggy’s contribution at the dog park.

One of the unintended consequences of living in the digital age is that everything, sooner or later, becomes quantified as a data point. That now includes – insert “Rated R” warning here – an app user’s masturbation frequency. (Exercising great discipline, we will resist the temptation to make tasteless puns throughout this piece, though they practically write themselves. So, use your imagination.)

Back to the story – addictions of many sorts are as old as humanity. If there’s a silver lining to the otherwise debatable benefits of social media, it may be the proliferation of apps now claiming to offer support for those who seek to overcome their habits. That includes the category of sexual addiction to pornography and masturbation.

404 Media, which originally broke the story, says that an app devoted to helping users defeat their porn addiction is inadvertently sharing related data. This includes how often users look at porn, how they respond, and how it makes them feel when they do. 404 says the story is “a good reminder to think twice before giving any app your personal information.” The data also includes the users’ age.

404 Media’s reporting suggests that many of the affected users described themselves as minors – as many as 100,000 of the 600,000 whose records proved to be accessible. These vulnerabilities were apparently first reported to the app maker by an independent security researcher in September. To date, however, the company has not resolved the issue. In fact, its founder has dismissed the allegations as “a bit of a joke,” suggesting the potential for a data leak was faked. For privacy reasons, 404 isn’t naming the app.

The root cause of this vulnerability is a long-understood flaw in Google Firebase, which is used by developers to build apps.
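404 Media has not published the app’s configuration, but the class of Firebase flaw reporters have repeatedly documented is a database whose security rules were left world-readable, so any unauthenticated client can pull every user’s records. In Firebase’s own security-rules language, the difference between an exposed store and a locked-down one can be a single condition. The fragment below is a generic illustration of that pattern, not the affected app’s actual rules:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    // INSECURE pattern behind many reported leaks:
    // every document readable and writable by anyone.
    // match /{document=**} {
    //   allow read, write: if true;
    // }

    // Locked down: a signed-in user can touch only their own record.
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

Because the insecure variant requires no credentials at all, anyone who discovers the database URL can enumerate its contents, which is why researchers describe the flaw as trivially replicable.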
This flaw is therefore easily replicated by experts. In other words, it’s no joke. The report indicates that for reasons unknown, Google itself hasn’t fixed the issue. But it’s even more curious that all app makers and even app marketplaces – in whose trust users place their data – haven’t done so, either. All of which means that when it comes to data security, an entry made in confidence can amount to global oversharing.

“The data they can get on what motivates you, what actually makes you take an action – that’s so valuable,” says technology journalist Elaine Burke. “This is [about] so much more than what your browsing habits are and what you’re interested in.” She warns that developers are sold on the notion that humans are “mathematical problems that can be solved with the right metric.”

This story points to the larger issue of falsely believing that when it comes to defeating age-old personal issues in the 21st century, it’s as simple as thinking there’s an app for that. That impulse leads many to unknowingly risk their most personal data with the tap of a digital button. The promise is self-control. But the price might be a loss of privacy.

This demolition of personal privacy by datapoint is made worse by the regular practice of a dozen federal agencies – ranging from the FBI to the IRS – of purchasing Americans’ private digital information from data brokers and reviewing it at will. That is all the more reason for Congress to pass a law that imposes a probable-cause warrant requirement before agencies can inspect Americans’ most private information. In the meantime, practice caveat venditor: seller beware – especially when the product is you.

Chatrie v. United States

The Project for Privacy & Surveillance Accountability is asking the U.S. Supreme Court to consider whether the Fourth Amendment allows law enforcement to use geofence warrants to retroactively track the movements of everyone in a defined area.
These so-called “reverse warrants” involve law enforcement’s request for information from technology companies – like Google, Apple, Snapchat, Lyft, or Uber – that allows them to identify potential suspects in a crime.

This case began with a 2019 robbery of $200,000 from a credit union in Midlothian, Virginia. Detectives soon hit a dead end in the search for suspects. So they served Google with a geofence warrant to provide certain cellphone data for everyone who passed through a circumscribed area around the credit union. As a result, people suspected of no crime had their personal information examined by police. Targets included residents of a nursing home, diners and wait staff at a Ruby Tuesday restaurant, and guests who had checked into a Hampton Inn.

The search led to the arrest and guilty plea of one Okello T. Chatrie, who now seeks to exclude this evidence on constitutional grounds. Federal Judge Mary Hannah Lauck noted that because Google logs cellphone users’ location 240 times a day, technology gives police “an almost unlimited pool from which to seek location data” in a broad area in which everyone has “effectively been tailed.”

But the U.S. Court of Appeals for the Fourth Circuit, sitting en banc to review a divided panel decision, held that this geofence warrant did not violate the Fourth Amendment. The U.S. Supreme Court is now set to take up this question. In our brief, we are telling the Court that such dragnet surveillance is fundamentally incompatible with the Fourth Amendment’s core protections.

Geofence Warrants Are “Digital General Warrants”

One of the primary abuses that motivated the Founders to create the Fourth Amendment was the use in colonial times of general warrants – broad search authorizations that allowed the King’s agents to rummage through private lives and property without individualized suspicion. Geofence warrants are their modern equivalent.
Instead of naming a person or place to be searched based on probable cause, geofence warrants authorize the government to sift through massive location databases to identify people who might be worth investigating. PPSA told the court that these warrants invert the constitutional order – everyone becomes a suspect first, and probable cause, if it appears at all, comes afterward.

The Supreme Court’s Carpenter Decision Was Not a Narrow Exception

Lower courts have struggled to apply the Supreme Court’s landmark decision in Carpenter v. United States (2018), which held that people have a reasonable expectation of privacy in long-term cellphone location records, even when those records are held by a third party. In Chatrie, the Fourth Circuit treated Carpenter as a narrow exception limited to long-term tracking of a single suspect. PPSA demonstrates that this take misreads the case entirely.

Carpenter reaffirmed a broader principle: Fourth Amendment protections must preserve the level of privacy that existed at the nation’s founding, even as technology evolves. The fact that data is held by a third party – or that the government demands only a “slice” of a much larger tracking database – does not erase reasonable expectations of privacy. A two-hour window into a comprehensive location history can still reveal intensely private information – where someone worships, seeks medical care, attends political meetings, or simply lives their daily life.

PPSA is telling the Court that the privacy concerns raised by geofence warrants are even more severe than those in Carpenter, because they involve mass surveillance of unknown and unsuspected individuals. This is not targeted policing. It is suspicionless data mining.

Your Privacy Rights Depend on Where You Live

Courts across the country are sharply divided on this issue. The Fourth and Eleventh Circuits have suggested that geofence searches may not even trigger the Fourth Amendment.
By contrast, the Fifth Circuit has correctly recognized that geofence warrants are unconstitutional in nearly all circumstances because they lack particularity and probable cause. That split leaves Americans’ privacy rights dependent on geography – and in the case of Texas, on whether state or federal proceedings are involved. PPSA urges the Supreme Court to step in now, before this powerful surveillance tool becomes permanently normalized.

The Constitution Must Keep Up with Technology

As PPSA warns, geofence warrants are only the beginning. We told the High Court: “Fourth Amendment protections are not categorically lost when a person shares or stores his data with a third party while maintaining reasonable expectations and assurances of privacy. The Court should … prevent a contrary understanding of Carpenter from continuing to erode Americans’ privacy – especially now, as third-party storage becomes more ubiquitous and artificial intelligence becomes powerful enough to piece together intimate information from seemingly innocuous details about a target’s life.”

The data that this practice puts at risk is not limited to location. The government has used other forms of these “reverse search warrants” to extract other private data, such as identifying anyone who has searched for a specific phrase or forcing commercial genealogy companies to allow access to their DNA databases. Advances in artificial intelligence already allow law enforcement to infer locations from photos and videos, even when no geolocation data is attached. Without firm constitutional limits, today’s location dragnet could become tomorrow’s visual surveillance dragnet.

The Fourth Amendment’s precise wording is designed to prevent unchecked surveillance. PPSA calls on the Supreme Court to reaffirm that Americans do not surrender their constitutional rights simply by carrying a cellphone.

Are you having a good day? I certainly am!
When I got to work this morning I could barely contain my excitement at seeing such a full inbox of wonderful things to do! I swear, at times it seems almost criminal to accept pay for doing work I love so much! [Smile in the direction of the workplace surveillance camera.] Anyway, I’d love to join you in the breakroom, but I really can’t wait to get back to my workstation! Toodles!

Artificial intelligence is getting better at reading human emotion. It is used by commercial technology to perform “sentiment analysis,” reading the emotional tone of written communications – a valuable tool for HR departments, advertisers, and customer-engagement consultants.

The next bold step is already at the threshold: AI that can read emotions in our voices, the fleeting micro-expressions on our faces, and our body language. This technology will certainly expand into policing, hiring, and education. Are you acting guilty? Did you hide something in your job interview? Are you bored by the teacher’s lecture?

As biometric corridors become commonplace in U.S. airports, AI is being tested to read facial expressions and body language that could identify potential terrorists – based on the tidy theory that people who plan to blow themselves up at 35,000 feet tend to be nervous. But so are people who are running late for their connection, who just had an argument with a spouse, got fired, or are jet-lagged.

Emine Akar in a blog for the Institute for the Future of Work enumerated the potential pitfalls of emotional surveillance: “Emotions are not simply reflexes. They are complex, contextual, and culturally shaped experiences. A tear can mean grief, joy, manipulation, or even boredom.”

The other risk is that AI, which improves by the day, will read our emotions all too well. Pervasive emotional surveillance may force us to put on a happy face at work, school, and the airport. To frown may be to risk detention, detainment, or delay.
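At its simplest, the “sentiment analysis” described above works by mapping words to polarity scores and aggregating them. Production systems use trained models on far richer signals, but the basic mechanics can be sketched in a few lines; the tiny lexicon below is invented for illustration and is not any vendor’s product:

```python
# Toy lexicon-based sentiment scorer: each word carries a polarity score,
# and the message's "emotional tone" is the sum of those scores.
# The lexicon is a made-up illustration of how such systems work.

LEXICON = {
    "love": 2, "wonderful": 2, "excitement": 1, "good": 1,
    "criminal": -2, "hate": -2, "bored": -1, "nervous": -1,
}

def sentiment_score(text):
    """Sum the polarity of every known word in the text; unknown words score 0."""
    words = [w.strip(".,!?[]").lower() for w in text.split()]
    return sum(LEXICON.get(w, 0) for w in words)

def label(text):
    s = sentiment_score(text)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"
```

A scorer this crude would rate the breakroom monologue above as glowing, which is exactly the worry: workers learn which words the machine rewards.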
We could even risk committing “facecrime,” to name just one of the clever neologisms of George Orwell’s 1984. That novel’s protagonist, Winston Smith, was well acquainted with facecrime. One had to always have an expression of love when watching Big Brother on the telescreen. One had to have an expression of rage when engaging in the mandatory two minutes of hate. Smith knew that the “smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide.”

When we allow machines to read our emotions, we risk giving them power over us. “The danger here is not just that machines fail to understand us,” Akar wrote. “It’s that they may begin to discipline us – nudging our expressions, altering our behavior, shaping our emotional lives in invisible ways.”

This kind of emotional manipulation was well captured in the movie Her, in which a man falls in love with an AI (not hard to do when the voice belongs to Scarlett Johansson). Pope Leo XIV was not being prescient – he was simply being current – when he warned us over the weekend about getting involved with “overly affectionate” chatbots, lest they become “hidden architects of our emotional states.”

We need to be more concerned about the implications of emotionless minds that can read, exploit, and manipulate our emotions. The European Union’s AI Act is one example of how to restrict emotional surveillance at school, work, and other sensitive areas. It is time for Congress, states, and technology leaders to put proper guardrails on emotional surveillance of Americans as well.

The next time you get a letter asking you to join a class-action lawsuit for something that is in fact relevant to you … it’s probably not a coincidence. Epic Systems is the largest vendor of electronic health records (EHR) in the United States.
A few years ago, its engineers noticed that some of its customers were behaving suspiciously. Their internal investigation revealed what they allege are “organized syndicates” that purchased records under false pretenses in order to use the data for non-treatment purposes – mostly to generate client leads for law firms.

It’s all in a new federal lawsuit against Health Gorilla and its customers. This suit was filed by Epic and various healthcare partners, including UMass Memorial, as detailed by Daniel Gilbert in The Washington Post last week (the story is paywalled). Among other things, Epic’s investigation revealed that as many as thirty law firms appeared to have accessed patient records. Though no firms are named in the litigation, Epic says they don’t need to be.

The suit alleges that, as gatekeeper, Health Gorilla was knowingly “in league with its connections’ misuse of health information as a commodity.” Epic also claims that Health Gorilla’s customers went to great lengths to disguise themselves as healthcare providers to hide their true intent. These tactics included adding junk data to patient charts to “give the false impression they are treating patients.” Fictitious websites, shell companies, and the use of sham National Provider Identification numbers are cited as additional evidence of malfeasance in the complaint.

The lawsuit suggests that the schemers operate like a Hydra: “When one fraudulent entity is exposed, the bad actors birth a new one.” If Epic asked one company about unusual patterns in its records requests, submissions would abruptly stop, only to be restarted by another.
As Brittany Trang of STAT News notes, the current lawsuit “raises fresh questions about how to guarantee patient records are only shared with legitimate medical providers.” Industry expert Don Rucker agrees, calling it “a fight over who controls access to clinical data and how those data are governed once they move outside the provider’s EHR.”

Rucker and others point out that the HIPAA Privacy Rule – like most federal statutes on the matter – poorly defines “purpose of use,” leaving room for broad secondary categories that include, among other things, marketing.

The legitimate use of anonymized patient data is beyond dispute, especially when combined with responsible AI practices. Meta-analyses, for example, can lead to scientific breakthroughs, including lifesaving treatments and cures. Anonymized data can improve quality standards and innovations in both practice and research methods. For that to happen, HIPAA needs to be updated to better protect privacy. A good first step would be for Congress to put guardrails on data brokers’ selling of Americans’ personal digital data.

School prepares students for the world of work by instilling discipline, the ability to manage a schedule and prioritize, to solve problems with curiosity and teamwork… and to become accustomed to always being under the watchful eye of the American surveillance state.

Public schools use AI software like Gaggle to scrutinize the emails, online chats, and online searches students make on school equipment. Joe Wilkins of Futurism recounts the ordeal reported by Lesley Mathis, a mother in Tennessee, whose eighth-grade daughter was “arrested, interrogated, strip-searched, and held in jail for a night, over some teasing online.”

What was this student’s offense? Wilkins: “Specifically, the student’s friends had heckled her about her ‘Mexican’ complexion, even though she has a different ancestry.
‘On Thursday we kill all the Mexico’s,’ [sic] the eighth-grader quipped back.”

Was the remark stupid, tasteless, and uncalled for? Yes, yes, and yes. But, as Wilkins writes, “it was clearly a bit of eighth-grade immaturity boiling over, not an actionable threat.” A school counselor would have seen this for what it was. AI did not.

“It made me feel like, is this the America we live in?” Mathis said. “And it was this stupid, stupid technology that is just going through picking up random words and not looking at context.”

But this was in keeping with Tennessee’s zero-tolerance law requiring any threat of mass violence against a school to be reported immediately. For its part, Gaggle’s CEO Jeff Patterson told The Milwaukee Independent that in this case the school did not use Gaggle the way it is intended. “I wish that was treated as a teachable moment, not a law-enforcement moment,” Patterson said.

It is understandable – given how this nation is regularly traumatized by school shootings – why Tennessee has embraced such a standard. But when the filters are set so wide, and the reactions to infractions so extreme, such a system is hard to justify on public-safety grounds, and harder still to square with free speech.

Schools are learning, slowly, to put up guardrails against overreaction, but only after hard bumps into reality. Consider the policy of Philadelphia schools, which in 2010 allowed students to take school laptops home. None of these students were told that when opened, their laptops would snap an image of them at home – often in their bedrooms – every 15 minutes.

One student, 15-year-old Blake Robbins, was accused by his school of being involved with illegal drugs on the basis of what his laptop had recorded. This charge was based on images of Blake lying on his bed, popping fruit-flavored candy into his mouth. Public backlash has since taught schools that watching a student in his bedroom is illicit. But privacy-infringing technology continues.
It is legal for schools to monitor students’ public social-media posts and online activity made on students’ own devices and on their own time. All of which prepares America’s public-school students for the new American workplace.

In many offices, active surveillance of employees extends from the parking lot to the workstation, to the breakroom. Employers not only use technology to scrutinize employees’ search histories. They also use sensors to monitor “desk attendance,” and to follow employees as they move from office to office, on their breaks, and even – in some states – into the bathroom.

Nicole Kobie of ITPro reports that one in five office workers are now being monitored by some kind of activity tracker. She also cites survey findings that tracked employees are 73 percent more likely to distrust their employer, and twice as likely to be job-hunting, as those who are not tracked in their workplace.

In California, Assembly Bill 1331 would have barred monitoring in employee-only areas such as break rooms and locker rooms. The bill, which would have fined employers $500 per violation, recently died in the California State Senate.

There is likely a human cost – and thus a cost in learning at school and productivity at work – when surveillance records a person’s every move and utterance, all initially judged by artificial intelligence that lacks nuance and social intelligence. Such systems are not only Orwellian; they are also destructive of the trust that is needed for effective teamwork, whether between teacher and student, or employer and employee.

Consider Olivia Stober, who in an interview with CBS News compared her old retail job – where her every interaction with customers was monitored and critiqued by her employer – with her new job, where she is a trusted employee and the cameras are aimed only at the establishment’s front door.
Unlike Stober, today’s students are being inured to constant surveillance as they graduate from classrooms to workplaces under the watchful eye of those who claim to only have our best interests at heart.

Watching the Watchers: “Un-Personing People,” or How To Control a Population in Three Easy Steps
1/20/2026
The ACLU’s Jay Stanley just published a critique of the increasing push by states to adopt digital ID systems. It’s his fifth admonition in as many months, and the message is more urgent than ever: the digital ID bandwagon is becoming a rush job that threatens to discard privacy guardrails.

Of the many possible pitfalls, the greatest may be the ability of authorities to “un-person” someone. In the parlance of Orwell and his novel 1984, an “unperson” simply vanishes as every last record of that person’s existence is expunged. Stanley’s version of Orwell hinges on what happens when authorities revoke an ID that exists only in digital form. In his new essay, “How to Give the Government New Power to ‘Un-Person’ Someone, in Three Easy Steps,” Stanley unmasks the underlying features of digital IDs that can be revoked at will.
Stanley recommends that lawmakers impose statutory limits on the revocation of state-issued IDs, along with strong due-process protections. He also recommends adding technical guardrails against abusive revocation. Stanley’s original piece goes into much more detail. We also recommend Government Technology reporter Nikki Davidson’s recent interview with Stanley – it is more than worth ten minutes of your time.

Has there ever been a more Orwellian-sounding program than “Total Information Awareness”? This was the post-9/11 brainchild of the Defense Advanced Research Projects Agency (DARPA), a think tank for the Department of Defense. The idea was simple: collect all data on all Americans, then data-mine that giant pile of information to identify “terrorist patterns.”

The goal of Total Information Awareness was “predictive policing” – applying the same data-modeling techniques credit card companies use to spot fraudsters in order to catch terrorists before they act. The premise was dubious at its core – identifying terrorist patterns involves a far greater order of complexity than spotting someone misusing a credit card number.

Worse, in order for Total Information Awareness to work, the government would need to have access to virtually all information about every American. It would be like stamping out drunk driving – which every year kills four times as many Americans as the terrorist attacks of 9/11 did – by stopping every motorist every few miles to give them a breathalyzer.

Admiral John Poindexter, one of the masterminds of the project, wasn’t kidding when he called Total Information Awareness a “Manhattan Project for counterterrorism.” Sen.
Ron Wyden (D-OR) called it the “biggest surveillance program in the history of the United States.” The ACLU in 2003 called it “the closest thing to a true ‘Big Brother’ program that has ever been seriously contemplated in the United States.”

But nothing was more telling than the slogan of the Information Awareness Office, the Pentagon office that ran the program: “Knowledge is Power.” But power over whom and for what purpose? Total Information Awareness could be used for counterterrorism today, tax compliance tomorrow, and political surveillance the day after that.

Congress was sufficiently alarmed to pull the plug on the Information Awareness Office in 2003. But in 2026, to quote the little girl in Poltergeist II, “they’re back.”

This time, the architects of total surveillance have been smart about branding. An executive order issued in March was titled “Stopping Waste, Fraud, and Abuse By Eliminating Information Silos.” It instructs all agencies and departments to make their information on Americans available to all other agencies.

These silos were there for a reason. They were put there by the Privacy Act of 1974, often described as “an American Bill of Rights on data.” The law’s purpose was to establish a Code of Fair Information Practice to govern the collection, maintenance, use, and dissemination of all personally identifiable information (PII) of Americans.

Despite this law, federal agencies are complying with the executive order, seeking data from each other and from the states (though 20 blue states are suing in federal court to stop data sharing). The Immigration and Customs Enforcement agency (ICE) is now the gleaming tip of a data “ICEberg,” after a federal judge ruled that the Centers for Medicare and Medicaid Services can share the personal Medicaid data of 80 million Americans. Many agree with the administration that Medicaid needs to be reserved for Americans, not illegal aliens.
But no one believes that there is anything close to 80 million illegals in the United States. How might all this PII on Americans be used? How long will this data be kept? How might it be shared with other agencies for very different purposes?

“Every generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it,” George Orwell wrote. To blithely discard the guardrails of the Privacy Act – and to trust that vast amounts of highly personal information won’t one day be abused by the FBI, the IRS, and other agencies – is either cynical or beyond naïve.

PPSA has long warned that allowing federal intelligence and law enforcement agencies to purchase Americans’ personal digital data from data brokers would build a surveillance state. Now the federal government has put in place the most effective tools to activate that surveillance state in America. This is the natural consequence of two technologies purchased by Immigration and Customs Enforcement (ICE).

Whether you believe ICE’s approach to mass deportations is necessary, or an exercise in cruelty, there is no question that what ICE is doing with technology is guaranteed to transform the whole balance between the federal government and its citizenry. It is deploying two forms of warrantless surveillance that can track people to meetings with friends, to their places of work, their homes, and their houses of worship, while also drawing on data gleaned from social media to compile dossiers on Americans’ beliefs and personal associations. In using these technologies, ICE often doesn’t know if the target is an American citizen or someone who is not lawfully in this country.

Joseph Cox of 404 Media, in his most recent blockbuster revelation, details the consequences of two technologies purchased from a company called Penlink.
One such technology is Webloc, which allows ICE to draw a rectangle, circle, or polygon around a portion of a city and pick out smartphones of interest. Cox writes that “they can get more details about that particular phone, and, by extension, its owner by seeing where else it has traveled both locally and across the country. Users can click a route feature which shows the path the device took.”

Webloc’s surveillance relies on exploiting code in ordinary apps on our phones, like games and weather apps, that track our location. The rest comes from data brokers that sell our private information through real-time bidding. In the digital age, we are all standing on the digital auction block.

Another Penlink technology, called Tangles, is a social media monitoring product that can take an image of a person’s face on the street, identify that person, locate that person’s social media feeds, and produce a “sentiment analysis” from that target’s posts. At a glance, the government will have a file on your beliefs.

These new government capabilities should worry conservatives, libertarians, and MAGA supporters, as well as liberals and progressives. The effectiveness of such technologies makes it inevitable that they will spread beyond ICE to the FBI, IRS, and other agencies, as the government works to break down the traditional data silos between agencies. They are sure to be used against Americans by administrations of both parties.

Webloc and Tangles cost only a few million dollars – a rounding error for the federal government. As these capabilities expand and become daily practice, the constitutional balance of government by the consent of the governed – based on the Fourth Amendment’s requirement for a probable cause warrant – will inevitably give way to authoritarian control. Only Congress can stop this.
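Part of what makes tools like Webloc so cheap is that the core query – draw a shape on a map, get back every device seen inside it – is geometrically trivial once a broker has supplied the location records. A minimal sketch of that kind of geofence filter, with invented device records rather than anything from Penlink’s actual system:

```python
# Sketch of a geofence query: given a polygon drawn on a map and a pile of
# (device_id, lat, lon) location records, return the devices seen inside.
# The fence and records below are invented for illustration.

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside polygon (a list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Edge crosses the horizontal line at this latitude?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def devices_in_fence(records, polygon):
    """Unique device IDs with at least one location fix inside the fence."""
    return sorted({dev for dev, lat, lon in records
                   if point_in_polygon(lat, lon, polygon)})

fence = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]  # square drawn on the map
records = [("phone-a", 5.0, 5.0), ("phone-b", 15.0, 5.0), ("phone-c", 9.9, 0.1)]
hits = devices_in_fence(records, fence)
```

The hard part is not the geometry but the data: the commercially purchased location feed that makes every phone in the country queryable this way, with no warrant standing between the analyst and the map.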
As the surveillance debate heats up ahead of the reauthorization of FISA Section 702 in April, Congress must urgently use that debate to pass a bill or an amendment that will restrict the currently unrestricted purchasing of Americans’ data by the government. As an old Kenny Loggins rock song put it, “make no mistake where you are, your back’s to the corner … stand up and fight.” Let Congress know it is not acceptable for federal agencies to buy our private and sensitive data without a warrant. Here’s some good news – when it comes to privacy, California is catching up to privacy leaders like Utah and Montana. Three new data-privacy bills in Sacramento would give California consumers powerful new tools to manage their personal information. The three legislative initiatives would:
These bills would build on a solid base of existing reforms. California has launched a data-broker enforcement strike force. In the 2023 Delete Act, it created a centralized website for consumers to opt out of sales of their data and delete the data already collected by brokers. Another new law requires web browsers to let consumers set one universal privacy control. California, home to the nation’s tech industry, is suddenly a national leader on privacy. As our readers know, the gathering, buying, and selling of personal data is big business. Worse, it takes shockingly few data points to identify us as individuals. In today's information economy, that knowledge is gold – to government agencies, police, marketers, and hackers alike. The Delete Act now shines a spotlight on data brokers and their shadowy privacy practices. And the new enforcement strike force adds muscle, holding brokers – and the businesses that rely on them – accountable for adhering to their privacy policies. CalPrivacy executive director Tom Kemp said: “Data brokers pose unique risks to Californians through the industrial-scale collection and sale of our personal information. The widespread availability of digital dossiers makes it easier for our personal information to be weaponized against us, and even well-meaning data brokers can be victims of data breaches that leave all of us vulnerable.” Under the law, brokers must register with the state and pay an annual fee. That annual registration fee is funding the new Delete Request and Opt-Out Platform (DROP). Starting in August, California residents who use this free service can have their data profiles wiped – and kept that way, with mandatory deletions every 45 days. Next up – the California Opt Me Out Act, which goes into effect in 2027. It will require major browsers to offer users one simple switch – one click to say “no” to data sharing across thousands of websites. 
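For readers curious how such a universal switch works in practice: one widely deployed opt-out preference signal is Global Privacy Control (GPC), which browsers transmit as the HTTP request header `Sec-GPC: 1` (and expose to page scripts as `navigator.globalPrivacyControl`). Here is a minimal illustrative sketch, in Python, of how a website's server might detect the signal; the function name is our own, not part of any law or library:

```python
def honors_opt_out(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control
    opt-out signal, which browsers send as the `Sec-GPC: 1` header.

    (Illustrative helper; `honors_opt_out` is a hypothetical name.)
    """
    # HTTP header names are case-insensitive, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A site receiving the signal should treat it as a do-not-sell /
# do-not-share request from that visitor.
print(honors_opt_out({"Sec-GPC": "1"}))     # True
print(honors_opt_out({"User-Agent": "x"}))  # False
```

The point of a legally mandated signal like this is that the consumer flips one switch, and every site they visit receives the same machine-readable refusal.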
Technically, it’s known as an OOPS, an Opt-Out Preference Signal. It certainly doesn’t sound like a mistake. Here’s hoping – California dreamin’ – that these initiatives take root. Perhaps they will be so well received that our representatives in Washington will be inspired to follow suit by curbing the limitless appetite of federal agencies for Americans’ personal data. The last hope may still be a dream. But if the nation’s most populous state can take such steps, it’s a dream worth having. Michael Moore is a retired public-school teacher living in San Francisco. Nearly every day, as he drives to the store, to his sons’ schools, or to meet friends and family, his movements are watched and recorded at every turn. But he is not being tailed by a private detective or by the police. Moore, like every other driver in San Francisco, is being tracked because he must navigate through the city’s network of almost 500 automated license plate readers (ALPRs). These devices, operated by the San Francisco Police Department (SFPD), constitute a major link in the national surveillance network that the vendor Flock Safety is providing to state and local law enforcement. Moore has had enough. At the end of December, he filed a class action lawsuit in a federal courtroom on his behalf and on behalf of his fellow San Franciscans against the city and its police department over this continuous violation of their Fourth Amendment rights. 
In his suit, Moore states that Flock ALPRs “make it functionally impossible to drive anywhere in the City without having one’s movement tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of one’s movements.” Here are some of the topline revelations from Moore’s lawsuit:
Suspicionless surveillance: Of the over 1 billion license plate scans collected by 82 agencies nationwide in 2019, “99.9 percent of this surveillance data was not actively related to any criminal investigation when it was collected.”
Creates “vehicle fingerprints”: “When Flock Cameras capture an image of a car, Flock’s software uses machine learning to create what Flock calls a ‘Vehicle Fingerprint.’ The ‘fingerprint’ includes the color and make and model of the car and any distinctive features, like an anti-Trump bumper sticker or roof rack. Flock’s software converts each of those details into text and stores them into an organized database.”
Tracks social networks: “Flock provides advanced search and artificial intelligence functions that SFPD officers can use to output a list of locations a car has been captured, create lists of cars that have visited specific locations, and even track cars that are seen together.”
Data stored indefinitely: “The data that Flock Cameras collect belong to the SFPD but Flock retains data on a rolling 30-day basis. Nothing, however, prevents the SFPD or its officers from downloading and saving the data for longer than SFPD’s 365-day retention period.”
Flock doesn’t just see and record – it thinks and analyzes: “ALPR technology is a powerful surveillance tool that is used to invade the privacy of individuals and violate the rights of entire communities. 
ALPR systems collect and store location information about drivers whose vehicles pass through ALPR cameras’ fields of view, which, along with the date and time of capture, can be organized by a database that develops a driver profile revealing sensitive details about where individuals work, live, associate, worship, protest and travel.” Moore’s lawsuit poses a profound constitutional question: Can a city turn every resident into a perpetual suspect simply for driving on public roads? The Fourth Amendment was written to forbid dragnet surveillance untethered to suspicion, warrants, or individualized cause. Yet San Francisco has quietly constructed a system that records nearly every movement of its citizens, not because they are suspected of wrongdoing, but because technology makes it easy. If this practice is allowed to stand, the right to move freely without government monitoring may become a relic – honored in theory, but surrendered in practice to cameras, algorithms, and convenience. If you got a Roomba for Christmas, we have good news and bad news. The good news is that your product will likely continue to be supported despite the company’s recent bankruptcy filing. The bad news: this Massachusetts-based brand may soon be just another piece of Chinese-owned spy tech. Amazon tried to buy iRobot, the maker of Roomba, in 2022, but that deal was ultimately abandoned in 2024 amid antitrust scrutiny from regulators. Now, if a judge approves the pending sale of iRobot to Shenzhen Picea Robotics, Roomba will join numerous brands under the ever-expanding surveillance umbrella that many Chinese products represent. Not that China is the sole problem when it comes to protecting the privacy of American consumer data. The United States has no robust privacy laws apart from a few state initiatives, and the data practices of companies like Amazon are a mixed bag. 
But the Chinese Communist Party doesn't even pretend to care about privacy, instead marketing highly functional (and affordable) electronics capable of gathering all manner of personal information. This ill-fated combination has created a veritable Wild West when it comes to the consumer electronics market. iRobot says Roomba will remain an American brand, a claim that means little when no one is minding the privacy store in the first place. So you can either trust that your data will be treated with care (good luck) or you can try to protect yourself just a bit. According to experts, disconnecting from Wi-Fi and Bluetooth will likely disable any advanced features but will not prevent Roomba models from actually cleaning. “Advanced features” in this context mostly means features delivered through the app, which Roombas can operate without. Disconnecting also severs a data pipeline that goes straight to who-knows-where, replete with maps of your home’s layout and eye-level images of your pets and of you playing on the floor. Remember, any connected device, including a vacuum cleaner, can be (and has been) hacked. Apps are black holes for data and privacy anyway. So just press “Clean” and forget it. In the 1980s, singer Rockwell went to the top of the charts with “Somebody’s Watching Me,” a synth-pop, R&B celebration of unrestrained paranoia. In one verse, he asks: “Can the people on TV see me, or am I just paranoid?” On that point, you can relax. The (truly) odious Jackson Lamb in Slow Horses and the passive-aggressive aliens on Pluribus cannot see you. But if you have a smart TV equipped with a camera for recognizing gesture control, or for making video calls, your TV itself might be watching you – although manufacturers are dropping this feature after being hit with a tsunami of consumer outrage. After all, who is at their best sitting on the couch at 9 o’clock at night? 
The real danger is Automated Content Recognition (ACR) technology, which can capture screenshots of a user’s television display every 500 milliseconds, monitoring your viewing in real time, and transmitting that information back to the company without your knowledge or consent. Your personal information then becomes a commodity on the consumer data market. Texas Attorney General Ken Paxton said in a statement that this technology can put private and sensitive information – from passwords to bank information – at risk. Consumer activist and privacy expert Louis Rossmann explains that if your TV is connecting to home security cameras, if you use your TV as a computer screen for searching the web, or if you send videos and photos through your TV, ACR captures all that information. “The television is, unfortunately, a form of spyware,” says Rossmann. Paxton is now suing Sony, Samsung, and LG, as well as Chinese-based Hisense and TCL Technology Corporation for secretly recording and harvesting consumer data. “Companies, especially those connected to the Chinese Communist Party, have no business illegally recording Americans’ devices inside their own homes,” Attorney General Paxton said. “This conduct is invasive, deceptive, and unlawful. The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries.” Watch Rossmann for detailed descriptions of these companies’ labyrinthine concept of informed consent and the technical ways you can try to sidestep surveillance. As Paxton’s lawsuit matures, we will see if courts will find actual law-breaking here, or just another abuse of consumer trust. Stay tuned. There are few spaces meant to be more private than the bedroom. But that, writes Wired’s Chloe Valentine, may be about to change. 
In a trend that gives a twisted new meaning to the concept of the “Internet of Things,” sex toys are joining the ranks of app-connected devices. As they do, the adult toy industry has found a way to breach one of privacy’s few remaining sanctums. Who knew there was an app for that? But here’s the thing about apps: users see them as a way to interact with devices. Companies, however, view them as something much more valuable – collectors of data that can be monetized. And what better place to collect personal information than the boudoir? As if data privacy wasn’t already teetering on the brink, along comes a new – and deeply invasive – set of variables to track and mine for insights. Think of it this way: If it’s a setting on the device, it’s measurable. And if it’s measurable, it has value to the company that markets it. Behavioral data is especially valuable but was notoriously difficult to obtain until about a decade ago, when the consumer IoT market began to proliferate. Thanks to the rise of connected devices, companies can now acquire behavioral data about their consumers in the most accurate and intimate way possible – by observing them in the act. For those who are comfortable with sex toy companies gathering their behavioral data, that’s their prerogative. But sexual behavior data potentially includes many things: location information, usage frequency, which toy a consumer is using, even which functions and intensity settings they choose. When combined with purchase records and demographic data, this amounts to an expansive – and intensely personal – profile. Moreover, there is no way to truly guarantee anonymity, despite what organizations may claim. Meanwhile, the potential actions of hackers or other bad actors remain an ever-present threat. 
And in the end, consumer data is just as likely as not to end up in the hands of brokers who won’t hesitate to sell it to any interested parties (whether obtained legally or not, the rotten practice of data brokering remains perfectly legal). If you add cameras and Wi-Fi to the mix, then you’ve got another layer of “What could possibly go wrong?” Here one need only recall the sordid tale of the Svakom Siime Eye, an early entrant in the field of IoT adult toys. If you get one of the new generation of adult toys, start by checking permission settings in the product’s app – and on your smartphone more generally. Most smartphones eagerly assist apps in sharing information, so you might be shocked to learn just how much your data gets around. As a reminder, check the app settings for your other connected devices, including: appliances, smart glasses, security cameras, vehicles, doorbells, wearables, children’s toys, small electrics, TVs, thermostats, plugs and switches, lightbulbs, speakers, navigation systems, locks, motion detectors, smoke alarms, air purifiers, humidifiers, blinds, garage door openers, irrigation systems, solar panels, rechargeable batteries, carbon monoxide detectors, projectors, soundbars, gaming consoles, rings, hearing aids, scales, bikes, scooters, conference systems, printers, lighting panels, pet feeders, litter boxes, aquariums, and birdhouses. Plus your toothbrush. And don’t forget your mattress. Feeling safe now?
Have Citizenship, Will (Not Necessarily Be Able To) Travel
Fresh on the heels of the Bill of Rights’ 234th birthday comes a salient reminder of just how difficult it is for those in power to resist abusing their authority, and why the Fourth Amendment in particular is every bit as relevant today as it was in 1791. Wilmer Chavarria is suing the U.S. Department of Homeland Security (DHS) for an incident in Houston in July. According to his lawsuit, U.S. 
Customs and Border Protection (CBP) agents detained him, demanded his passwords, then searched the contents of his devices as he tried to enter the country at George Bush Intercontinental. Actually, make that returning home rather than trying to enter – Wilmer Chavarria is as American as tarta de manzana. He’s a school superintendent in Vermont, where apples are the state fruit and apple pie is literally the state pie (either à la mode or with cheddar). Born in Nicaragua, Chavarria became a citizen of the United States in 2018 after coming here a full decade earlier to do that most American of things – get an education. That day in July, this American citizen was returning home after visiting his mother and family in Nicaragua. CBP separated him from his husband, then interrogated Chavarria for several hours before releasing him without explanation. Along the way, he was informed that he had no Fourth Amendment right to resist. The primary problem with that argument is, of course, that the Fourth Amendment applies to all American citizens. It clearly states that no one living under the authority of the Constitution must endure unreasonable search and seizure, and that a warrant, based on probable cause, must be obtained by authorities whenever one’s personal effects are to be searched. To be clear, these protections do not apply to noncitizens seeking to enter the country. Chavarria was fully covered the moment he finished swearing “so help me God” on the day of his naturalization. Another potential problem with the DHS/CBP argument is a landmark 2014 decision in which the U.S. Supreme Court declared that digital devices like cellphones are covered by the amendment’s original language of “persons, houses, papers, and effects.” But the ruling left the notorious “border exception” intact, which may explain CBP’s inclination to take a constitutional mile with the mere inch the parchment actually gives them. 
With any luck, Chavarria’s case may breathe renewed life into the space that United States v. Smith clawed back from the border exception in 2023. Despite such rulings, border agents seem not only unfazed but also emboldened. According to research by the Pacific Legal Foundation, warrantless searches of electronic devices have quadrupled in the decade since the high court’s original 2014 ruling. When asked about cases like Chavarria’s, CBP demurs. These tactics are “rare” and “highly regulated,” according to the agency’s assistant commissioner Hilton Beckham. She also insisted to the Houston Chronicle that such searches are only used to combat serious crimes. “Lawful travelers,” she says, need not fear. By such logic, Chavarria must have somehow represented a danger to national security. Perhaps New England schoolchildren, gay marriage, and naturalized Nicaraguans are a greater existential threat to the future of the republic than anyone previously realized. Or it could be good old-fashioned political targeting. In April, mere months before his trip, Chavarria refused to sign his state’s request to certify to the U.S. Department of Education that Vermont was not using “illegal DEI practices.” And he did so on the record, noting that his district is the most diverse in the state. The federal request was one that some 19 states, eventually including Vermont, simply refused to comply with. Agree or disagree with that position, it should be a matter of serious concern for people of all political stripes if the government applied a political standard to its warrantless intrusion into an American’s digital devices. It is perhaps no coincidence, then, that before he even boarded his domestic flight back to Vermont that day, Chavarria received an email. 
In it, CBP announced that his longtime Global Entry status had been revoked because he suddenly “did not meet program eligibility requirements.” So it’s come to this: If you’re traveling abroad, consider using burner phones and leaving your personal and work devices at home. VICE recently interviewed privacy expert Jason Bassler about the many ways that surveillance has crept into our daily lives and become more or less normalized. Jason is the co-founder of the Free Thought Project, whose site you might not want to visit if you’re already paranoid about being watched. Among the observations that Jason offered VICE were the following. Think of them as a “State of Our Privacy” report: Smartphones are the well-connected spies in our hand: “Today’s mobile tech goes far beyond anything we saw even five years ago. Our phones constantly ping GPS satellites, Wi-Fi networks, and cell towers to triangulate our location, whether or not you’re using a map app. Apps quietly harvest this data and sell it to data brokers, who in turn sell it to agencies like ICE, the FBI, and even the U.S. military.” If it’s a border, it’s biometric: “TSA is expanding biometric surveillance across nearly all U.S. airports as part of a $5.5 billion modernization push. Airports nationwide will be utilizing facial recognition software, and over 250 airports will be accepting digital ID verification. It’s a similar situation with U.S. Customs and Border Protection. Biometric data collected at borders is often retained indefinitely, and it’s increasingly shared with law enforcement and intelligence agencies, raising concerns about lack of oversight. Border control isn’t just about fences anymore. It’s about fingerprints, facial scans, and AI predictions.” License plate readers are nearly ubiquitous: “They’re designed to capture, analyze, and store vehicle data in real time. 
Think of them as a cop on the corner of your street, taking notes about every car that passes – its color, its make, its year, where it’s going, how often it goes there, how long it stays, and much more. Now, imagine an army of cops on every corner of your city doing that. This is what Flock [Safety brand] cameras are, except they are mounted on poles and traffic lights.” Bassler also recommends the following ways to fight back against what he calls the growing “ecosystem” of surveillance and its normalizing influence:
Finally, Bassler reminds us to push back politically and let our voices be heard. One way to do that is to remind Congress to finish passing the Fourth Amendment Is Not For Sale Act and send it to the president’s desk. For Vice’s interview with Bassler go here. Now more than ever, be careful about choosing collaboration partners. That’s the lesson Strategy Risks and the Human Rights Foundation are drawing in a new report. Their findings are a jaw-dropping wake-up call about China manipulating Western institutions into giving up cutting-edge AI knowledge to serve its dictatorship. Here’s the play-by-play:
It gets worse. U.S. Department of Defense agencies were also involved in the funding process, and their specialized involvement helped drive research into national security questions: optical-phase-shifting tech and biometric monitoring, to cite two examples. The Chinese military is keen on tracking people using drones and facial recognition algorithms. Or more to the point: it is keen on surveilling, detaining, and persecuting more than one million Uyghur Muslims. The report found that ethics watchdogs on the Western side lost their bark. Only two bothered to call out the troubling connection between Western institutions and their Chinese collaborators in the five years since 2020. “A staggering lack of interest” is how the Human Rights Foundation characterized it to Fox News Digital. Still, in defense of what may have simply been an appalling level of naiveté on the part of Western researchers, the report concludes: “Chinese laboratories are rarely listed as direct grant recipients, allowing them to bypass due-diligence checks while benefiting directly through co-authorship and knowledge transfer. Taxpayer resources generate knowledge that flows into institutions embedded in China’s apparatus of repression.” The report then calls for the following guardrails: mandatory due diligence on human rights, full disclosure of international partnerships, and expanded ethics mandates for AI institutes. It’s a lesson the FBI itself still needs to learn. We would add that this revelation cries out for congressional oversight and hearings and, if the facts warrant it, threats to cut off federal funding. Of course, those guardrails will have no effect on China’s institutions, where security and technology firms are required to share their findings with the Chinese Communist Party. 
But at least such reforms will give us a fighting chance to stymie these covert spycraft efforts, as well as to disabuse ourselves of the Faustian illusion that such collaborations were ever, or will ever be, business as usual. Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations. But not anymore. Now, they write: “Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.” Here's what else their snapshot revealed:
When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid. Security consulting firm Koi recently published an exposé about a new online privacy threat, one with the unforgettable name of “ShadyPanda.” The scheme allowed browser extensions to infect 4.3 million Chrome and Edge users. In this case, “infect” means sit there quietly, take control whenever it wants, then pretty much do whatever it pleases, including:
ShadyPanda’s extensions often worked legitimately for years before being activated and turned into full-blown spyware – making it an especially effective tool for keeping tabs on businesses. Some of the extensions were simple wallpaper galleries or productivity tools, and many had been marked as “trusted” or “verified” by the marketplaces that hosted them. One of the key vulnerabilities this research exposed was the whole “trust and verify” approach. Once approved by various marketplaces, extensions were never re-verified. And because most users opt for “auto-updating,” the extensions could continue to build up a large user base and then be activated as spy tools when needed. Koi reports: “Chrome and Edge's trusted update pipeline silently delivered malware to users. No phishing. No social engineering. Just trusted extensions with quiet version bumps that turned productivity tools into surveillance platforms.” And where is all that collected data going? To surveillance-obsessed China, of course. Worried that you might be infected? Check out The Hacker News’ partial list of the culprits. Infosecurity Magazine recommends you also check your browser extensions and remove anything you don’t recognize or no longer use. And turn off auto-updating while you’re at it. It is a dispiriting truth of modern life that we are – and likely always will be – in a footrace against hackers and thieves, whose tools will grow even more dangerous as AI evolves. But we don’t have to be helpless. At least we can take satisfaction in knowing that by embracing best practices, we can at least be a step ahead and leave the ShadyPandas of the world empty-handed. If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say. 
How long such data is kept – and how it might be used when children become adults – is anyone’s guess. Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots). Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year. Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:
And as that last bullet suggests, don’t even think about privacy: “These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” warn the researchers. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable. Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko). Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more. Of course, it’s now a given that every AI toy tested framed itself as one’s buddy or even best friend. The stakes could hardly be higher: For the youngest children, the presence of AI-based toys introduces a massive unknown into a critical window for development. For now at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality. Being a parent or caregiver was already hard enough. Now thanks to generative AI and the mad rush to reinvigorate a market (children’s toys) that had long been stagnant, gift-giving is turning out to be almost as fraught as parenting itself. Sometimes the best defense against privacy violations is as simple as choosing a good password. 
Such was the case in South Korea, where officials recently arrested multiple suspects accused of hacking into private surveillance cameras and capturing footage as pornography for voyeurs. The 120,000 cameras were inherently hackable because they are, after all, internet devices. But users made it all the easier by choosing exceptionally weak passwords. It's uncertain just how explicit the footage was (sourced from homes, Pilates studios, and even a women’s health clinic). Some of it was sold on overseas platforms that appear to cater to sexually exploitative content. Pro tip: “11111” and “12345” are terrible passwords, as are any other repeating or sequential numbers. And this maxim is especially relevant when dealing with devices that are internet-connected. Yet from Zoomers to octogenarians, the password problem remains, as The Register’s Connor Jones reports, as “prevalent and dangerous as ever.” Case in point: the recent news that the password for the ransacked Louvre’s CCTV system was “Louvre.” So clearly the vulnerability of camera systems is a problem that goes beyond South Korea and this particular (ab)use case. In June, security researchers found that they could access tens of thousands of internet-connected cameras worldwide (35 percent of which were in the United States). Vulnerable systems were everywhere in addition to homes: retail sites, construction zones, hotels – you name it. By studying the feeds, researchers noted, bad actors can find a treasure trove of useful information – from poorly lit spots to unguarded doors to times when no one’s around. Somewhere out there is a black market for anything a “security” camera might capture. So think twice about even having Internet-connected cameras (CCTVs that record directly to local devices are a better alternative). If you must be connected, however, then at least up your password game. 
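The weak-password patterns described above are easy to screen for automatically. Below is a minimal illustrative sketch in Python (the function name and thresholds are our own, not from any cited source) that flags the kinds of passwords implicated in the camera hacks: too short, all-repeating characters, or evenly stepped digit sequences like “12345”:

```python
def is_weak_camera_password(pw: str) -> bool:
    """Flag obviously weak passwords: short, repeating, or
    sequential digits. (Illustrative heuristic, not a standard.)"""
    if len(pw) < 8:
        return True                      # "12345", "Louvre"
    if len(set(pw)) == 1:
        return True                      # "11111111"
    if pw.isdigit():
        # Evenly stepped digits, e.g. "123456789" or "24681357"[:8]
        steps = [int(b) - int(a) for a, b in zip(pw, pw[1:])]
        if steps and all(s == steps[0] for s in steps):
            return True
    return False

print(is_weak_camera_password("12345"))       # True
print(is_weak_camera_password("g7#Lp0qZr4"))  # False
```

A long passphrase of mixed words and symbols defeats all three checks; the point is that a camera whose password fails even this crude screen is an open invitation.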
Finally, if you’ve installed connected cameras, try not to forget where they are five years hence on some enchanted evening. When your identity is confirmed by a string of numbers in a computer, are you still yourself if the algorithm determines you (the person) are not you (the digital ID)? One state, Utah, is leading the nation in answering this question with policies that safeguard humans, while Washington, D.C. is heading down the path of reducing humans to algorithms. Consider ACLU’s Jay Stanley, who praised Utah for its “State-Endorsed Digital Identity” (SEDI), the state’s new framework for digital ID systems. In an approach that should be the norm rather than the notable exception, the Beehive State puts privacy first. Utah begins with the conviction that identity “is not something bestowed by the state, but that inherently belongs to the individual; the state merely ‘endorses’ a person’s ID.” In other words, our identities belong to us. We are born with them. We own them. With that realization comes new-found respect for privacy and other forms of personal freedom. This view of identity stands in sharp contrast to the definition Stanley found in the data-driven world of federal law enforcement. With the feds, identity is becoming something only the state can grant, defaulting to incomplete or faulty digital verification of citizenship. To be clear, both Utah’s SEDI platform and the federal approach utilize digital ID systems, but one is a case study in digital due diligence while the other illustrates the dangers of slapdash digital recklessness. The federal system is based on incomplete databases, poorly designed architecture, evolving (meaning, far from perfect) technology, and an utter disregard for the constitutional rights of individuals. Utah’s approach differs from the federal approach in very important ways:
Stanley goes on to quote the Ranking Member of the House Homeland Security Committee, who reports that an app (called Mobile Fortify) used by Immigration and Customs Enforcement (ICE) now constitutes “definitive” determination of a person’s status “and that an ICE officer may ignore evidence of American citizenship – including a birth certificate.” That’s bad enough on its own, of course, but along the way the government now sweeps up Americans’ biometric identifiers en masse. The databases Mobile Fortify accesses contain not only our photographs but enough records to constitute a permanent digital dossier.

Congress did not get to review, much less approve, any of this. The American people never voted on it. In fact, the whole thing leaves us wondering what happened to the Privacy Act, signed into law by President Ford in 1974 and described as “the American Bill of Rights on data.” By declaring that identity is solely digital, determined by stealthy algorithms and policies, and deniable to those whose data is non-existent, incomplete or inaccurate, the federal standard – in sharp contrast to Utah’s – subverts 250 years of traditional, constitutional practice.

Remember: Our founders built the world’s most vibrant democracy on pieces of parchment copied by hand. In any truly free society, identities are personal possessions that help secure individual rights and facilitate voluntary participation in society. Identities bestowed by the state ultimately serve only the state. That we even need to ponder the nature of identity reveals the absurdity of these abuses of our personhood and privacy. Nevertheless, here we are. Without transparent conversations and healthy debate, we face a future in which we are whoever the state says we are, made of malleable 0s and 1s, with nothing grounded in the physical world. It’s a discussion that, as of now, Utah alone seems committed to having.
The Double-Edged Sword Wrapped in Eric Swalwell’s Privacy Lawsuit Against Housing Chief Bill Pulte
12/1/2025
Those who live by surveillance cry by surveillance. We wonder how many times politicians on both sides of the aisle will have to get slammed by the very government spying practices they’ve supported before this lesson sinks in.

Case in point: Rep. Eric Swalwell (D-CA). Last week, he filed a lawsuit against Bill Pulte, President Trump’s director of the Federal Housing Finance Agency, for accessing and leaking private mortgage records in retaliation for political speech. Pulte has issued criminal referrals to the Department of Justice (DOJ) against Swalwell, New York Attorney General Letitia James, Sen. Adam Schiff (D-CA), and Federal Reserve Governor Lisa Cook on the basis of alleged mortgage fraud. A federal judge dismissed the charges against James, while President Trump used the allegation against Cook to fire her from the Federal Reserve Board (she remains in her job while the Supreme Court reviews the case).

Rep. Swalwell’s lawsuit makes an important point: “Pulte’s brazen practice of obtaining confidential mortgage records from Fannie Mae and/or Freddie Mac and then using them as a basis for referring individual homeowners to DOJ for prosecution is unprecedented and unlawful.” We cannot think of any prior use of private mortgage applications to harass political opponents (though at least one of them, James, is arguably guilty of using lawfare herself to harass Donald Trump).

Pulte’s actions appear to be a flagrant violation of the Privacy Act of 1974, which governs how the government can and cannot handle Americans’ private information. The law, as Swalwell notes, “explicitly forbids federal agencies from disclosing – or even transmitting to other agencies – sensitive information about any individual for any purpose not explicitly authorized by law.” Congress passed the Privacy Act to prevent the creation of a federal database that would compile comprehensive dossiers on every American, something we’ve warned is now being attempted.
The law specifically forbids agencies from freely sharing Americans’ confidential data gathered for one purpose (such as IRS tax collection) for another purpose (such as an FBI investigation). Agencies must issue a written request justifying any such information sharing. Pulte is anything but transparent. “I’m not going to explain our sources and methods, where we get tips from, who are whistleblowers,” Pulte told the media. This mindset reflects the corrupting spread of the intelligence-surveillance state playbook. Today, it is the federal housing agency. We shouldn’t be surprised if tomorrow such “sources and methods” thinking trickles down to federal poultry inspections.

Meanwhile, we remain dry-eyed over Rep. Swalwell’s plight. As a member of the House Judiciary Committee, Swalwell argued against – and voted against – the Protect Liberty and End Warrantless Surveillance Act. This bill would have reformed Section 702 of the Foreign Intelligence Surveillance Act by requiring a warrant before the government could access U.S. citizens’ data collected through programs enacted to surveil foreign threats on foreign soil. The Protect Liberty Act would have ended the government practice of using a foreign-intelligence database to conduct “backdoor searches” on Americans … not unlike, say, a regulatory agency pulling a political opponent’s private mortgage application. The principle of mutually assured payback is something to keep in mind when lawmakers again debate the provisions of Section 702 in April.

Once upon a time, in its 2004 IPO filing, Google aspired to “Don’t Be Evil,” imagining itself a company “that does good things for the world.” Dateline, November 2025: Various outlets have reported that Google’s app store now includes a version of its Mobile Identify app for Customs and Border Protection.
This version is tailored to state and local law enforcement officers who are deputized to work with Immigration and Customs Enforcement (ICE), using facial recognition algorithms to scan people. If a match is found in federal databases, ICE officials are notified. And those databases (at least the ones we know of) contain records on more than 270 million people. Odds are you and your loved ones are in those databases.

The fact that the law enforcement officers who use Mobile Identify are deputized to work alongside ICE is beside the point, as is the fact that ICE has its own, presumably more powerful version of the same app, called Mobile Fortify. Of far greater concern is that any government agency possesses this ability. It’s easily shared across jurisdictions, and Google seems to have no qualms about enabling a tool that could be deployed as a weapon to surveil American citizens at will. After all, Google’s leaders could’ve just said “no.” But they didn’t, and now an insidious new public-private partnership is afoot.

Today, it’s Google and ICE and the issue is immigration enforcement, but don’t expect it to stay that way for long. These kinds of surveillance technologies never stay contained, nor do limitations on whom they target. Soon it will be Google and the government – federal, state, county, and local – and the reasons for spying on us could be our religion, political party, ethnicity, affiliation, or – well, you name it. Mobile Identify is just one more reason why Congress must debate how federal agencies are accessing our private information without a warrant. This is something to keep in mind when FISA Section 702, a federal surveillance policy, comes up for reauthorization in April.