The Internet of Things (IoT) strikes again. Most modern vehicles possess a tire pressure monitoring system (TPMS), a legal requirement since 2007. A recent study shows that it is possible to capture the unencrypted radio messages sent by TPMS sensors. Each sensor sends a unique ID number, which makes tracking specific vehicles child’s play for a hacker. Think about this for a moment – the average car or truck is broadcasting four such unique IDs (one per tire), with no need for license plate readers with high-tech cameras and AI software. That, says the IMDEA Networks Institute, “makes TPMS-based tracking cheaper, harder to detect, and more difficult to avoid than camera-based surveillance, and therefore a stronger privacy threat.”

A motivated hacker need only place a series of low-cost receivers near the appropriate parking lots and roads. Within weeks: “These tire sensor signals can be used to follow vehicles and learn their movement patterns. This means a network of inexpensive wireless receivers could quietly monitor the patterns of cars in real-world environments. Such information could reveal daily routines, such as work arrival times or travel habits.”

It gets worse: TPMS signals can even be captured from moving vehicles. Some sensors reveal actual tire pressure values (as opposed to merely “Low”), which could, for example, be used to determine if a vehicle is carrying a heavy payload or to distinguish vehicles by type. Pretty soon we’re in Mission: Impossible territory.

As is so often the case with the IoT, safety was the motivation behind the development of tire pressure monitoring systems in the first place. Because privacy was never a consideration, privacy-by-design protections were missing from the start. The result is a familiar IoT pattern: unencrypted signals and wide-open vulnerabilities becoming the rule rather than the exception. When it comes to privacy issues, safety never seems to stay in its lane.

“Our findings show the need for manufacturers and regulators to improve protection in future vehicle sensor systems,” notes researcher Yago Lizarribar. If nothing changes, yet another safety tool will be perverted into an instrument of general population surveillance. But change does not seem to be an industry priority. As Aaron Pruner of CNET points out, we’ve had sixteen years to address this vulnerability. A study by Rutgers University and the University of South Carolina identified the problem in 2010, a mere three years after TPMS was mandated. Which means that if TPMS sensors were kids, they’d be old enough by now to start driving – and be tracked every mile of the way.
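It is worth spelling out just how little machinery this attack needs. Below is a minimal sketch of the tracking step, assuming hypothetical, already-decoded TPMS readings; in practice an attacker would pair a cheap software-defined radio with an open-source decoder such as rtl_433. Every name, ID, and record here is invented for illustration.

```python
# Minimal sketch: how logged TPMS sensor IDs become a tracking record.
# Assumes hypothetical, already-decoded readings; real capture would use
# a software-defined radio plus a decoder such as rtl_433.
from collections import defaultdict
from datetime import datetime

# (sensor_id, receiver_location, timestamp) - synthetic example data
sightings = [
    ("1A2B3C4D", "office_garage", datetime(2026, 2, 2, 8, 58)),
    ("1A2B3C4D", "gym_parking",   datetime(2026, 2, 2, 17, 40)),
    ("1A2B3C4D", "office_garage", datetime(2026, 2, 3, 9, 1)),
]

# Group every sighting by the broadcast sensor ID.
history = defaultdict(list)
for sensor_id, place, ts in sightings:
    history[sensor_id].append((ts, place))

# Sorting each sensor's sightings yields a movement timeline -
# arrival times, routines, and habitual locations.
for sensor_id, visits in history.items():
    for ts, place in sorted(visits):
        print(f"{sensor_id}  {ts:%a %H:%M}  {place}")
```

Nothing above is exotic: once a persistent ID is broadcast in the clear, tracking reduces to sorting a log.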
The media reported on the drama of the Pentagon’s AI contracts as a horse race: Anthropic tried to limit what the War Department could do with the company’s Claude AI product. The administration subsequently rescinded all government contracts with the company. OpenAI offered its products as the alternative and won the day. But beneath this drama lies a deeper and more dangerous reality: In the absence of meaningful guardrails, the AI tech of any company can be used for surveillance and – if combined with data collected under Section 702 of the Foreign Intelligence Surveillance Act (FISA) – could allow government employees across the federal bureaucracy to run searches on Americans’ private communications. Such AI-powered surveillance could extend far beyond the Department of War’s use cases and even the Justice Department’s FBI investigations.

Consider what government AI-enabled mass surveillance of the domestic population would mean.
The danger of AI surveillance in a government that shares data between agencies should prompt Congress to strengthen Fourth Amendment privacy protections. With such a vast datascape available to the world’s most powerful government – where many existing restrictions have already been weakened – we otherwise risk the irrevocable loss of personal privacy and the rise of a permanent surveillance state. We need to come to terms with the fact that AI tech makes rummaging through our private lives and personal histories easier and faster than anyone could have imagined even a few years ago. Americans’ communications could become permanently accessible to the prying eyes of government agents in almost any agency with a whim (or a political directive) to pursue.

It wasn’t supposed to be this way. AI was supposed to have guardrails. So was Section 702, which Congress enacted to enable the surveillance of foreign threats on foreign soil but which the government has instead used to search the private communications of Americans without a warrant. The Reforming Intelligence and Securing America Act (RISAA) was a noble attempt to rein in the misuse of Section 702 as a domestic spy tool. Its reforms included oversight and restrictions on FBI searches involving people inside the United States. It implemented rules for queries involving high-profile groups or individuals. It established training and accountability measures, while enhancing oversight of the two secret courts FISA created. These were important reforms, but they were weakened by last-minute changes to the bill.

When Section 702 comes up for renewal next month – this time in the context of an AI juggernaut – it may well be our last chance to protect our freedoms while protecting national security.

There is a point early in a marriage when spouses get comfortable and uninhibited around each other in the bedroom and even the bathroom. That’s because there is no third set of eyes in the room… unless one of them just happens to be wearing a pair of smart glasses. We recently covered the perils and pitfalls of Meta adding facial recognition software to its Ray-Ban smartglasses. Now Victor Tangermann of Futurism has uncovered a genuine horror story about private images captured by these glasses, millions of which are already in circulation.

Meta, in order to refine its AI imaging, sends footage from consumers’ glasses to contractors in Kenya and other countries to label it for training. This tedious labeling process is what enables AI to learn to recognize everyday objects – and it means almost anything recorded by Meta glasses is liable to be sent abroad for data annotation. “I saw a video, where a man puts the glasses on the bedside table and leaves the room,” one data annotator told two newspapers in Sweden. “Shortly afterwards his wife comes in and changes her clothes.” Another data annotator said: “In some videos you see someone going to the toilet, or getting undressed.” Tangermann reports that other footage included “imagery of people’s bank cards, users watching porn, or even filming entire ‘sex scenes.’”

Meta customers have no recourse. Data protection lawyer Kleanthi Sardeli told the Swedish press, “Once the material has been fed into the models, the user in practice loses control over how it is used.” Of course, as the Internet of Things weaves together Ring cameras, cloud-based voice-activated AI assistants, baby monitors, and robot vacuums, we are all subject to being surreptitiously recorded at, well, inconvenient moments.
But none of them have the reach into personal privacy that happens when one spouse is wearing a pair of smart glasses and the other announces that the toilet paper holder is empty.

Rep. Jim Jordan, Chairman of the House Judiciary Committee, and Rep. Brian Mast, Chairman of the House Foreign Affairs Committee, are urging the United Kingdom Home Secretary to reveal details of a secret order to Apple that may kill encryption for Americans and Apple customers around the world. The secret order involves Apple’s Advanced Data Protection (ADP), which offers customers end-to-end encryption so strong that even Apple itself does not have the ability to break it. As a result, journalists and their sources, women and their children hiding from stalkers, dissidents around the world, businesses communicating about proprietary products, and people who simply value their privacy all rely on Apple’s ADP to protect their communications.

In February 2025, the UK Home Office – roughly equivalent to the U.S. Department of Homeland Security – issued a Technical Capability Notice (TCN) to Apple demanding access to end-to-end encrypted data stored in Apple’s iCloud. To continue serving Britons with its other products and services, and to protect customers’ privacy, Apple complied with the law by disabling ADP for 35 million iPhone users in the UK. This had the additional unfortunate effect of depriving Americans and people around the world of the ability to communicate privately with UK Apple customers – including with other Americans inside the UK.

The UK’s Gag Order – an American Company Cannot Talk to Its Government

“However, it remains unclear whether this action satisfies the UK’s demands, particularly as the order reportedly extends to data of users outside the UK, including American citizens,” Jordan and Mast wrote in a letter to Home Secretary Shabana Mahmood. Such an order appears to violate the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which authorizes the U.S. to enter into data-sharing agreements with the UK and a few other countries but prohibits orders that require providers to decrypt data. Incredibly, the UK government’s TCN imposes a gag order on Apple that makes it a criminal violation for this American company to petition or even discuss the order with the U.S. Department of Justice.

The “Bare Details” of the TCN Are Not Enough

Since then, a tribunal in the UK has rejected the idea that “the revelation of the bare details of the case would be damaging to the public interest or prejudicial to national security.” Late last year, the Investigatory Powers Commissioner, who advises Prime Minister Keir Starmer, agreed with the tribunal’s ruling, saying that disclosure of some details about the TCN is necessary for “a mature and informed public debate.” Yet no such briefing is in the works, which is why the chairmen are now making a direct request to UK Home Secretary Mahmood to provide a briefing that would spell out the terms of the TCN to the committees by March 11. What’s more, the committees need more than the “bare details” of the TCN to ensure that the actions of the UK government are within the terms of the CLOUD Act. Otherwise, how could Chairmen Jordan and Mast ascertain whether the order weakens “the security, privacy, and constitutional rights of American citizens”?

PPSA applauds the chairmen for taking this stand for the rights of Americans.
The U.S. Can Suspend the CLOUD Act Agreement with the UK

Bob Goodlatte, former Chairman of the House Judiciary Committee and PPSA Senior Policy Advisor, who helped lead the passage of the CLOUD Act in 2018, is pointing to a way out if the UK does not respond to Jordan and Mast. In a letter to Attorney General Pam Bondi on Dec. 12, Goodlatte noted that the CLOUD Act was intended to streamline cross-border cooperation, but “was never intended by Congress to be leveraged by a foreign partner to compel any form of ‘backdoor’ access or other types of decryption assistance.”
The letter from Chairmen Jordan and Mast did not invoke the possibility of taking this strong action. But Home Secretary Mahmood would be wise to realize that this is likely a step the Trump administration and Congress will take if the British government remains resistant to American concerns.

Why do so many Americans object to the expansion of surveillance networks like Flock technology that can track where we drive, pervasive Ring networks that show where we walk, and government purchases of our personal data that reveal information about us more sensitive than a diary? After all, this is for our own good – to protect us. We can trust the government, right?

One reason for alarm among the civil liberties community is that we have seen how these separate surveillance systems can be woven together by AI to create a comprehensive surveillance state. This used to be the stuff of dystopian science fiction. Today, it is a functioning model we can see in real time across the Pacific.

Consider the Fujian Police Academy in China, which at the end of last year released an internal document that shows how AI can detect unrest by weaving together actionable intelligence from sound sensors, cameras, reports from paid community spies called “grid workers,” and other sources. The China Media Project unearthed and analyzed this document (linked here for Mandarin readers), showing how comprehensive surveillance can further the cause of “social governance.”
China Media Project summarizes: “Throughout the past year, institutions across China, both private and state-owned, have proposed variations of the same system: taking big data from China’s extensive surveillance system – including input from street cameras and satellites, noise sensors, social media posts, as well as reports from social services – and feeding it into AI models to aid predictive policing.”

Of course, Washington is not Beijing. We are not going to find ourselves having to memorize the platitudes of our Dear Leader and spout them online in order to enjoy internet and travel privileges. But the technological ambition – to fuse disparate surveillance streams into systems for “predictive policing” – is not uniquely Chinese. This ambition was reflected in the post-9/11 attempt by the Pentagon to create “total information awareness” – an ambition finding new life in the many surveillance elements that PPSA reports on daily.

Unlike the “netizens” of China, we can urge our elected leaders to take us off the path that leads to a surveillance state. Congress has an immediate opportunity to do exactly that. One step off this path would be the passage, this April, of measures to end the purchasing of Americans’ most sensitive and personal data by the FBI, the IRS, the Department of Defense, the Department of Homeland Security, and other federal agencies. The lesson from China is not that America is doomed to follow the same path – but that once surveillance systems integrate, pulling them apart becomes exponentially harder. We will keep you posted as the surveillance debate heats up in Congress.

The Internet of Things (IoT) remains a glass house when it comes to privacy, as evidenced by this recent headline: “MAN ACCIDENTALLY GAINED CONTROL OF 7,000 ROBOT VACUUMS IN 24 COUNTRIES WHEN HE TRIED TO GET CREATIVE.” Sammy Azdoufal just wanted to see if he could control his fancy new China-made DJI Romo vacuum cleaner with his PlayStation 5 controller (because, why not?). With the help of some AI coding tools, he not only succeeded, but soon found himself in charge of every currently connected DJI vacuum around the world, with access to camera feeds, microphones, floorplan maps, and more. Because of the Internet Protocol addresses associated with each device’s connection, he also had the ability to determine their approximate locations.

Now imagine what a burglary syndicate could do with that information. Or, for that matter, Chinese intelligence, which under Chinese law has rights to all the data collected by Chinese companies. The ability to vacuum up the personal information of people around the world is a big lesson in consumer privacy. It also illustrates the Wild West that IoT has become, which Live Media News summed up nicely:

“It seems like the smart-home sector is constantly urging us to embrace the ‘trust us’ design principle. Convenience is always the selling point: let the thermostat anticipate your routine, let the doorbell recognize a face, and let the vacuum clean while you’re away. However, in reality, convenience typically translates to ‘cloud.’ Furthermore, cloud frequently implies that someone, somewhere, created a permissions system that must be flawless every day, forever, across all updates, regions, and hurried sprints. Even for businesses that prioritize security, that’s a high standard. Many don’t.”

Which should give us all pause as we consider whether we really need connected refrigerators, doorbells, coffee makers, vacuum cleaners, sex toys, and more.
Our personal privacy seems a terrible thing to wager in the name of a little more convenience. Azdoufal just happened to do the right thing by reporting a vulnerability that he didn’t have to publicize (and one that he wasn’t deliberately looking for in the first place). In other words, we got lucky this time.

“Great to see you … Bob … How’s … Maggie … and those three wonderful … dogs of yours.” You have to admit, it will be a boon to politicians. Adding facial recognition software to smartglasses will enable them – and anyone at a cocktail party – to dispense with all those tiresome strategies for remembering names and familiar facts about the person in front of them.

According to a 2025 internal company memo obtained by The New York Times, Meta plans to quietly equip its line of smartglasses with facial recognition technology dubbed “Name Tag.” Facial recognition is one of the most potent privacy-destroying technologies in existence. This was an idea that was floated and dropped five years ago for Meta’s social media platforms. Now it is back, this time as a wearable in Meta’s Ray-Ban and Oakley smartglasses.

The strategy behind this policy reversal is breathtakingly cynical. The Meta memo held that the new feature’s debut would go largely unnoticed if it were launched “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” This presumably is a nod to the looming FISA Section 702 debate in April, as well as a torrent of other privacy-destructive technologies, like the unfolding national network of Flock cameras.

So plan ahead. You might be at lunch several years from now with a bunch of business prospects wearing Ray-Bans or Oakleys, finding them unusually quiet from time to time. That’s because they will be reading up on you in real time with the help of Meta’s AI assistant. Meta is weighing identification only of people who are on its platforms, not strangers you pass on the street. But we are skeptical. Even without facial recognition tech installed, the company’s smartglasses can be hacked and made to identify strangers. And let’s not forget that Meta smartglasses already offer livestreaming and the ability to post directly to Instagram. But try to think of the bright side: You’ll never have to introduce yourself again.

ICE has become enough of a household word that, like NASA, it’s no longer necessary to spell out its acronym. ICE’s aggressive enforcement of immigration law, now the nation’s hottest political flashpoint, is dividing Americans like nothing else in recent memory. Regardless of where you stand on ICE and illegal immigration, we should all agree that ICE’s massive expansion into domestic surveillance is a grave concern for anyone who values the Fourth Amendment and privacy.

When a protester recording video on her phone wants to know why a masked agent is taking down her information and he replies – “Because we have a nice little database and now you’re considered a domestic terrorist!” – Sheera Frenkel of The New York Times rightly suggests that we’ve entered uncharted territory. Political dissent is now being treated as domestic intelligence. The masked agent was not kidding. The Department of Homeland Security (DHS) is launching a pressure campaign to get Big Tech to identify persons who post content deemed “critical” of ICE.
Rather than relying on traditional investigative work, the government appears to be leaning on something akin to an abuse of process, filing hundreds – if not thousands – of subpoenas intended to compel tech giants to cough up user data. This data grab of lawful speech is unprecedented. It amounts to using an exceptional legal maneuver – an emergency procedure meant for crimes like child trafficking – to collect constitutionally protected political expression. And let’s be clear about the constitutional claim: The contents of our “friends-only” digital posts are modern “papers and effects,” private possessions the Fourth Amendment was designed to shield from generalized searches.

If tech companies cave (and, as highly regulated companies, they likely will), and ICE plugs the data of protesters into its increasingly Orwellian surveillance architecture, then the genie will already be out of the bottle. Once such a capability is developed, it rarely remains confined to a single mission or a single agency. Surveillance tools migrate. Authorities expand. Bureaucracies replicate what works. These tools – algorithms housed in digital fortresses – will almost certainly be shared with the FBI, IRS, FTC, SEC, and a dozen other agencies eager for their piece of the silicon pie. And they won’t just target Americans who are anti-ICE. Depending on the political winds of the day, databases built to track one form of dissent can just as easily be turned against pro-choicers, pro-lifers, critics of the administration in power, progressives, or MAGA supporters.

This looks less like law enforcement and more like the construction of a permanent political-intelligence system – the start of a security-state apparatus on a scale never before seen, primarily and perversely used to surveil and catalog the political beliefs of Americans. Congress should examine this emerging capability and look to install guardrails when it debates surveillance policy in March and April.

PPSA Tells Eleventh Circuit that AI-Powered License Plate Tracking Violates the Fourth Amendment
2/17/2026
United States v. Slaybaugh

Artificial intelligence has handed government surveillance a superpower the Founders never envisioned – the ability to quietly track millions of Americans, then rewind their movements later without a warrant. In United States v. Slaybaugh, PPSA is urging the U.S. Court of Appeals for the Eleventh Circuit to draw a constitutional line around the warrantless use of automatic license plate reader (ALPR) databases. At stake is more than one defendant’s conviction. The court must decide whether rapidly evolving surveillance tools will stretch the Fourth Amendment beyond recognition for all Americans.

When Public Data Becomes Private Surveillance

Law enforcement offers a simple argument with surface appeal: License plates are visible on public roads, so collecting them invades no one’s privacy. In our brief, PPSA details how that simple argument collapses before the reality of modern surveillance. This case is not about a single camera capturing a passing car. It is about the government’s ability to aggregate billions of scans into a searchable chronicle of a person’s life. ALPR systems collect time-stamped and geolocated images of every passing vehicle and store them indefinitely, allowing officers to reconstruct travel histories “with just the click of a button.” Far from snapping one static image of a license plate, ALPR systems have the power to tail anyone and everyone in a given city or county.

That power transforms fleeting public observations into something fundamentally different – a digital dossier revealing where we sleep, worship, seek medical care, protest, or attend political meetings. The U.S. Supreme Court recognized this danger in Carpenter v. United States (2018), holding that long-term location tracking can trigger Fourth Amendment protections even when a person’s movements occur in public. While Carpenter involved the extraction of a suspect’s geolocation history from cellphone tower records, ALPR surveillance raises the same constitutional concerns – but at a vastly larger scale.

The Myth of a Numerical “Safe Harbor”

One of the most significant errors PPSA identifies in the lower court’s ruling is the idea that surveillance becomes unconstitutional only after it collects a certain number of data points or weeks of tracking. The federal court treated the retrieval of 72 plate “reads” over three weeks as too limited to reveal the whole of one person’s movements. This take misreads Carpenter. The danger lies not in how many times police officers choose to view images, but in the existence of the massive surveillance database itself.

Car “Fingerprints” and “Digital Time Travel”
PPSA told the court that, with such databases, officers can effectively travel back in time and retrace anyone’s movements long before suspicion arises. That retrospective power, PPSA demonstrates, far exceeds the general warrants and other abuses the Fourth Amendment was designed to restrain. In colonial America, the King’s agents lacked the ability to catalog every citizen’s movements. Modern technology has erased that practical limitation. Without constitutional safeguards, PPSA warns, the government can monitor entire populations’ travel histories and associations – whether political, romantic, or religious.

From License Plates to a Surveillance Ecosystem

ALPR systems are only one piece of a rapidly expanding surveillance architecture. PPSA warns that these tools increasingly integrate with other technologies – including AI analytics, neighborhood camera systems, and vast databases of commercial data sources holding personal information. The concern is not simply about license plates. It is about the emergence of an interconnected surveillance ecosystem capable of mapping people’s lives in unprecedented detail.

The Solution Is Already in the Constitution

PPSA’s position is not anti-technology. We acknowledge that modern policing can benefit from advanced tools – so long as they operate within constitutional limits. The solution is straightforward and familiar – requiring law enforcement to obtain a warrant supported by probable cause before querying historical ALPR data. That safeguard preserves investigative power while ensuring judicial oversight of government tracking.

The Future of Privacy

The Eleventh Circuit’s decision may shape how courts treat digital tracking technologies far beyond license plate readers. As geofenced surveillance, AI drones, and integrated camera networks expand, the dangers of these technologies will only become more acute, and the constitutional principles at issue in Slaybaugh will only become more urgent. Slaybaugh may well determine whether, every time we get in our cars, we are freely roaming public streets or caught in a permanent dragnet.
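To make the “digital time travel” point concrete, here is a minimal sketch of the kind of retrospective query such a database makes trivial. The table, columns, and records are hypothetical; this illustrates the mechanic, not any vendor’s actual schema.

```python
# Minimal sketch of the retrospective power at issue: once plate reads are
# pooled in a database, "digital time travel" is a single query.
# Table name, columns, and data are all hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE plate_reads (plate TEXT, camera TEXT, seen_at TEXT)")
db.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?)",
    [
        ("ABC1234", "5th & Main",     "2026-01-03 08:12"),
        ("ABC1234", "clinic_lot",     "2026-01-10 14:05"),
        ("ABC1234", "church_parking", "2026-01-11 09:47"),
    ],
)

# Weeks or months later, an officer reconstructs where one car - and by
# extension its driver - has been, with no warrant required in this scenario.
for camera, seen_at in db.execute(
    "SELECT camera, seen_at FROM plate_reads WHERE plate = ? ORDER BY seen_at",
    ("ABC1234",),
):
    print(seen_at, camera)
```

The query is the whole trick: the surveillance happens when the database is built, long before any officer has a suspect in mind.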
Watching the Watchers: Amazon’s Ring Super Bowl Commercial Demonstrates “Terrifying” Surveillance
2/10/2026

Watch Amazon’s Super Bowl ad and tell us what you see: a heartwarming story of a family reunited with a lost dog, or another element in America’s comprehensive surveillance state. As the ad shows, Amazon’s free “Search Party” function connects cameras in a whole neighborhood to look out for a lost dog. Amazon’s AI, trained on tens of thousands of dog videos, can recognize different breeds, fur patterns, shapes, and sizes to spot the lost puppy. That is not a bad thing at all. But many viewers found the ad “terrifying,” not heartwarming, according to Kelly Kazek of al.com.

One commenter on X wrote: “Ring just casually outing themselves as literal spyware that can be accessed by anyone on the network. This is insane.” Another wrote: “Amazon owns Ring and they want to use all these devices to make a mesh network for Amazon sidewalk … The American consumer just got a Trojan horse packaged as home security.” As EFF’s Matthew Guariglia reported last year: “Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also reintroducing a new feature that would allow police to request live-stream access to people’s home security devices … This is a grave threat to civil liberties in the United States. After all, police have used Ring footage to spy on protestors, and obtained footage without a warrant or consent of the user.”

The Search Party AI function greatly amplifies Ring’s surveillance capability. This default feature of Amazon Ring that can identify Fido can also identify you, where you go, and the people you visit. At the very least, Amazon should announce limits on how this technology can be trained to follow Americans in our daily movements.

Look up. There is a good chance a drone is looking back. From government agencies to insurance companies, drones now routinely patrol American neighborhoods, hovering over backyards and rooftops in search of violations, liabilities, and profit. What was once pitched as a tool for emergencies or remote inspections has quietly become a pervasive system of aerial surveillance of American homes without public consent.

In Virginia, under current law, surveillance drones may conduct close inspections of private property without a warrant in emergency or “exigent” circumstances. These exceptions include searches for a missing child or an elderly person who has wandered off, or tracking a dangerous suspect on the run. Now a bill introduced in Virginia’s lower chamber by Alfonso Lopez, a Democratic member of the House of Delegates, would expand this list of emergency exceptions in which the Fourth Amendment’s requirement for a probable-cause warrant can be swept aside. If this bill passes, the Commonwealth of Virginia will be able to spy on citizens to make sure they follow environmental rules on sediment control and erosion management, as well as regulations regarding water and wetlands. In short, this bill would allow the Virginia Department of Environmental Quality to deploy surveillance drones not for the usual dire exigent circumstances, but just to make sure that property owners are in compliance with that department’s environmental regulations.

Virginia’s proposal shows how easily “emergency” drone powers can be repurposed for routine regulatory enforcement. But government is not the only actor exploiting the skies. As drone surveillance becomes normalized, private companies have eagerly followed – deploying the same technology not to enforce the law, but to grow profits.
Texas provides one example of how the private sector is using drones to impinge on homeowners’ privacy. KUT News in Austin interviewed dozens of homeowners, industry experts, and insurance watchdogs, and reviewed hundreds of pages of complaints and state filings, to document how insurance companies are using aerial drone technology to spy on their customers. KUT reports that poor-quality aerial images of homes often prompt insurance providers to unfairly raise rates or cancel policies. Customers have been told to replace their roofs when in fact their roofs only need a good cleansing rain. As Audrey McGlinchy of KUT writes: “And with the proverbial click of a button, companies can decide if they want to renew a homeowner’s policy.”

How pervasive is commercial surveillance? KUT reports that one aerial-imaging technology firm providing imagery for insurance companies estimates it has “eyes on 99.6 percent of the country’s population.” State laws and courts are not adjusting to this new reality. For example, in 2024 the Michigan Supreme Court punted on the Fourth Amendment implications of a township’s low-flying drone that crossed over a couple’s fence line to search for zoning violations. At the national level, the U.S. Supreme Court has yet to fully define drone-specific privacy rights.

Lawmakers and courts need to catch up to a simple reality – pervasive drone surveillance over homes is no longer hypothetical, rare, or futuristic. It is routine, largely unregulated, and already being used to punish Americans financially and intrude on their privacy. If the Fourth Amendment is to mean anything in this age of mass aerial surveillance, our laws must recognize that what hovers over our roofs and backyards today can be just as invasive as a warrantless step into our homes.

Chatrie v. United States

The Project for Privacy & Surveillance Accountability is asking the U.S. Supreme Court to consider whether the Fourth Amendment allows law enforcement to use geofence warrants to retroactively track the movements of everyone in a defined area. These so-called “reverse warrants” involve law enforcement requesting information from technology companies – like Google, Apple, Snapchat, Lyft, or Uber – that allows them to identify potential suspects in a crime.

This case began with a 2019 robbery of $200,000 from a credit union in Midlothian, Virginia. Detectives soon hit a dead end in their search for suspects. So they served Google with a geofence warrant requiring it to provide certain cellphone data for everyone who passed through a circumscribed area around the credit union. As a result, people suspected of no crime had their personal information examined by police. Targets included residents of a nursing home, diners and wait staff at a Ruby Tuesday restaurant, and guests who had checked into a Hampton Inn. The search led to the arrest and guilty plea of one Okello T. Chatrie, who now seeks to exclude this evidence on constitutional grounds.

Federal Judge Mary Hannah Lauck noted that because Google logs cellphone users’ locations 240 times a day, technology gives police “an almost unlimited pool from which to seek location data” in a broad area in which everyone has “effectively been tailed.” But the U.S. Court of Appeals for the Fourth Circuit, sitting en banc to review a divided panel decision, held that this geofence warrant did not violate the Fourth Amendment. The U.S. Supreme Court is now set to take up this question.
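The mechanics of such a warrant are easy to picture in code. Below is a minimal sketch, with synthetic data and invented device IDs, of how a location database turns a place and a time window into a list of people to investigate; the coordinates, radius, and records are all hypothetical.

```python
# Minimal sketch of the reverse-warrant mechanic: instead of starting from
# a suspect, start from a place and time and pull in every device present.
# All device IDs, coordinates, and records are invented for illustration.
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def km_apart(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (device_id, lat, lon, timestamp) - synthetic location pings
pings = [
    ("device_017", 37.5048, -77.6078, datetime(2019, 5, 20, 16, 50)),  # nursing home
    ("device_102", 37.5051, -77.6081, datetime(2019, 5, 20, 16, 55)),  # restaurant diner
    ("device_433", 37.5902, -77.5211, datetime(2019, 5, 20, 16, 52)),  # miles away
]

FENCE = (37.5050, -77.6080)  # hypothetical center of the drawn geofence
WINDOW = (datetime(2019, 5, 20, 16, 30), datetime(2019, 5, 20, 17, 30))

# Everyone within 150 meters during the window becomes a lead,
# regardless of any connection to the crime.
hits = {
    d for d, lat, lon, ts in pings
    if km_apart(lat, lon, *FENCE) <= 0.15 and WINDOW[0] <= ts <= WINDOW[1]
}
print(hits)  # {'device_017', 'device_102'}
```

Note what the filter does not contain: any notion of suspicion. Proximity in space and time is the only criterion.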
In our brief, we are telling the Court that such dragnet surveillance is fundamentally incompatible with the Fourth Amendment’s core protections.

Geofence Warrants Are “Digital General Warrants”

One of the primary abuses that motivated the Founders to create the Fourth Amendment was the use in colonial times of general warrants – broad search authorizations that allowed the King’s agents to rummage through private lives and property without individualized suspicion. Geofence warrants are their modern equivalent. Instead of naming a person or place to be searched based on probable cause, geofence warrants authorize the government to sift through massive location databases to identify people who might be worth investigating. PPSA told the court that these warrants invert the constitutional order – everyone becomes a suspect first, and probable cause, if it appears at all, comes afterward.

The Supreme Court’s Carpenter Decision Was Not a Narrow Exception

Lower courts have struggled to apply the Supreme Court’s landmark decision in Carpenter v. United States (2018), which held that people have a reasonable expectation of privacy in long-term cellphone location records, even when those records are held by a third party. In Chatrie, the Fourth Circuit treated Carpenter as a narrow exception limited to long-term tracking of a single suspect. PPSA demonstrates that this take misreads the case entirely. Carpenter reaffirmed a broader principle: Fourth Amendment protections must preserve the level of privacy that existed at the nation’s founding, even as technology evolves. The fact that data is held by a third party – or that the government demands only a “slice” of a much larger tracking database – does not erase reasonable expectations of privacy. A two-hour window into a comprehensive location history can still reveal intensely private information – where someone worships, seeks medical care, attends political meetings, or simply lives their daily life. PPSA is telling the Court that the privacy concerns raised by geofence warrants are even more severe than those in Carpenter, because they involve mass surveillance of unknown and unsuspected individuals. This is not targeted policing. It is suspicionless data mining.

Your Privacy Rights Depend on Where You Live

Courts across the country are sharply divided on this issue. The Fourth and Eleventh Circuits have suggested that geofence searches may not even trigger the Fourth Amendment. By contrast, the Fifth Circuit has correctly recognized that geofence warrants are unconstitutional in nearly all circumstances because they lack particularity and probable cause. That split leaves Americans’ privacy rights dependent on geography – and, in the case of Texas, on whether state or federal proceedings are involved. PPSA urges the Supreme Court to step in now, before this powerful surveillance tool becomes permanently normalized.

The Constitution Must Keep Up with Technology

As PPSA warns, geofence warrants are only the beginning. We told the High Court:
“Fourth Amendment protections are not categorically lost when a person shares or stores his data with a third party while maintaining reasonable expectations and assurances of privacy. The Court should … prevent a contrary understanding of Carpenter from continuing to erode Americans’ privacy – especially now, as third-party storage becomes more ubiquitous and artificial intelligence becomes powerful enough to piece together intimate information from seemingly innocuous details about a target’s life.”

The data that this practice puts at risk is not limited to location. The government has used other forms of these “reverse search warrants” to extract other private data, such as identifying anyone who has searched for a specific phrase or forcing commercial genealogy companies to allow access to their DNA databases. Advances in artificial intelligence already allow law enforcement to infer locations from photos and videos, even when no geolocation data is attached. Without firm constitutional limits, today’s location dragnet could become tomorrow’s visual surveillance dragnet. The Fourth Amendment’s precise wording is designed to prevent unchecked surveillance. PPSA calls on the Supreme Court to reaffirm that Americans do not surrender their constitutional rights simply by carrying a cellphone.

School prepares students for the world of work by instilling discipline, teaching them to manage a schedule and prioritize, to solve problems with curiosity and teamwork… and to become accustomed to always being under the watchful eye of the American surveillance state. Public schools use AI software like Gaggle to scrutinize the emails, online chats, and online searches students make on school equipment. Joe Wilkins of Futurism recounts the ordeal reported by Lesley Mathis, a mother in Tennessee, whose eighth-grade daughter was “arrested, interrogated, strip-searched, and held in jail for a night, over some teasing online.” What was this student’s offense? Wilkins: “Specifically, the student’s friends had heckled her about her ‘Mexican’ complexion, even though she has a different ancestry. ‘On Thursday we kill all the Mexico’s,’ [sic] the eighth-grader quipped back.”

Was the remark stupid, tasteless, and uncalled for? Yes, yes, and yes. But, as Wilkins writes, “it was clearly a bit of eighth-grade immaturity boiling over, not an actionable threat.” A school counselor would have seen this for what it was. AI did not. “It made me feel like, is this the America we live in?” Mathis said. “And it was this stupid, stupid technology that is just going through picking up random words and not looking at context.”

But this was in keeping with Tennessee’s zero-tolerance law requiring any threat of mass violence against a school to be reported immediately. For its part, Gaggle’s CEO Jeff Patterson told The Milwaukee Independent that in this case the school did not use Gaggle the way it is intended. “I wish that was treated as a teachable moment, not a law-enforcement moment,” Patterson said. It is understandable – given how this nation is regularly traumatized by school shootings – why Tennessee has embraced such a standard. But when the filters are set so wide and the reactions to infractions so extreme, such a system is hard to justify on public-safety grounds, and it is harder still to square with free speech.
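Mathis’s complaint – software “picking up random words and not looking at context” – describes a classic failure mode of naive content filters. The sketch below is not Gaggle’s actual algorithm, which is proprietary; it is a hypothetical illustration of why bare keyword matching cannot tell a joke from a threat.

```python
# Context-blind keyword flagging, sketched. This is NOT Gaggle's algorithm;
# it is an invented illustration of why bare word-matching misfires.
THREAT_WORDS = {"kill", "shoot", "bomb"}  # hypothetical watchlist

def flags(message: str) -> set[str]:
    """Return watchlist words found in a message, ignoring all context."""
    return {w for w in message.lower().split() if w.strip(".,!'") in THREAT_WORDS}

# A sarcastic comeback and an everyday idiom trip the same wire as a threat:
for msg in [
    "On Thursday we kill all the Mexico's",    # the eighth-grader's quip
    "this math homework is going to kill me",  # ordinary hyperbole
]:
    print(flags(msg) or "clean", "->", msg)
```

A human reader resolves both messages instantly; a bag-of-words filter, by construction, cannot.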
Schools are learning, slowly, to put up guardrails against overreaction, but only after hard bumps into reality. Consider the policy of Philadelphia schools, which in 2010 allowed students to take school laptops home. None of these students were told that when opened, their laptops would snap an image of them at home – often in their bedrooms – every 15 minutes. One student, 15-year-old Blake Robbins, was accused by his school of being involved with illegal drugs on the basis of what his laptop had recorded. This charge was based on images of Blake lying on his bed, popping fruit-flavored candy into his mouth.

Public backlash has since taught schools that watching a student in his bedroom is out of bounds. But privacy-infringing technology continues. It is legal for schools to monitor students’ public social-media posts and online activity on students’ own devices and on their own time.

All of which prepares America’s public-school students for the new American workplace. In many offices, active surveillance of employees extends from the parking lot to the workstation to the breakroom. Employers not only use technology to scrutinize employees’ search histories. They also use sensors to monitor “desk attendance” and to follow employees as they move from office to office, on their breaks, and even – in some states – into the bathroom. Nicole Kobie of ITPro reports that one in five office workers is now being monitored by some kind of activity tracker. She also cites surveys finding that tracked employees are 73 percent more likely to distrust their employer, and twice as likely to be job-hunting, as those who are not tracked in their workplace.

In California, Assembly Bill 1331 would have barred monitoring in employee-only areas such as break rooms and locker rooms. The bill, which would have fined employers $500 per violation, recently died in the California State Senate.

There is likely a human cost – and thus a cost in learning at school and productivity at work – when surveillance records a person’s every move and utterance, all of it initially judged by artificial intelligence that lacks nuance and social intelligence. Such systems are not only Orwellian; they are also destructive of the trust that is needed for effective teamwork, whether between teacher and student or employer and employee. Consider the story of Olivia Stober, who in an interview with CBS News compared her old retail job – where her every interaction with customers was monitored and critiqued by her employer – with her new job, where she is a trusted employee and the cameras are aimed only at the establishment’s front door. Unlike Stober, today’s students are being inured to constant surveillance as they graduate from classrooms to workplaces under the watchful eye of those who claim to have only our best interests at heart.

Has there ever been a more Orwellian-sounding program than “Total Information Awareness”? This was the post-9/11 brainchild of the Defense Advanced Research Projects Agency (DARPA), the research arm of the Department of Defense. The idea was simple: collect all data on all Americans, then data-mine that giant pile of information to identify “terrorist patterns.” The goal of Total Information Awareness was “predictive policing” – applying the same data-modeling techniques credit card companies use to spot fraudsters in order to catch terrorists before they act. The premise was dubious at its core – identifying terrorist patterns involves a far greater order of complexity than spotting someone misusing a credit card number. Worse, in order for Total Information Awareness to work, the government would need access to virtually all information about every American.
It would be like stamping out drunk driving – which every year kills four times as many Americans as the terrorist attacks of 9/11 did – by stopping every motorist every few miles for a breathalyzer test. Admiral John Poindexter, one of the masterminds of the project, wasn’t kidding when he called Total Information Awareness a “Manhattan Project for counterterrorism.” Sen. Ron Wyden (D-OR) called it the “biggest surveillance program in the history of the United States.” The ACLU in 2003 called it “the closest thing to a true ‘Big Brother’ program that has ever been seriously contemplated in the United States.” But nothing was more telling than the slogan of the Information Awareness Office, the Pentagon office that ran the program: “Knowledge is Power.” But power over whom, and for what purpose? Total Information Awareness could be used for counterterrorism today, tax compliance tomorrow, and political surveillance the day after that. Congress was sufficiently alarmed to pull the plug on the Information Awareness Office in 2003.

But in 2026, to quote the little girl in Poltergeist II, “they’re back.” This time, the architects of total surveillance have been smart about branding. An executive order issued in March was titled “Stopping Waste, Fraud, and Abuse By Eliminating Information Silos.” It instructs all agencies and departments to make their information on Americans available to all other agencies.

These silos were there for a reason. They were put there by the Privacy Act of 1974, often described as “an American Bill of Rights on data.” The law’s purpose was to establish a Code of Fair Information Practice to govern the collection, maintenance, use, and dissemination of personally identifiable information (PII) on Americans. Despite this law, federal agencies are complying with the executive order, seeking data from each other and from the states (though 20 blue states are suing in federal court to stop the data sharing). The Immigration and Customs Enforcement agency (ICE) is now the gleaming tip of a data “ICEberg,” after a federal judge ruled that the Centers for Medicare and Medicaid Services can share the personal Medicaid data of 80 million Americans. Many agree with the administration that Medicaid needs to be reserved for Americans, not illegal aliens. But no one believes that there is anything close to 80 million illegal aliens in the United States.

How might all this PII on Americans be used? How long will this data be kept? How might it be shared with other agencies for very different purposes? “Every generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it,” George Orwell wrote. To blithely discard the guardrails of the Privacy Act – and to trust that vast amounts of highly personal information won’t one day be abused by the FBI, the IRS, and other agencies – is either cynical or beyond naïve.

PPSA has long warned that allowing federal intelligence and law enforcement agencies to purchase Americans’ personal digital data from data brokers would build a surveillance state. Now the federal government has put in place the most effective tools to activate that surveillance state in America. This is the natural consequence of two technologies purchased by Immigration and Customs Enforcement (ICE).
Whether you believe ICE’s approach to mass deportations is necessary or an exercise in cruelty, there is no question that what ICE is doing with technology is guaranteed to transform the whole balance between the federal government and its citizenry. It is deploying two forms of warrantless surveillance that can track people to meetings with friends, to their places of work, their homes, and their houses of worship, while also drawing on data gleaned from social media to compile dossiers on Americans’ beliefs and personal associations. In using these technologies, ICE often doesn’t know if the target is an American citizen or someone who is not lawfully in this country.

Joseph Cox of 404 Media, in his most recent blockbuster revelation, details the consequences of two technologies purchased from a company called Penlink. One such technology is Webloc, which allows ICE to draw a rectangle, circle, or polygon around a portion of a city and pick out smartphones of interest. Cox writes that “they can get more details about that particular phone, and, by extension, its owner by seeing where else it has traveled both locally and across the country. Users can click a route feature which shows the path the device took.” Webloc’s surveillance relies on exploiting code in ordinary apps on our phones, like games and weather apps, that track our location. The rest comes from data brokers that sell our private information through real-time bidding. In the digital age, we are all standing on the digital auction block.

Another Penlink technology, called Tangles, is a social media monitoring product that can take an image of a person’s face on the street, identify that person, locate that person’s social media feeds, and produce a “sentiment analysis” from that target’s posts. At a glance, the government will have a file on your beliefs.

These new government capabilities should worry conservatives, libertarians, and MAGA supporters, as well as liberals and progressives. The effectiveness of such technologies makes it inevitable that they will spread beyond ICE to the FBI, IRS, and other agencies, as the government works to break down the traditional data silos between agencies. They are sure to be used against Americans by administrations of both parties. Webloc and Tangles cost only a few million dollars – a rounding error for the federal government. As these capabilities expand and become daily practice, the constitutional balance of government by the consent of the governed – based on the Fourth Amendment’s requirement for a probable cause warrant – will inevitably give way to authoritarian control.

Only Congress can stop this. As the surveillance debate heats up ahead of the reauthorization of FISA Section 702 in April, Congress must urgently use that debate to pass a bill or an amendment restricting the currently unrestricted purchasing of Americans’ data by the government. As an old Kenny Loggins rock song put it, “make no mistake where you are, your back’s to the corner … stand up and fight.” Let Congress know it is not acceptable for federal agencies to buy our private and sensitive data without a warrant.
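To see how prosaic the underlying mechanics are, here is a minimal sketch of the “draw a shape, pick out phones” workflow. This is not Penlink’s code; the device IDs, coordinates, and pings are invented, and a real product would run against billions of broker-supplied records rather than a Python list.

```python
# Sketch of the geometry behind "draw a polygon, pick out phones" tools.
# Not Penlink's code: all IDs, coordinates, and pings are hypothetical.
from datetime import datetime

def inside(poly, x, y):
    """Standard ray-casting point-in-polygon test."""
    hit, j = False, len(poly) - 1
    for i in range(len(poly)):
        (xi, yi), (xj, yj) = poly[i], poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            hit = not hit
        j = i
    return hit

# A few city blocks, drawn as a polygon of (lon, lat) corners - hypothetical
AREA = [(-122.42, 37.77), (-122.40, 37.77), (-122.40, 37.79), (-122.42, 37.79)]

pings = [  # (advertising device ID, lon, lat, timestamp) - synthetic
    ("adid-9f3", -122.41, 37.78, datetime(2026, 1, 5, 12, 0)),
    ("adid-9f3", -122.41, 37.78, datetime(2026, 1, 5, 18, 3)),
    ("adid-777", -122.30, 37.80, datetime(2026, 1, 5, 12, 1)),  # outside
]

# Select devices seen inside the drawn area, then sort one device's pings
# by time - the essence of a "route" feature.
selected = {d for d, x, y, _ in pings if inside(AREA, x, y)}
route = sorted((ts, (x, y)) for d, x, y, ts in pings if d == "adid-9f3")
print(selected, route[0][0])
```

The hard part is not the geometry; it is the commercially purchased firehose of location pings the geometry is applied to.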
Michael Moore is a retired public-school teacher living in San Francisco. Nearly every day, as he drives to the store, to his sons’ schools, or to meet friends and family, his movements are watched and recorded at every turn. But he is not being tailed by a private detective or by the police. Moore, like every other driver in San Francisco, is being tracked because he must navigate the city’s network of almost 500 automated license plate readers (ALPRs). These devices, operated by the San Francisco Police Department (SFPD), constitute a major link in the national surveillance network that the vendor Flock Safety is providing to state and local law enforcement.

Moore has had enough. At the end of December, he filed a class action lawsuit in federal court, on behalf of himself and his fellow San Franciscans, against the city and its police department over this continuous violation of their Fourth Amendment rights. In his suit, Moore states that Flock ALPRs “make it functionally impossible to drive anywhere in the City without having one’s movement tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of one’s movements.”

Here are some of the topline revelations from Moore’s lawsuit:

Suspicionless surveillance: Of the over 1 billion license plate scans collected by 82 agencies nationwide in 2019, “99.9 percent of this surveillance data was not actively related to any criminal investigation when it was collected.”

Creates “vehicle fingerprints”: “When Flock Cameras capture an image of a car, Flock’s software uses machine learning to create what Flock calls a ‘Vehicle Fingerprint.’ The ‘fingerprint’ includes the color and make and model of the car and any distinctive features, like an anti-Trump bumper sticker or roof rack. Flock’s software converts each of those details into text and stores them into an organized database.”

Tracks social networks: “Flock provides advanced search and artificial intelligence functions that SFPD officers can use to output a list of locations a car has been captured, create lists of cars that have visited specific locations, and even track cars that are seen together.”

Data stored indefinitely: “The data that Flock Cameras collect belong to the SFPD but Flock retains data on a rolling 30-day basis. Nothing, however, prevents the SFPD or its officers from downloading and saving the data for longer than SFPD’s 365-day retention period.”

Flock doesn’t just see and record – it thinks and analyzes: “ALPR technology is a powerful surveillance tool that is used to invade the privacy of individuals and violate the rights of entire communities. ALPR systems collect and store location data about drivers whose vehicles pass through ALPR cameras’ fields of view, which, along with the date and time of capture, can be organized by a database that develops a driver profile revealing sensitive details about where individuals work, live, associate, worship, protest and travel.”

Moore’s lawsuit poses a profound constitutional question: Can a city turn every resident into a perpetual suspect simply for driving on public roads? The Fourth Amendment was written to forbid dragnet surveillance untethered to suspicion, warrants, or individualized cause. Yet San Francisco has quietly constructed a system that records nearly every movement of its citizens, not because they are suspected of wrongdoing, but because technology makes it easy. If this practice is allowed to stand, the right to move freely without government monitoring may become a relic – honored in theory, but surrendered in practice to cameras, algorithms, and convenience.

If you got a Roomba for Christmas, we have good news and bad news.
The good news is that your product will likely continue to be supported despite the company’s recent bankruptcy filing. The bad news: this Massachusetts-based brand may soon be just another piece of Chinese-owned spy tech. Amazon tried to buy iRobot, the maker of Roomba, in 2022, but antitrust objections from regulators ultimately killed the deal. Now, if a judge approves the pending sale of iRobot to Shenzhen Picea Robotics, Roomba will join the numerous brands under the ever-expanding surveillance umbrella that many Chinese products represent.

Not that China is the sole problem when it comes to protecting the privacy of American consumer data. The United States has no robust privacy laws apart from a few state initiatives, and the data practices of companies like Amazon are a mixed bag. But the Chinese Communist Party doesn’t even pretend to care about privacy, instead marketing highly functional (and affordable) electronics capable of gathering all manner of personal information. This ill-fated combination has created a veritable Wild West in the consumer electronics market. iRobot says Roomba will remain an American brand, a claim that means little when no one is minding the privacy store in the first place.

So you can either trust that your data will be treated with care (good luck) or you can try to protect yourself just a bit. According to experts, disconnecting from Wi-Fi and Bluetooth will disable advanced features but will not prevent Roomba models from actually cleaning. “Advanced features” in this context mostly means the companion app, which Roombas can operate without. Disconnecting also severs a data pipeline that goes straight to who-knows-where, replete with maps of your home’s layout and eye-level images of your pets and of you playing on the floor. Remember, any connected device, including a vacuum cleaner, can be (and has been) hacked. Apps are black holes for data and privacy anyway. So just press “Clean” and forget it.

There are few spaces meant to be more private than the bedroom. But that, writes Wired’s Chloe Valentine, may be about to change. In a trend that gives a twisted new meaning to the concept of the “Internet of Things,” sex toys are joining the ranks of app-connected devices. As they do, the adult toy industry has found a way to breach one of privacy’s few remaining sanctums. Who knew there was an app for that?

But here’s the thing about apps: users see them as a way to interact with devices. Companies, however, view them as something much more valuable – collectors of data that can be monetized. And what better place to collect personal information than the boudoir? As if data privacy wasn’t already teetering on the brink, along comes a new – and deeply invasive – set of variables to track and mine for insights. Think of it this way: If it’s a setting on the device, it’s measurable. And if it’s measurable, it has value to the company that markets it. Behavioral data is especially valuable, but it was notoriously difficult to obtain until about a decade ago, when the consumer IoT market began to proliferate. Thanks to the rise of connected devices, companies can now acquire behavioral data about their consumers in the most accurate and intimate way possible – by observing them in the act. For those who are comfortable with sex toy companies gathering their behavioral data, that’s their prerogative.
But sexual behavior data potentially includes many things: location information, usage frequency, which toy a consumer is using, even which functions and intensity settings they choose. Combined with purchase records and demographic data, this amounts to an expansive – and intensely personal – profile. Moreover, there is no way to truly guarantee anonymity, despite what companies may claim. Meanwhile, hackers and other bad actors remain an ever-present threat. And in the end, consumer data is as likely as not to end up in the hands of brokers who won’t hesitate to sell it to any interested party (however the data was obtained, the rotten practice of data brokering remains perfectly legal). Add cameras and Wi-Fi to the mix, and you’ve got another layer of “What could possibly go wrong?” Here one need only recall the sordid tale of the Svakom Siime Eye, an early entrant in the field of IoT adult toys.

If you get one of the new generation of adult toys, start by checking the permission settings in the product’s app – and on your smartphone more generally. Most smartphones eagerly assist apps in sharing information, so you might be shocked to learn just how much your data gets around (one way to audit those permission grants is sketched after this story). As a reminder, check the app settings for your other connected devices, including: appliances, smart glasses, security cameras, vehicles, doorbells, wearables, children’s toys, small electrics, TVs, thermostats, plugs and switches, lightbulbs, speakers, navigation systems, locks, motion detectors, smoke alarms, air purifiers, humidifiers, blinds, garage door openers, irrigation systems, solar panels, rechargeable batteries, carbon monoxide detectors, projectors, soundbars, gaming consoles, rings, hearing aids, scales, bikes, scooters, conference systems, printers, lighting panels, pet feeders, litter boxes, aquariums, and birdhouses. Plus your toothbrush. And don’t forget your mattress. Feeling safe now?
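On Android, one concrete way to audit an app is to ask the system’s package manager which permissions it has actually been granted. Below is a minimal sketch assuming a phone with USB debugging enabled and the adb tool installed; the package name is a hypothetical placeholder, and the “sensitive” keyword list is our own shorthand, not an official taxonomy.

```python
import subprocess

# Keywords we treat as sensitive for this sketch (our own shorthand).
SENSITIVE = ("LOCATION", "CAMERA", "RECORD_AUDIO", "CONTACTS", "BLUETOOTH")

def granted_permissions(package: str) -> list[str]:
    """Return the permissions dumpsys reports as granted for one package."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    perms: list[str] = []
    for line in out.splitlines():
        line = line.strip()
        # dumpsys prints lines like "android.permission.CAMERA: granted=true"
        if line.startswith("android.permission.") and "granted=true" in line:
            name = line.split(":")[0]
            if name not in perms:
                perms.append(name)
    return perms

if __name__ == "__main__":
    pkg = "com.example.connected_toy"  # hypothetical package name
    for p in granted_permissions(pkg):
        flag = "  <-- worth reviewing" if any(s in p for s in SENSITIVE) else ""
        print(p + flag)
```

iPhone users have no comparable command-line hook, but Settings ▸ Privacy & Security offers the same per-permission view of which apps can reach your location, camera, and microphone.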
VICE recently interviewed privacy expert Jason Bassler about the many ways that surveillance has crept into our daily lives and become more or less normalized. Bassler is the co-founder of the Free Thought Project, whose site you might not want to visit if you’re already paranoid about being watched. Among the observations Bassler offered VICE were the following. Think of them as a “State of Our Privacy” report:

- Smartphones are the well-connected spies in our hands: “Today’s mobile tech goes far beyond anything we saw even five years ago. Our phones constantly ping GPS satellites, Wi-Fi networks, and cell towers to triangulate our location, whether or not you’re using a map app. Apps quietly harvest this data and sell it to data brokers, who in turn sell it to agencies like ICE, the FBI, and even the U.S. military.”
- If it’s a border, it’s biometric: “TSA is expanding biometric surveillance across nearly all U.S. airports as part of a $5.5 billion modernization push. Airports nationwide will be utilizing facial recognition software, and over 250 airports will be accepting digital ID verification. It’s a similar situation with the U.S. Customs and Border Protection. Biometric data collected at borders is often retained indefinitely, and it’s increasingly shared with law enforcement and intelligence agencies, raising concerns about lack of oversight. Border control isn’t just about fences anymore. It’s about fingerprints, facial scans, and AI predictions.”
- License plate readers are nearly ubiquitous: “They’re designed to capture, analyze, and store vehicle data in real time. Think of them as a cop on the corner of your street, taking notes about every car that passes – its color, its make, its year, where it’s going, how often it goes there, how long it stays, and much more. Now, imagine an army of cops on every corner of your city doing that. This is what Flock [Safety brand] cameras are, except they are mounted on poles and traffic lights.”

Bassler also recommends a number of ways to fight back against what he calls the growing “ecosystem” of surveillance and its normalizing influence.
Finally, Bassler reminds us to push back politically and let our voices be heard. One way to do that is to urge Congress to finish passing the Fourth Amendment Is Not For Sale Act and send it to the president’s desk. VICE’s full interview with Bassler is worth a read.

Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied attacks that previously required specialized expertise and massive organizations. Not anymore. Now, they write: “Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.” The rest of their snapshot is a collection of sobering statistics.
When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid.

Security consulting firm Koi recently published an exposé about a new online privacy threat, one with the unforgettable name of “ShadyPanda.” The scheme allowed browser extensions to infect 4.3 million Chrome and Edge users. In this case, “infect” means the extension sits there quietly, takes control whenever it wants, and then does pretty much whatever it pleases.
ShadyPanda’s extensions often worked legitimately for years before being activated and turned into full-blown spyware – making the scheme an especially effective tool for keeping tabs on businesses. Some of the extensions were simple wallpaper galleries or productivity tools, and many had been marked as “trusted” or “verified” by the marketplaces that hosted them. One of the key vulnerabilities this research exposed was the whole “trust and verify” approach: once approved by the various marketplaces, extensions were never re-verified. And because most users leave auto-updating on, an extension could quietly build a large user base and then be activated as a spy tool whenever needed. Koi reports: “Chrome and Edge's trusted update pipeline silently delivered malware to users. No phishing. No social engineering. Just trusted extensions with quiet version bumps that turned productivity tools into surveillance platforms.” And where is all that collected data going? To surveillance-obsessed China, of course.

Worried that you might be infected? Check out The Hacker News’ partial list of the culprits. Infosecurity Magazine recommends you also check your browser extensions and remove anything you don’t recognize or no longer use. And turn off auto-updating while you’re at it. (One way to take that inventory is sketched below.) It is a dispiriting truth of modern life that we are – and likely always will be – in a footrace against hackers and thieves, whose tools will grow even more dangerous as AI evolves. But we don’t have to be helpless. We can take satisfaction in knowing that by embracing best practices we can stay a step ahead and leave the ShadyPandas of the world empty-handed.
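Taking that inventory doesn’t require trusting the browser’s own UI. Here is a minimal sketch, assuming a default Chrome profile on Linux or macOS, that walks the Extensions folder on disk and prints each installed extension’s ID, version, and declared name (names shown as “__MSG_...__” are placeholders resolved from the extension’s locale files). Edge users would point it at the analogous Edge profile directory.

```python
import json
from pathlib import Path

# Default Chrome profile locations; adjust the path for your OS/profile.
CANDIDATE_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",  # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
]

def list_installed_extensions() -> None:
    """Print the ID, version, and manifest name of each extension on disk."""
    for base in CANDIDATE_DIRS:
        if not base.is_dir():
            continue
        for ext_dir in sorted(base.iterdir()):
            if not ext_dir.is_dir():
                continue  # skip stray files
            for version_dir in sorted(ext_dir.iterdir()):
                manifest = version_dir / "manifest.json"
                if manifest.is_file():
                    data = json.loads(manifest.read_text(encoding="utf-8"))
                    name = data.get("name", "?")  # may be a __MSG_...__ key
                    print(f"{ext_dir.name}  v{version_dir.name}  {name}")

if __name__ == "__main__":
    list_installed_extensions()
```

Anything in that printout you don’t recognize is a candidate for removal, and the extension IDs in the first column are what you would compare against published lists of flagged extensions.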
San Jose, California, has 474 cameras tracking license plates – more than enough to create a network whose primary use seems to be mass invasion of privacy rather than criminal investigation. A new lawsuit against the city reveals that from June 2024 to June 2025, the police department conducted more than 250,000 warrantless searches of its license plate database. City officials say the plate readers help solve serious crimes, including homicides, a claim the lawsuit does not dispute. But there aren’t anywhere near 250,000 felonies in San Jose each year – which means those warrantless searches are being used for something else. The plaintiffs see two possibilities: 1) dragnet surveillance or 2) an outright tracking system.

If it is a tracking system that San Jose wants, it has the makings of one that is truly Orwellian. The city’s cameras apparently capture data points that include “vehicle, bumper stickers with political or other messages, make, model, color, and other details, depending on the camera's position, as well as GPS coordinates and date and time information.” Even in camera-crazy, data-obsessed California, that’s pushing the envelope. What’s more, San Jose retains the data for a year, while the typical retention period in the state is 30 days. Few other jurisdictions use as many cameras, either per capita or in total. Beyond the sheer scale, it’s the intimacy of this data that rankles privacy advocates. Did you go to the gym last Tuesday morning before work? Did you go out on a date Friday night – and with whom? Did you attend a worship service or a political rally? Or something else? Who knows what peccadilloes lurk in the hearts of citizens? San Jose knows.

When your identity is confirmed by a string of numbers in a computer, are you still yourself if the algorithm determines that you (the person) are not you (the digital ID)? One state, Utah, is leading the nation in answering this question with policies that safeguard humans, while Washington, D.C. is heading down the path of reducing humans to algorithms. Consider the ACLU’s Jay Stanley, who praised Utah for its “State-Endorsed Digital Identity” (SEDI), the state’s new framework for digital ID systems. In an approach that should be the norm rather than the notable exception, the Beehive State puts privacy first. Utah begins with the conviction that identity “is not something bestowed by the state, but that inherently belongs to the individual; the state merely ‘endorses’ a person’s ID.” In other words, our identities belong to us. We are born with them. We own them. With that realization comes newfound respect for privacy and other forms of personal freedom.

This view of identity stands in sharp contrast to the definition Stanley found in the data-driven world of federal law enforcement. There, identity is becoming something only the state can grant, defaulting to incomplete or faulty digital verification of citizenship. To be clear, both Utah’s SEDI platform and the federal approach rely on digital ID systems, but one is a case study in digital due diligence while the other illustrates the dangers of slapdash digital recklessness. The federal system is built on incomplete databases, poorly designed architecture, evolving (meaning far-from-perfect) technology, and an utter disregard for the constitutional rights of individuals. Utah’s approach differs from the federal one in several important ways.
Stanley goes on to quote the Ranking Member of the House Homeland Security Committee, who reports that an app (called Mobile Fortify) used by Immigration and Customs Enforcement (ICE) now constitutes the “definitive” determination of a person’s status, “and that an ICE officer may ignore evidence of American citizenship – including a birth certificate.” That’s bad enough on its own, of course, but along the way the government now sweeps up Americans’ biometric identifiers en masse. The databases Mobile Fortify accesses contain not only our photographs but enough records to constitute a permanent digital dossier. Congress did not get to review, much less approve, any of this. The American people never voted on it. In fact, the whole thing leaves us wondering what happened to the Privacy Act, signed into law by President Ford in 1974 and described as “the American Bill of Rights on data.”

By declaring that identity is solely digital, determined by stealthy algorithms and policies, and deniable to those whose data is nonexistent, incomplete, or inaccurate, the federal standard – in sharp contrast to Utah’s – subverts 250 years of constitutional practice. Remember: our founders built the world’s most vibrant democracy on pieces of parchment copied by hand. In any truly free society, identities are personal possessions that help secure individual rights and facilitate voluntary participation in society. Identities bestowed by the state ultimately serve only the state. That we even need to ponder the nature of identity reveals the absurdity of these abuses of our personhood and privacy. Nevertheless, here we are. Without transparent conversations and healthy debate, we face a future in which we are whoever the state says we are, made of malleable 0s and 1s, with nothing grounded in the physical world. It’s a discussion that, as of now, Utah alone seems committed to having.

Imagine being targeted for surveillance because of your race – not with facial recognition or government inspection of your personal digital data, but through your electric meter. For residents of parts of Sacramento, this is exactly what happened, as a decade-long scheme quietly bled Americans’ privacy one kilowatt-hour at a time. Sacramento’s Municipal Utility District (SMUD) and local police zeroed in on Asian-American customers, flagging those deemed to be using “too much” electricity. Many were assumed to be growing marijuana illegally – and police eagerly requested bulk data on entire ZIP codes to feed their suspicions. In July, the Electronic Frontier Foundation joined the Asian American Liberation Network in asking the Sacramento County Superior Court to end the local utility district’s illegal dragnet surveillance program. Last week, the court agreed, finding that routine, ZIP-code-wide data dumps had nothing to do with “an ongoing investigation.” The court wrote: “The process of making regular requests for all customer information in numerous city ZIP codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation.” The response from EFF was even sharper: “Investigations happen when police try to solve particular crimes and identify particular suspects.
The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.” The court recognized the obvious danger: dragnets turn vast numbers of innocent citizens, and entire communities, into suspects. Still, it wasn’t a clean sweep. The court stopped short of ruling that SMUD’s practice violated the search-and-seizure clause of California’s Constitution. But even a qualified victory is still a victory. We are reminded that privacy wins do happen – one dragged-into-the-sunlight surveillance program at a time. This win is something to be thankful for as we count our blessings this week.

Another in a long line of privacy-busting apps is making headlines. Anthony Kimery of Biometric Update reports that Immigration and Customs Enforcement (ICE) has an app that allows an officer to photograph a license plate, run it through commercial platforms, and “instantly retrieve a vehicle’s historical sightings.” The data that can be called up includes a vehicle’s “travel history, ownership records, and associated personal data.” In other words: dossier building. In the old days, the feds mostly kept extensive files on criminals, suspects, and witnesses. Now merely driving a vehicle is reason enough to assemble a file that includes almost everything there is to know about someone.

The tech is powered by Motorola and Thomson Reuters, among others. Privacy advocates have previously called out Motorola for license-plate privacy breaches, and a 2022 Georgetown University report identified the firm as a go-to seller for agencies in search of consumer data, including utility records and driver’s license information. In 2019, Vice reported that Thomson Reuters’ contracts with ICE were lucrative, which perhaps is why “The Answer Company” wouldn’t respond when Privacy International pressed for details about those dealings in 2018. With this latest reporting, Kimery makes clear that ICE has found the perfect partners in its quest to build a national surveillance infrastructure: “The scale is enormous. With billions of detections stored in Motorola’s network and deep identity datasets flowing from Thomson Reuters, the mobile app gives ICE a level of situational awareness that previously required specialized investigative teams and large analytic centers.”

The shift toward a national scale is an ominous one. Whereas agencies like ICE previously focused on border regions, ABC News notes: “Border Patrol has built a surveillance system stretching into the country’s interior that can monitor ordinary Americans’ daily actions and connections for anomalies instead of simply targeting wanted suspects. Started about a decade ago to fight illegal border-related activities and the trafficking of both drugs and people, it has expanded over the past five years.” Thomson Reuters previously got into trouble for selling personal data, a fact that the City of Denver recalled this summer when it put the brakes on an extension of its police contract with the company. Thoughtful objections by municipalities like Denver are admirable. But without robust constitutional guardrails installed by Congress and the states, there’s no stopping invasive juggernauts like this one. As we concluded the last time we shared news about Motorola’s involvement in license plate surveillance: “The need for lawmakers in Congress and the state capitals to set guardrails on these integrating technologies is growing more urgent by the day.
Perhaps the best solution to many of these 21st century problems is to be found in a bit of 18th century software – the founders’ warrant requirement in the Fourth Amendment to the Constitution.”