Philip K. Dick, the 20th century writer whose science-fiction stories proved prescient, once declared: “My phone is spying on me.” He might have been paranoid then, but he wouldn’t be now.

Wi-Fi has become the newest battlefield in the surveillance war. First, researchers showed it could sense bodies and furniture in the dark. Then came “WhoFi,” a variant that can detect the size, shape, and makeup of those bodies. A once obscure technology is now advancing at a disturbing clip.

Now comes something simpler – and just as insidious – from Australia. In July 2024, the University of Melbourne used Wi-Fi location data, cross-referenced with CCTV footage, to identify student protestors at a sit-in, reports Simon Sharwood of The Register. This was after the school ordered protestors to leave and warned that anyone who stayed could face suspension, discipline, or police referral.

Despite the students’ misbehavior, the state of Victoria’s Information Commissioner investigated this use of technology, citing possible violations of the 2014 Privacy and Data Protection Act. The final report cleared the university’s CCTV use but found its Wi-Fi tracking out of bounds. Why? Because the school had never clearly disclosed this purpose in its Wi-Fi policies. The Commissioner reports: “Even if individuals had read these policies, it is unlikely they would have clearly understood their Wi-Fi location data could be used to determine their whereabouts as part of a misconduct investigation unrelated to allegations of misuse of the Wi-Fi network.”

The Commissioner called this “function creep.” Or, as we would say, mission creep. Whatever the name, it’s a serious problem. Surveillance technologies rarely stay in their lane. Once deployed, they inevitably “creep” unless nailed down by clear rules, ethical guardrails, and organizational cultures that prize transparency over convenience. To its credit, the university cooperated with the investigation and promised reforms.
But let’s be fair, the University of Melbourne isn’t unique here. We’re all naïve about the countless ways our gadgets betray us. And it’s not just CCTV. No one should be shocked when cameras are used as surveillance tools. It is far less obvious that almost every modern technology can be repurposed to follow us wherever we go.

Yes, Virginia, Wi-Fi tracks location. It always has. And whenever location data is on the table, the odds of being spied on shoot through the roof. What else relies on location data? Practically everything with a battery. If you want to reduce your surveillance footprint, you can’t rip down the cameras – but you can shut down your phone, smartwatch, Fitbit, smartglasses, and every other blinking, beeping device. Or better yet, leave them at home. With the possible exception of pacemakers, of course.

Joseph Cox of 404 Media reminds us of three things that we know to be true about the new era of generative artificial intelligence:
As we’ve written before, AI works best when there’s a human in the loop. Take the case of Citizen.com, whose app is increasingly taking an AI-only approach to crime fighting. Because, really, what could possibly go wrong? Plenty, as you can imagine. Without further ado, here’s 404 Media’s report on what happens when AI is left to its own devices, Citizen-style. It is prone to:
The stakes are as strategic as they are tactical. One of Cox’s sources told him, “This could skew the perception of crime in a particular area,” as AI-created incidents proliferated. By the way, the original name of Citizen – both the app and the company – was, perhaps tellingly, Vigilante. But that’s a story for another day.

Will Congress Follow Montana by Closing the Data Broker Loophole?

Twenty states have enacted major consumer data privacy laws. When will Washington, D.C., wake up and restrict the open season on Americans’ personal information at the federal level? California lit the fuse in 2018, passing laws that set limits on how businesses collect and sell consumers’ data. This year, new privacy laws have taken effect, or soon will, in New Hampshire, Delaware, Iowa, Nebraska, New Jersey, Tennessee, Minnesota, and Maryland.

Montana may offer the best model for federal action. The Montana Consumer Data Privacy Act, which went into effect late last year, mirrors many other state laws while giving consumers strong, clear rights. In Montana, consumers have the right:
Like many other state laws, Montana’s also adds special protections for minors, requiring consent for data sales and targeted ads to children aged 13 to 16. But where Montana truly shines is in closing the notorious “data broker loophole.” That loophole lets government agencies dodge the Fourth Amendment’s warrant requirement by simply buying consumers’ data. Montana now flatly bars law enforcement from purchasing sensitive electronic data – such as electronic communications metadata and precise geolocation information – without a warrant.

The federal government has no such restraint. Agencies from the FBI and IRS to the Department of Homeland Security and the Department of Defense routinely buy and access Americans’ sensitive personal data. Government lawyers insist this is fine because we all “agree” through terms of service – though almost no one reads them, and they never warn consumers that third-party data brokers might be selling their data to the FBI.

As more states pioneer privacy laws, the pressure builds. An intense debate on the data broker loophole in Congress is inevitable. Lawmakers would do well to take a cue from one of Montana’s favorite sons, Gary Cooper, who said: “One nice thing about silence is that it can’t be repeated.”

“I don’t think you can make it off the record once you’ve said it – you can’t call dibs after the fact.” – Journalist Philip Corbett

Wearables are defined by their comfort. But there is a lot about wearable technology that is distinctly uncomfortable, if not Orwellian. Wearable computers hit the mainstream with the introduction of Fitbits and smartwatches in the 2010s. Now, says The San Francisco Standard, the rise of artificial intelligence is adding spy tech to the wearable computing family tree. The newest devices are akin to smartglasses but take that technology’s most invasive feature – recording the environment – and turn the creep factor up to 11.
The new wearables are stylish, somewhat stealthy, and designed to do two things very well: listen and remember. They come in the form of pendants, necklaces, lapel pins – or, in a twist, might even look like a Fitbit or smartwatch. But they are all recording devices capable of capturing the wearer’s every conversation and meeting, then transcribing them, and – the pièce de résistance – using AI to organize, analyze, and mine them for insights (think personal assistant on steroids, or maybe your very own opposition researcher). In some cases, the devices may only transcribe conversations rather than record them, but they’re still listening to and processing conversations, so such distinctions are hardly comforting.

The San Francisco Standard suggests that everyone in Silicon Valley should assume that everything they say, especially at work, is being recorded. Which means the rest of America – and its kitchen tables, coffee houses, and classrooms – won’t be far behind. One venture capital partner told the Standard’s writers that she knew a fellow VC who records all in-person meetings “without telling the other meeting participants. It’s an invasion of privacy and I seriously disapprove of it.” Then, presumably referring to herself and the rest of us would-be audience members, she added, “Of course, this is a horrible way to live your life.”

In terms of the privacy concerns raised by this new generation of wearables, Julian Chokkattu of Wired cracked the code. Earlier generations of recording devices and software “at least required active engagement like a tap or a wake word to activate their ability to eavesdrop.” For the most part, the new devices are passive and always on, which places responsibility for gaining consent on the instigator.
In other words, “Fox, meet henhouse.” In the research, there are many names for the chilling effects that even consensual recording has on conversations, but one of the keenest is the “spiral of silence.” People will varnish the truth, if they bother to speak it at all. They will hold back, self-censor, even shut down. As for the possible effects this sort of tech might have on creativity – in a brainstorming session, for example – we invite you to judge for yourself.

If you think all of this seems like a claim just waiting for a plaintiff, we agree: It’s a one-way express ticket to litigation city. But as with most things AI, the laws governing these devices are in their infancy and court rulings sparse. One corner of Silicon Valley is already fighting back, though: Confident Security is developing Don’t Record Me, a browser plugin that could potentially detect illicit recordings and disrupt them.

What about audible cues or flashing lights to indicate that one of these devices is collecting data? Don’t count on it. One entrepreneur told Wired, in effect, “That would drain too much battery life.” Another claims that all you have to do is think about recording to activate his product. Thankfully, for that mode to work, the wearable has to be affixed to your temple with medical tape. But don’t expect other forms of personal surveillance to be so obvious. All the more reason to require disclosure for private recording – and warrants when government agents listen in on what we say.

“I think the very word stalking implies that you're not supposed to like it. Otherwise, it would be called 'fluffy harmless observation time'.” – Author Molly Harper

TikTok was already a privacy nightmare:
To this troubling list we can now add the following: In violation of the platform’s own policies, sellers are using TikTok to market GPS trackers to stalkers, reports Rosie Thomas of 404 Media. “Unlike AirTags,” one vendor boasts, “this thing doesn’t make a sound, doesn’t send alerts, she will never know it’s there.” In the comments section of a similar ad, one user bragged, “I bought some and put it on cars of girls I find attractive at the gym.”

Lest there be any doubt, Thomas’ report quotes Eva Galperin of the Electronic Frontier Foundation: “This is absolutely being framed as a tool of abuse.” Galperin, co-founder of a non-profit that keeps tabs on such products, categorizes them broadly as “stalkerware.”

The central legal and moral issue underlying stalking, as with all violations of privacy, is consent. Expert Market’s page summarizing GPS tracking laws by state underscores the point: The word “consent” appears in these laws 115 times. When asked about the viral proliferation of ads for these tracker tools, TikTok told 404 Media that they “prohibit the sale of concealed video or audio recording devices on our platform.” And yet, Thomas and her colleagues continued to find such ads every time they looked. Which, of course, should come as a surprise to absolutely no one. This is just one more good reason why President Trump should cease suspending the law requiring TikTok to be sold or shuttered.

America’s enemies aren’t storming our shores with tanks and planes – they’re breaking into our email, phone, and data systems. And right now, we’re making their job too easy. The U.S. Senate can toughen up America’s defenses by passing the Lummis-Wyden amendment (S. Amdt. 3186) to the 2026 National Defense Authorization Act. This bipartisan fix would finally force the Pentagon to use secure, encrypted communications – and end its costly dependence on a handful of Big Tech vendors.
The Scale of Attacks

In 2023, Chinese hackers broke into Microsoft-hosted government email accounts, stealing 60,000 messages from the State Department alone. A year later, another Beijing-backed group hacked into AT&T and Verizon, tapping the phones of Americans including presidential candidate Donald Trump and then-Sen. J.D. Vance. But Vance’s conversations were kept safe. How? He relied on Signal, the end-to-end encrypted app that even the hackers couldn’t crack. The obvious takeaway is that without end-to-end encryption, our most sensitive communications are one hack away from the front page of Beijing’s intelligence briefings.

The Lummis-Wyden Fixes
Why It Matters

Our military today is stuck in walled gardens built by giant tech firms that all too often proved eminently hackable. That’s bad for taxpayers and disastrous for national security. Hackers don’t need to break into every office at the Pentagon – they just need to knock down the door of one weak provider. The Lummis-Wyden amendment puts a lock on those doors.

Congress Must Choose Security

Congress can keep letting foreign spies read Cabinet-level emails and tap presidential phone calls, or it can finally demand that the Pentagon use the best tools available. This amendment is a wake-up call that we can’t defend the country with outdated software. Encryption and competition would at least give our country a fighting chance to keep China and other bad actors out of our business. PPSA calls on the Senate to pass the Lummis-Wyden Amendment to stop giving hackers the upper hand. This measure will better protect our service members, the American homeland, and the private deliberations of our leaders.

Where you drive is personal. So is what you click on and who you communicate with. Combine the two, and suddenly a revealing picture emerges of your political, romantic, financial, and religious beliefs and activities – in short, a comprehensive dossier of your private life. That appears to be what is happening with Flock, which is mashing up its camera surveillance of millions of drivers in 5,000 communities across the United States with digital information gathered on us by data brokers. According to 404 Media, the good news is that after internal deliberations, Flock told its employees in May it would not merge stolen dark web data with information from its network of license plate readers (LPRs). Joseph Cox of 404 Media reported that in a meeting, a Flock supervisor told employees that after a “policy review process,” the company’s new search tool Nova would not incorporate hacked data from the dark web. So far, so good.
Dealing in stolen merchandise is never a good look for a company. Flock, however, announced that it will combine “public records data, Open Source intelligence, and license plate reader data” for law enforcement and other customers. This marks a policy shift. Flock has long insisted that its license plate readers do not collect personally identifiable information, claiming they merely provide law enforcement with a way to track cars tied to crimes. But Jay Stanley of the ACLU reports that the company now plans to plug its systems into commercial data brokers offering “people lookup” services. Stanley writes:

“In the 1970s, after some government agencies were found to be building dossiers on people who aren’t suspected of involvement in crime, like the East German Stasi, Congress enacted the Privacy Act banning agencies from such recordkeeping. Yet the ethically shady and frequently inaccurate data broker industry does basically the same thing, and when law enforcement becomes a customer of those data brokers, it represents an end-run around the law. By tying its LPR data together with data brokers, Flock is effectively automating and scaling the end run around our checks and balances that law enforcement data broker purchases represent …

“Imagine that a police officer stood on your street writing detailed notes about you every time you drove or walked by them. All the details about what your car looks like (make, model, color, distinguishing characteristics, bumper stickers, etc.), as well as details about visible occupants and pedestrians – how many, at what time, their activities, demographic data, what they are wearing, attributes they may have such as a beard, hat, tattoo, or T-shirt, and what that hat, T-shirt, or tattoo might say.
Now imagine that there is an army of police officers doing this on every block.”

Thus, algorithms can now seek patterns in vehicle movements to identify and alert law enforcement to drivers who are “suspect.” Stanley pinpoints why this approach clashes with both the letter and the spirit of the Fourth Amendment: there is a big difference, he writes, between “providing tools for officials to use in investigating suspicion” and “generating suspicion.” The fusion of your purchased data with your movements could do exactly that. One day, something as ordinary as making a right on red or a casual U-turn could transform you from a routine driver into a suspect.

Larry Niven, the acclaimed science-fiction writer, once drolly observed, “I do suspect that privacy was a passing fad.” It certainly seems so today, with networked Ring cameras on every door linked to public and private CCTV, license plate readers, and government agencies buying up our digital lives from data brokers… all of it potentially connected to AI and facial recognition software. Even our homes offer no refuge: drones can look through our windows, and thermal imaging cameras in the hands of police can penetrate walls to watch us move around in our living rooms and bedrooms.

But at least there is one place where surveillance cannot penetrate, one last refuge of absolute, inviolable privacy – the inside of our skulls. We are free to think any thought, sacred or profane, sublime or silly, without fear of detection or punishment by any human authority. But maybe not for much longer. The science journal Cell reports that a computer system has been trained to decode brain waves from people who silently move their mouths while mentally sounding the words to themselves. The signal from the brain is then translated into speech in real time on a computer screen, with an error rate of 26 percent to 54 percent.
Annika Inampudi in Science reports that this technology, as it is refined, will be a godsend to speech-impaired people paralyzed by strokes or neurological conditions such as amyotrophic lateral sclerosis (ALS). To protect test subjects from blurting out private, inner speech, users can be given unique, nonsense phrases like “chitty chitty bang bang” to cue the device to read their thoughts only when they want it to.

It is this latter development that gives us pause. The fact that a safeword is needed to defend against unwanted exposure of thought is concerning. Also concerning is that scientists have had significant success decoding thought even when the subject is not silently mouthing the words he or she is thinking about. The system at times can read mere inner thoughts.

At a time when digital technology evolves on fast-forward, it is not too early to be concerned about how this technology might be abused. After all, a few years ago AI couldn’t pass the Turing test. Now ChatGPT regularly writes entertaining short stories, poems with striking imagery, and student papers that get A’s from naïve professors. The same rapid progression could soon put mind-reading technology in the hands of authorities willing to dip into people’s skulls against their will. Imagine, for example, how this technology might be used in interrogations. In this country, at least, the Fifth Amendment prohibition against self-incrimination should make the results of such mind reading inadmissible. But in professions in which polygraphs are routine, from law enforcement to intelligence and some retail positions, it is easy to imagine how such technology could be abused.

Overall, this speech-decoding technology is a boon for handicapped people who are desperate to communicate. It is a heartening and praiseworthy development that scientists – often caricatured as amoral agents of progress – are diligently devising procedures to compartmentalize the reading of thoughts to only those times when subjects permit it.
Still, this story should give us pause. Something to think about…

“Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.” – Ian Malcolm, Jurassic Park

Just a quick update about the ever-expanding toolkit of the technocratic mass surveillance state: The new kid on the block is GeoSpy, which can examine a photograph and extrapolate your location in seconds. It claims to accomplish this by using only visual data in the image rather than metadata. From a purely technical perspective, that’s a big achievement. From a privacy standpoint, it’s a nightmare.

According to an account first reported by 404 Media and summarized by Alex Hively of SlashGear, the original open source version of GeoSpy was quickly removed when it became clear that it could be used to stalk people. Company founder Daniel Heinen later admonished Joe Rogan and guests in a tweet reminding them that GeoSpy is “only for Law Enforcement and Government” use (which, at the time of the tweet, had only recently become true). That GeoSpy is now “only for” law enforcement and government use is cold comfort. It seems an all-too-familiar narrative, reminding us of Clearview AI’s similarly reckless approach to the ethics of identification technology. By the end of 2021, that facial recognition startup had scraped ten billion images from the web and social media, providing agencies with a powerful new tool to instantly identify us and aid in the quick construction of dossiers of our beliefs, activities, and relationships. Now, thanks to breakneck developments in technology, the government can both identify us and locate us. Consider this statement from GeoSpy founder Heinen: “My job as a leader in my space is to build the best technology that customers are asking for. It's not my job to play the ethics game because our elected officials will eventually figure that out.
I have full faith in the American people to decide who to elect and what to vote on.” (If this were a video, here is where we’d cut away to a dark screen and the sound of crickets.)

We won’t belabor the point, as our readers know full well where all of this is likely to lead. But we will quote ourselves from a related article decrying the surveillance capabilities of drones and satellites: “What is cutting-edge technology today will be standard tomorrow. This is just one more way in which the velocity of technology is outpacing our ability to adjust.” With the rise of GeoSpy, we now have one more reason for Congress and the states to hit pause and reassert the privacy guarantees inherent in the Fourth Amendment.

One last thing: Don’t assume you’re safe just because GeoSpy found a picture that you took indoors. It appears they’ve cracked that nut too, having discovered that their visual model can learn “regional architectural cues.” Silly us, we thought all apartment kitchens looked the same. As Malwarebytes advises, “It’s just become even more important to be conscious about the pictures we post online.”

In Washington, D.C., they call it the “data broker loophole.” This is the legal maneuver by which a dozen federal agencies, ranging from the IRS to the FBI, the Department of Homeland Security, and the Pentagon, purchase records of Americans’ personal digital activity from third-party data brokers. What is this loophole? With a straight face, the government claims that while the Fourth Amendment forbids “unreasonable searches and seizures” of our personal effects, nowhere does the Constitution forbid the government from opening its wallet and simply buying our data. And to be fair, we all routinely click the “agree” box that allows these transfers when scanning social media platforms’ long and hard-to-read terms of service. This is still disingenuous at best.
The digital trails we leave online – our communications, the identities of our friends and associations, our personal financial, romantic, and health secrets, not to mention our search histories – reveal information that can be more intimate than a diary. Americans are noticing this violation of their privacy. A recent Ipsos poll finds that roughly 90 percent of Americans say it is not acceptable for private data brokers to sell our personal data to the government. Congress is certain to soon turn to legislation that will require the government to obtain, as the Constitution requires, a probable cause warrant before inspecting our data. In the meantime, if you want more background on the nature, extent, and abuses of the data broker loophole, here are some useful resources:

1. “What Are Data Brokers, and How Do They Work?” (Proton), June 20, 2025. A detailed primer on data brokers and the risks posed to consumers, including the sale of such data to government agencies without warrants.

2. “Anyone Can Buy Data Tracking U.S. Soldiers and Spies to Nuclear Vaults and Brothels in Germany” (Wired), Nov. 19, 2024. Despite what has to be the most clickable headline in recent history, Wired presents a deep and substantive investigative report that reveals the extent to which the sale of personal data collected by personal devices is putting Americans in uniform and national security at risk.
3. “A Continuing Pattern of Government Surveillance of U.S. Citizens” (Americans for Prosperity, James Czerniawski), April 8, 2025; see p. 4. Eighty percent of Americans agree that the government should “obtain warrants before purchasing location information, internet records, and other sensitive data about people in the United States from data brokers.” And yet federal agencies routinely buy our data, threatening our most basic constitutional rights.

4. “The Intelligence Community Plan to Make It Easier to Buy All Your Data” (Project for Privacy and Surveillance Accountability), June 2, 2025. The Office of the Director of National Intelligence has instituted a plan to make sure Americans’ private data is no longer decentralized, fragmented, siloed, overpriced, and limited – literally everything you might hope your personal data would actually be.

5. “Montana Becomes First State to Close the Law Enforcement Data Broker Loophole” (EFF), May 14, 2025. Montana is the first state to close the data broker loophole, preventing law enforcement from buying personal digital data – like location, communications, and biometrics – without a warrant. Under SB 282, such data can only be accessed with a warrant, user consent, or an investigative subpoena. The law goes into effect Oct. 1, 2025.

6. “Federal Government Circumventing Fourth Amendment by Buying Data From Data Brokers” (Criminal Legal News), April 15, 2025. Summarizing much earlier reporting from Reason and The Wall Street Journal, the article focuses on federal efforts to dodge Carpenter v. United States (2018). Federal agencies routinely purchase commercial cellphone data – which tracks individuals’ movements – without warrants, skirting Carpenter, which requires a warrant for such data.
7. “FISA and the Second Amendment: Gunowners Beware” (Cato), Feb. 1, 2024. If you’re a gun owner and use Apple products, you should be deeply concerned about the ability of federal law enforcement agencies to get a lot of data on you – without ever having to get a warrant.

8. “EPIC White Paper Finds Gaps in State and Federal Privacy Law Coverage of Data Brokers” (EPIC), July 29, 2025. This report argues that data brokers exploit legal loopholes in the Fair Credit Reporting Act (FCRA) and Gramm-Leach-Bliley Act (GLBA) to avoid compliance with modern privacy laws.
9. “Government Purchases of Private Data” (Wake Forest Law Review, 59:1), April 2024. This paper questions the widespread assumption that the Fourth Amendment can never apply to commercial purchases, even though police officers can generally purchase an item available to the public without constitutional restriction.
10. “Federal Acquisition of Commercially Available Information” (POGO), Dec. 16, 2024. The Project On Government Oversight (POGO) warns that federal agencies' unchecked use of commercially available information (CAI), including sensitive personal data purchased from brokers, circumvents Fourth Amendment protections. POGO urges the Office of Management and Budget to end warrantless surveillance practices, increase transparency, and implement strong regulations. Its comment highlights risks to privacy and civil liberties, especially for marginalized communities, and documents past abuses by agencies like DHS, ICE, and the FBI.

Finally, if you are interested in solutions, start with the Fourth Amendment Is Not For Sale Act, which passed the House of Representatives last year. If enacted, this measure would require the government to obtain a warrant before buying Americans’ personal information. PPSA looks forward to this or similar legislation being introduced in the 119th Congress.

“Moral bankruptcy is common in this industry, but I rarely see a company so proud of it.” – Callie Schroeder, Electronic Privacy Information Center

Farnsworth Intelligence sells highly personal data on the cheap. Its business plan is as revolutionary as it is mercenary and brazen: positioning itself as a legitimate business while selling data previously brought to market in the twilight corners of the dark web. The realm in which this company operates is euphemistically known as “open-source intelligence,” or OSINT. Once upon a time, OSINT was primarily composed of publicly available data. But don’t be fooled.
To quote PC World writer Michael Crider, “This is information apparently sourced directly from data breaches, stolen from companies and services in ways that just about every country considers a crime.” And it’s all repackaged to sell at various price points. To wit, 404 Media, whose Joseph Cox broke the story, bought a tiny slice of Farnsworth’s data wares for a mere $50. That, 404 Media reports, was all it needed to eventually mine the addresses of numerous identity theft victims. Perhaps that’s why, as journalists report, the company’s website tells customers they can find up-to-date addresses for debtors. Need data for your multi-million-dollar divorce case? Farnsworth can do that too.

As for the potential for trade secret violations, corporate espionage, and the general use of stolen data in court, EPIC’s Callie Schroeder says stay tuned. There are likely statutes that apply to what Farnsworth is doing, but as with all things digital, judicial rulings have been inconsistent to date. And don’t even get us started on the surreptitious value government agencies of all stripes will place on this kind of dark web data – it could be a warrantless surveillance extravaganza. But there is no denying that shamelessness sells. After the story broke, Farnsworth issued a “404MEDIA” promo code on LinkedIn to celebrate the fact that it “has been getting a lot of attention.”

“The future of AI is not about replacing humans, it's about augmenting human capabilities.” – Sundar Pichai, Google

After you read this, you’ll wish that students using AI to cheat was the biggest problem with the technology. It turns out a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights.
But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we await the outcome of a major lawsuit by Columbia’s Knight Institute designed to compel a school district to disclose the nature of its surveillance tech. Instead, we continue to read more headlines like this one from Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.”

You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was both culturally insensitive and contained a reference to feigned violence. It was also somewhat self-deprecating. It was, in other words, exactly the kind of crass, completely innocent sarcastic drivel that you would expect from a teenager. The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours. Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school. When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, as best we can tell, these are traditional AI systems – unthinking, rules-based programs that have absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context. Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch?
That, of course, is a rhetorical question – something that we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea if any Shakespeare character ever uttered anything like that because we used AI to suggest it.) We get that being proactive about student safety is critical. But monitoring what students type isn’t the right way to do it. Students type – and say – all kinds of tasteless statements because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place. This isn’t working, and it’s time for parents and school systems – and yes, the students themselves, who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors designed to destroy privacy, creativity, and individual expression. We would also remind the school administrators who surely mean well when they initially deploy such systems not to forget the cardinal rule of any AI system: always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal, perhaps joined by a school counselor – before anything gets reported to law enforcement.

“The real danger is the gradual erosion of individual liberties through the automation, integration, and interconnection of many small, separate record-keeping systems, each of which alone may seem innocuous, even benevolent, and wholly justifiable.” – U.S. Privacy Protection Study Commission, 1977

To try to find people in the United States illegally, the Department of Homeland Security (DHS) directed the Centers for Medicare and Medicaid Services (CMS) to comply with its request to sift through the health data of 79 million Medicaid recipients. That means giving DHS access to sensitive personal information, including addresses, birthdates, ethnicity, IP addresses, banking data, immigration status, and Social Security numbers. Twenty states have sued in response, arguing that giving DHS access to such personal data violates privacy protections under multiple federal laws, including the Administrative Procedure Act, the Social Security Act, HIPAA, and, of course, the Privacy Act. Several civil liberties groups, including the Electronic Frontier Foundation, EPIC, and the Protect Democracy Project, have filed an amicus brief in that case.
Our own take is that this is the weaponization of data, a characterization articulated by many others. The thing about weapons, of course, is that they can be pointed in any direction. Today it’s illegal aliens. Next time, it could just as easily be wealthy taxpayers, political dissenters, or those who engage in unpopular speech. Orange County official Jose Serrano told the L.A. Times that such targeting is dangerous because “the information is being used against people.” In other words, it could be used for any reason by this or a future administration against you.

Do you get a creepy feeling, a tingling on the back of your neck, when you think of a camera behind or above you illicitly watching your every move? Now, imagine if that camera has legs – and it’s crawling up your pants. Yes, good old-fashioned German ingenuity has resulted in Kakerlaken – humble cockroaches – being transformed into a robust surveillance platform. These roaches, the actual insects, are fitted with tiny AI backpacks that use neural stimulators under wireless control to steer the little spies to their targets, while they carry miniature cameras and sensors on their tiny, slimy carapaces. In case you are wondering, the German scientists at Swarm Biotactics are using the Madagascar hissing variety, one of nature’s largest and surely among your all-time favorite cockroaches. These cyborg bugs will do more than provide real-time video reconnaissance. They will also detect toxic gas, radiation, and heat. And the technology’s neural interface can coordinate a large number of cyber-roaches to converge on a target as a swarm. Even better! These technologies, reports The Times of India, allow “remote control and autonomous swarming in tight or inaccessible environments.” Germany has invested more than $15 million to perfect this surveillance army, no doubt with Russia’s military threat in mind. Germany’s invention appears to be an advance over U.S.
and Chinese military projects to develop tiny, mosquito-like drones to carry out surveillance. But those tiny robots are limited by range and battery life. A roach lives off the land and can scuttle about for months to a year. Roaches make the perfect sleeper agents, and they require no fake beards or forged passports. We can certainly see the military utility of this project. We also can’t help but note that exotic technologies developed for warfare have a way of migrating – or, perhaps in this case, scuttling – from military to civilian uses. It is perhaps inevitable that similar off-the-shelf AI and sensors will make their way into commercial and law enforcement uses. It’s bad enough when you see an ordinary Kakerlake on the floor when you turn on the kitchen light. It would be even worse if someone recorded you shrieking before you smashed it with your shoe.

“The houses have eyes now.”