Columbia’s Knight Institute Goes to Court to Find Out As we’ve noted, a veritable gaggle of organizations (including a service called Gaggle) are helping schools to monitor student activity on district-issued devices – tracking every website, every keystroke (and potentially snapping pictures of students’ private lives). These arrangements lack transparency. Parents are only told it’s necessary to ensure “public safety” or some version of “safeguarding student mental health.” In the meantime, school districts and taxpayers are shelling out millions to the ed tech industry. And all that collected data? Surveillance companies like GoGuardian and Gaggle have signed a Student Privacy Pledge that they will not sell students’ personally identifying information. Despite pledges from school districts and tech companies, more clarity is needed about who can access students’ information and why. This inscrutable practice of student monitoring is about to get a little more attention – in the form of a lawsuit aimed at unearthing the facts. Attorney Jennifer Jones of the Knight First Amendment Institute describes the student surveillance industry in detail and makes the legal case against it in the Teen Vogue online newsletter. The Knight Institute’s lawsuit isn’t the first of its kind, but its timing amid the cultural chaos of artificial intelligence suggests it could be a tipping point for transparency. This lawsuit is also not about specific privacy violations alleged by individuals, so it won’t be settled for damages as some previous cases have been. On paper, student surveillance systems sound great: The monitoring is designed to prevent self-harm, cyberbullying, and violence. And yet, as Jones points out, the standard list of related keywords and websites the software provides can be customized – making it capable of going far beyond universal safety concerns to serve the political or cultural agenda du jour. 
What happens if a student tries to access a banned book, for example? Should that be reported? This is all just one search word away from a dystopian episode of The Twilight Zone. As has been reported from multiple quarters, there is scant, merely anecdotal evidence that any of these systems accomplish what they purport to – but plenty of evidence of misfires. Moreover, the law on which this burgeoning surveillance apparatus is based, the Children’s Internet Protection Act of 2000, requires no measures beyond basic obscenity filters. The ed tech industry has done a bait and switch to take advantage of well-intentioned school administrators who are desperate to solve some of the most heartbreaking problems of our time. It would be nice if AI-powered surveillance were the quick fix, but it’s not. It is a blunt-force instrument with chilling implications up and down the Bill of Rights. We don’t need to normalize an educational-corporate-juridical surveillance state. The answers to the problems of school violence and self-harm are not easy, and they won’t be solved by technology alone. They must be mitigated through connection and relationships: talking, not stalking. So it’s time for a reckoning, and a conversation that brings all of us to the table. We hope the Knight First Amendment Institute’s lawsuit makes that candid and open conversation happen. Here’s some suggested further reading: Superman Isn’t the Only One with X-Ray Vision: Apparently, Your Wi-Fi Can See Through Walls Too (4/11/2025)
We are reminded of a Vice story in 2023 that should have received much more attention than it did. Then again, it can be challenging to keep up when new threats to privacy seem to emerge daily. That story, with a very recent twist, is this: It turns out that Wi-Fi is capable of sensing human presence, potentially even pinpointing location, determining posture/position, and tracking movement. Unsurprisingly, the underlying technology comes courtesy of Facebook’s AI team, which originally relied on optical methods like cameras. Now, Carnegie Mellon researchers have realized that Wi-Fi is the perfect vehicle for solving “limitations” of the original optical approach – limitations such as not being able to see people in the dark or behind furniture. And that’s not creepy at all, is it? Call us old-fashioned, but we just have the feeling that terrible things could come from being able to spy on people in the dark. And the rudimentary nature of what Carnegie Mellon has “accomplished” won’t remain rudimentary for long. Believe us when we say technologists will figure this out, and in very short order. Because, wouldn’t you know it, Carnegie Mellon’s iteration is just the latest in a long line of “Wi-Fi sensing” advancements. According to MIT, it’s a broad field with the potential to “usher in new forms of monitoring.” They predict that, in effect, it’s the future of motion detection technology. Only that future is now. Verizon, Origin Wireless, Cognitive Systems Corporation, AXIS, and Infineon all have services on offer that use some form of Wi-Fi sensing. The goals – “health metrics,” “elder safety,” “home security” – sound commendable, as is always the case with privacy incursions and surveillance overreach. The commercial and social justifications are also appealing, even compelling. But what happens when someone with less-than-wholesome intentions gains access? To say that we need robust guardrails around such technology is the epitome of understatement.
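The core idea behind Wi-Fi sensing is simple enough to sketch in a few lines: a body moving through a room perturbs the radio channel, so a receiver can flag presence merely by watching how much its signal readings fluctuate compared with an empty room. Here is a minimal, hedged illustration – all signal values are synthetic, and real systems analyze far richer channel-state data than raw signal strength:

```python
# Toy illustration of Wi-Fi motion sensing: a moving body perturbs the
# radio channel, so variance in signal readings over a sliding window
# rises above the noise floor of an empty room. All data is invented.
from statistics import pvariance

def motion_detected(rssi_window, threshold=4.0):
    """Flag motion when signal variance exceeds an empty-room baseline."""
    return pvariance(rssi_window) > threshold

empty_room = [-52, -52, -53, -52, -51, -52]      # stable channel
person_walking = [-52, -47, -58, -44, -60, -50]  # channel perturbed

print(motion_detected(empty_room))      # False
print(motion_detected(person_walking))  # True
```

Commercial “Wi-Fi sensing” products build on the same principle, only with much finer-grained channel measurements, which is what makes posture and location estimates possible.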
And the time to build those guardrails for private and public use of this technology? Probably 2023. Is What the Supposed Terror-Watch Program Is Really Being Used for If this were a political thriller, “Quiet Skies” might be Russia’s clandestine government surveillance program being used to eliminate enemies of the state by poisoning their tea with polonium every time they take a flight. In reality, “Quiet Skies” is the Transportation Security Administration’s secret spying program for the Air Marshal Service. First outed by the Boston Globe in 2018, Quiet Skies singles out potentially dangerous flyers for close attention and inspection (“enhanced observation”). Enhanced observation is a 45-minute process that squeezes every inch of clothing, inspects the lining of suitcases, and requires a live review of every electronic device (meaning take it out, turn it on, and hand it over). Two bomb-sniffing canine teams and a plainclothes TSA supervisor may also be involved and, in the sky, up to three Air Marshals are tasked with watching these suspected passengers’ every move. “SSSS” is TSA’s boarding pass designation for this treatment, which suggests that no focus groups or historians were consulted beforehand. In many cases, such inspections are undoubtedly necessary to track bad actors intent on doing harm to the United States. As people who fly often with our family members, we are glad the government is on the lookout for the next potential shoe-bomber. Whistleblowers have indicated, however, that the program is also being abused to target political opponents rather than serving as a $400 million anti-terrorist safety net. Just ask Tulsi Gabbard, who was targeted in 2024 after returning from Rome with her husband. By then, of course, the Iraq War veteran and former Democratic representative had become the Biden Administration’s persona non grata du jour after she endorsed and campaigned for Donald Trump.
With Gabbard now the Director of National Intelligence, we hope that Rep. Tim Burchett’s (R-TN) request for answers as to why Gabbard was targeted will now see the light of day. Was she simply unlucky in being randomly chosen for this treatment, which has happened to one of us? If politics is involved in any way, that would be a very serious misuse of security policy. You don’t have to be a fan of Director Gabbard to see how such an authority could be misused by any administration in any direction. Employing such tools to surveil political opponents is how republics fall. As facial recognition and biometric scanning systems expand to 400 U.S. airports, Sen. Jeff Merkley (D-OR) is asking if this could be the beginning of a U.S. surveillance state. In a video interview with Philip Wegman of RealClearPolitics, Sen. Merkley said: “I'm concerned about the way facial recognition is used to encroach upon freedom and privacy around the world. We see China enslaving a million Uyghurs, and a tool they use is facial recognition software. It's so inexpensive and pervasive; if you put that power in the hands of a government, you can't know where it's going to go. “This is not the kind of tool you want to give to the government in a free country. You would never know you have the ability to opt out at any airport where they're doing this program." FBI PSA: The Safe Bet Is to Assume It’s Fake Remember when the only person you worried might fall prey to scammers was your favorite aunt, who had only her Welsh Corgi at home with her during the day? “Now, Trixie,” you’d say, “don’t agree to anything and always call me first.” Those days are over. Forget your late aunt Trixie. Worry about yourself. 
Imagine if you received a phone call from a close friend, a family member, or even your spouse that was actually an utterly convincing AI-generated version of that person’s voice – urgently begging you to provide a credit card number to spring them out of a filthy jail in Veracruz or pay an emergency room hospital bill. The age of AI augurs many things, we are told. But while we’re waiting for flying taxis and the end of mundane tasks, get ready to question the veracity of every form of media you encounter, be it text, image, audio, or video. In what is sure to be the first of many such public service announcements, the FBI is warning that the era of AI-powered fraud hasn’t just dawned – it is fully upon us. The theme of the FBI’s announcement is “believability.” It used to be that scams were easy to spot – the writing was laughably bad, or the video and audio were noticeably “off” or even a little creepy, a phenomenon known as the Uncanny Valley effect. The newfound power of generative AI to produce realistic versions of traditional media has put an end to such reliable tells. Anyone who thinks they’re immune to such trickery misunderstands the nature of generative AI. Consider:
Whenever a friend or family member sends a video that clearly shows them in need of help (stranded on vacation, perhaps, or having their wallet stolen at a nightclub), don’t automatically assume it’s real, no matter how convincing it looks. And thanks to generative AI’s “vocal cloning” ability, a straight-up phone call is even easier to fake. So, what can we do? The FBI advises: Agree to a secret password, phrase, or story that only you and your family members know. Do the same with your friend groups. Then stick to your guns. No matter how close your heartstrings come to breaking, if they don’t know the secret answer, it’s a scam-in-waiting. The FBI also recommends limiting “online content of your image or voice” and making social media accounts private. Fraudsters scrape the online world for these artifacts to produce their deepfake masterpieces. All generative AI needs to create a convincing representation of you is a few seconds of audio or video and a handful of images. Rest in peace, Aunt Trixie. We miss her and the good old days when all we had to do was warn her not to give her personal information to a caller who said he was from the Corgi Rescue Fund. Today, if an AI scamster wanted to, he could have Aunt Trixie call you from the grave – needing money, of course. Imagine a law enforcement agent – an FBI agent, or a detective in a large police department – who wants to track people passing out leaflets. Current technology might use facial recognition to search for specific people who are known activists, prone to such activity. Or the agent could try not to fall asleep while watching hours of surveillance video to pick out leaflet-passers. Or, with enough time and money, the agent could task an AI system to analyze endless hours of crowds and human behavior and eventually train it to recognize the act of leaflet passing – probably with mixed results.
A new technology, Vision Language Models (VLMs), is a game-changer for AI surveillance – what a modern fighter jet is to a biplane. In our thought experiment, all the agent would have to do is instruct a VLM system, “target people passing out leaflets.” And she could go get a cup of coffee while it compiled the results. Jay Stanley, ACLU Senior Policy Analyst, in a must-read piece, says that a VLM – even if it had never been trained to spot a zebra – could leverage its “world knowledge (that a zebra is like a horse with stripes).” As this technology becomes cheaper and commercialized, Stanley writes, you could simply tell it to look out for kids stepping on your lawn, or to “text me if the dog jumps on the couch.” “VLMs are able to recognize an enormous variety of objects, events, and contexts without being specifically trained on each of them,” Stanley writes. “VLMs also appear to be much better at contextual and holistic understandings of scenes.” They are not perfect. Like facial recognition technology, VLMs can produce false results. Does anyone doubt, however, that this new technology will only become more accurate and precise with time? The technical flaw in Orwell’s 1984 is that each of those surveillance cameras watching a target required another human to watch that person eat, floss, and sleep – and to try not to fall asleep themselves. But VLMs make those ever-watching cameras watch for the right things. In 1984, George Orwell’s Winston Smith ruminated: “It was terribly dangerous to let your thoughts wander when you were in a public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide."
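The query-driven pattern Stanley describes can be sketched in a few lines: the operator states a target in plain language, and a model is asked, frame by frame, whether the scene matches. The “model” below is a deliberately crude stand-in stub (a real system would call a multimodal VLM API); everything here – function names, frame descriptions, timestamps – is invented for illustration:

```python
# Sketch of query-driven video monitoring: an operator states a target in
# natural language, and a vision language model is consulted per frame.
# stub_vlm is a stand-in for a real multimodal model; it just checks that
# every query word appears in a (pre-written) frame description.

def stub_vlm(frame_description, query):
    """Stand-in for a VLM: does the described scene match the query?"""
    return all(word in frame_description for word in query.split())

def monitor(frames, query):
    """Return timestamps of frames the 'model' flags for the operator."""
    return [ts for ts, desc in frames if stub_vlm(desc, query)]

frames = [
    (0, "empty plaza at dawn"),
    (1, "person passing leaflets to pedestrians"),
    (2, "dog jumps on the couch"),
]
print(monitor(frames, "passing leaflets"))  # [1]
```

The point of the sketch is the interface, not the recognition: once the hard perception work is delegated to a general-purpose model, retargeting the surveillance system is just a one-line change of query.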
Thanks to AI – and now to VLMs – the day is coming when a government official can instruct a system, “show me anyone who is doing anything suspicious.” Coming soon, to a surveillance state near you … Is It a Felony to Ask for Pictures of Your License Plate? Here's a philosophical question for you: If no one searches for the information stored in a database, does that mean the information doesn't exist? It may be right there – where Column 32 meets Row 743 – but if no one has executed a search, has it been “found” or “seen” yet? Does it even exist? Now hang on to that curious idea for a moment and we’ll circle back. Recall that we recently commended the nonprofit periodical Cardinal News for publishing an investigative series on the growing use of surveillance technology by local police in Southwestern and South Central Virginia. As part of their investigation, Cardinal News drove through nearly 20 cities, towns, and counties, then used Virginia’s Freedom of Information Act (FOIA) to request the video surveillance data of their vehicle. And what was the result of these FOIA requests?
The city of Roanoke and the Botetourt County Sheriff want the City Circuit Court to rule whether they “really have to” provide the data Cardinal News requested. In their complaint, Roanoke and the Botetourt Sheriff make three less-than-compelling arguments:
A final note: As Cardinal News points out, Virginia law says computers can’t be used to gather identifying information – account numbers, credit card numbers, biometric data, fingerprints, passwords, or other truly private information. “That’s what the statute is protecting,” the newspaper argues. In other words, the law is not meant to protect you from your own license plate number. Where does such chutzpah come from? This FOIA response perhaps shows that local government is learning from the mental gymnastics and rhetorical sleights-of-hand that federal agencies have mastered in fobbing off lawful requests. We look forward to seeing how these too-clever-by-half arguments fly in front of a Virginia judge. Stay tuned. EFF Touts New Rayhunter Detector We’ve long followed law enforcement’s reliance on stingrays at the federal, state, and local levels. These devices simulate cell phone towers to fool nearby devices into connecting and giving up everything – texts, calls, emails, and more, along with the location of the cellphone and information about the user/owner. Law enforcement uses stingrays to target specific criminals, but the problem is – as is so often the case with surveillance technologies – that the data of everyone in the vicinity gets swept up, including that of peaceful protesters. These sweeps pose a direct threat to the most precious rights Americans have – the First Amendment rights to free speech and to petition the government for a redress of grievances. Protests are not some Sixties-style fad that never went away. The right to protest is as home-grown as the Boston Tea Party, the Million Mom March, and the March for Life. Yet there are numerous reports of stingrays and similar technologies being used by authorities to clandestinely spy on large-scale public protests. Most disturbing is the FBI’s insistence on keeping any use of a stingray in specific cases a state secret.
Based on documents obtained through PPSA Freedom of Information Act requests, we know that the FBI has used nondisclosure agreements to force local jurisdictions to hide the fact that stingrays were used, even in open court. Now, thankfully, the Electronic Frontier Foundation has gone beyond protesting and filing court briefs to work with technologists willing to roll up their sleeves and get out the soldering iron. EFF is presenting an open-source tool to help detect stingray use. The aptly named Rayhunter will set you back only about $30 – the cost of the hardware, an Orbic RC400L hotspot (check Amazon, eBay, or any of your geeky uncles). Once the device is in hand, simply follow the instructions on EFF’s open-source Rayhunter website. As the Rayhunter gets out into the market, protesters of all stripes will be able to know if their First Amendment-protected activities are being surveilled – and to livestream the results. Other steps should follow: FBI Director Kash Patel or Congress should mandate full disclosure about the origin of all evidence collected by a stingray and presented in court against a criminal defendant. Every American has the right to face their accuser and to be confronted with the evidence against them, even when that evidence is digital and the result of proprietary technology. For now, let us applaud the Electronic Frontier Foundation for giving Americans the all-too-rare chance to answer the question, “Am I being surveilled?” At the very least, Americans engaging in their First Amendment-protected right to protest can know if the government is turning their own phones against them. United States v. Rolando Williamson It is always refreshing to thumb through a court opinion that reads like an Elmore Leonard novel. For example, in a recent opinion of the Eleventh Circuit Court of Appeals, one defendant is identified as “a.k.a. Baldhead, a.k.a.
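Detectors in this family generally work by watching the phone’s radio event log for behavior legitimate towers rarely exhibit – a forced downgrade to insecure 2G, a request for the permanent subscriber ID (IMSI) in the clear, or a null cipher. A hedged sketch of that heuristic follows; the event names are illustrative inventions, not Rayhunter’s actual internals:

```python
# Hedged sketch of the kind of heuristics a cell-site-simulator detector
# applies: scan a log of radio events for red flags such as a forced 2G
# downgrade or an IMSI requested in the clear. Event names are invented
# for illustration and do not reflect Rayhunter's real implementation.

SUSPICIOUS = {"downgrade_to_2g", "imsi_requested_in_clear", "null_cipher"}

def flag_events(event_log):
    """Return logged radio events that resemble stingray behavior, in order."""
    return [event for event in event_log if event in SUSPICIOUS]

log = ["attach_ok", "downgrade_to_2g", "handover", "imsi_requested_in_clear"]
print(flag_events(log))  # ['downgrade_to_2g', 'imsi_requested_in_clear']
```

No single event is proof of a stingray – a rural tower may legitimately fall back to 2G – which is why tools in this space report suspicious indicators for human review rather than verdicts.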
Ball Head.” And the opinion contains numerous references to whether “a cup of ice” is code for an ounce of meth, and to extensive evidence presented in court – guns, money, dope, a gold necklace seized from a home – that could serve as props for Netflix’s Narcos. Our guess is that the several defendants in this case, whose convictions were mostly upheld by the court, did not earn enough merit badges to become Eagle Scouts. But they are still Americans with constitutional rights. And, for the good of us all, they should get the same Fourth Amendment protections as the rest of us. Did they? Here are the facts: The home of one Rolando Williamson in Birmingham, Alabama, was persistently surveilled by pole cameras from October 2018 through August 2019. The cameras warrantlessly recorded the comings and goings of Williamson and his visitors nonstop, including his front and back yards – the area often referred to in Fourth Amendment law as the home’s “curtilage.” On the basis of this persistent recording of a home, the government performed a sting operation and followed up with warrants to search Williamson’s home. We agreed with three of the six judges on the First Circuit Court who, in a similar case, Moore v. United States, found that a “reasonable expectation of privacy” was violated when the government placed a pole camera in front of a woman’s home for eight months. In this case, the Eleventh Circuit ruled that similarly persistent surveillance did not violate the Fourth Amendment. The court reasoned that, because one of the cameras overlooked the public street in front of Williamson’s home, and the other recorded the exposed and publicly viewable backyard, the cameras “could view only what was visible from the public streets in front of the house and the public alley behind it.” The court rejected the defense’s comparisons to the U.S. Supreme Court’s Carpenter v.
United States (2018), which found a Fourth Amendment violation in law enforcement’s seizure of a suspect’s location history from a cellphone tower. The court also asserted that this case did not resemble United States v. Jones (2012), in which the Supreme Court held that attaching a GPS device to a vehicle amounted to a search requiring a warrant. “By contrast, a pole camera does not track movement,” the Eleventh Circuit found. “It does not track location. It is stationary – and therefore does not ‘follow’ a person like a GPS attached to his vehicle.” Moreover, “the Carpenter decision concerned a technology that is meaningfully different than pole cameras. Pole cameras are distinct both in terms of the information they mine and the degree of intrusion necessary to do so.” We question the court’s conclusion about the narrowness of data mined by a pole camera. A persistent camera does track the movement of residents and their visitors in and out of a home. It potentially reveals a target’s political, religious, and romantic interests. Watching the movements for months around the curtilage of a home – which is highly protected in Fourth Amendment law – is in fact very intrusive. These are ripe questions for future cases. As for the Eleventh Circuit, it declared that it is not making a general rule on the constitutionality of pole cameras. State and federal courts remain divided on that question. And it is a question that will not go away. From pole cameras to drones, aerial panoramas from balloons that can loiter for months, and other persistent forms of surveillance, the courts – and likely the Supreme Court – will need to set a rule on these forms of outside-in surveillance. To see that they do, PPSA will be looking to provide legal support in cases that present the best fact patterns. “Flock Safety” has nothing to do with birds. It is a $3.5 billion pillar of the burgeoning surveillance industry.
Flock’s particular surveillance niche is automated license plate recognition, and its cameras currently operate in more than 40 states and 4,000 cities, serving some 5,000 separate law enforcement agencies. The influence of this corporation is growing. In a recent Virginian-Pilot article, Peter Dujardin calls attention to a bill being considered by the state’s General Assembly. Put forth by House Majority Leader Charniele Herring, the legislation would authorize expanded reliance on Flock Safety cameras on state highways, representing a massive expansion of this surveillance technology (important given that Virginia has the nation’s third-largest state highway system). To her credit, Del. Herring’s bill also includes important privacy protections. These include:
The Virginia bill is to be lauded for including these guardrails. Yet it still lacks what is arguably the most important safeguard – a requirement that authorities obtain a search warrant, as the Fourth Amendment demands, before accessing the database. So it remains the case that in places like Norfolk, where a staggering 172 Flock Safety cameras easily track the city’s 238,000 residents, authorities can still freely access the image database, no warrant or other valid justification required. We are fast approaching a tipping point between Fourth Amendment privacy rights and an unfettered technocratic surveillance state in the mold of Xi’s China – a panopticon in which a citizen’s every move is monitored, just in case. As we have written before, there is a proper place for surveillance systems. The targeted use of surveillance – with probable cause and a court-issued warrant – is necessary, productive, and constitutional. But what’s happening in Virginia and elsewhere is tantamount to general surveillance because of its scale and accessibility. Using license plate readers on a vast statewide highway network casts an enormously wide net, and with that comes the risk of overreach. Good intentions mean little to someone whose life is torn apart when they are unjustly accused of a crime – all because authorities trusted a still-developing and largely unrestrained technology to make the call. The constitutional right to privacy inherently includes the right not to be constantly watched and continuously tracked, by default, and for no just reason. The U.S. Fourth Circuit Court of Appeals in Richmond, Virginia, heard oral arguments Thursday in United States v. Chatrie, a case that poses an important question at the heart of a dramatic split with the Fifth Circuit: Do geofence warrants violate the Fourth Amendment? In that hearing, Judge James Andrew Wynn had a message for law enforcement: warrants “don’t mean you can’t do your job.
It means you need probable cause with particularity to be able to get information. And when we as a court begin to rewrite the Constitution so that we can allow law enforcement officers to do that which the Supreme Court has already told us they cannot do – that’s a problem.” This case started in 2019 when a bank robber absconded with $200,000 from Call Federal Credit Union in Midlothian, Virginia. With no leads to speak of, investigators turned to Google, requesting location information for everyone within a 150-meter radius of the bank at roughly the time of the crime. Police eventually landed on Okello Chatrie as the prime suspect, but only after searching the location information of 19 people, some dining at a Ruby Tuesday’s and some staying at a nearby Hampton Inn. Chatrie attorney Michael Price told the court that the government might as well have searched the apartments of anyone within that given area – which should be flatly prohibited under the particularized warrant requirements of the Fourth Amendment. Chatrie eventually reached a plea agreement with the government but appealed a federal district court ruling denying his motion to suppress the geofenced information. A Fourth Circuit panel initially rejected Chatrie’s argument based, in part, on his voluntary exposure of data to the tech giant. He did this via the opt-in function on his phone, which – let’s be frank – is a legal vulnerability that most consumers fail to understand. PPSA filed an amicus brief in support of an en banc rehearing, which the court agreed to and held on Thursday. Judge Harvie Wilkinson described far-reaching ramifications of taking away the geofencing tool. He said: “Next time, it's not going to be just a bank robber. It could be a murder. It could be a terrorism attack. I don't think you realize just how much you're taking off the table in terms of the tools that law enforcement can use in the most serious of situations." 
But Judge Wynn disagreed, noting: “The result does not drive the means. If we are going to do the police's job, then let's just declare the Fourth Amendment nonexistent and just say anytime you want to do a search, just do it.” PPSA has demonstrated that a non-particularized digital dragnet across a 17.5-acre swath of land, and the subsequent search of the private data of those within that area, is the technological descendant of the “general warrants” of the colonial era. In last year’s United States v. Jamarr Smith, the Fifth Circuit came to a similar conclusion, writing that the “use of geofence warrants – at least as described herein – is unconstitutional under the Fourth Amendment.” It will likely be up to the Supreme Court to bridge this chasm between the Fourth and Fifth Circuits. The Court’s leanings are clear. In Carpenter v. United States (2018), the Supreme Court found that the government must obtain a probable cause warrant before reviewing a suspect’s location history emitted by a cellphone. Why shouldn’t similar reasoning apply when it comes to geofence warrants that cover millions of innocent people? PPSA will continue to echo the pointed critique of Judge Wynn, who concluded: “If we do this, we’re the ones who are going to broaden it. And the broadness is not just on Mr. Chatrie – it’s on every citizen who is under the Constitution of the United States. You just deprived them of an individual right that exists in the Fourth Amendment. You, every one of you sitting here right now, can have your location data ... and it can be put in a great pool and then you have no privacy whatsoever.” PPSA urges the Fourth Circuit to join the Fifth Circuit in upholding the constitutional rights of all Americans. United States v. Chatrie A detective in Midlothian, Virginia, in 2019 asked Google to ping cellphone locations of everyone who passed within a circumscribed area within one hour of the robbery of nearly $200,000 from a credit union.
That order led to a sweep through a Ruby Tuesday restaurant, a Hampton Inn, an apartment complex, and a nursing home within the prescribed area. The Gordian knot of issues raised by this wide-ranging search will be examined in oral arguments in United States v. Chatrie in an en banc hearing to be held by the Fourth Circuit Court of Appeals in Richmond, Virginia, at 9 a.m. Thursday. The court will consider: Does the wholesale expropriation of the cellphone and location data of a large number of people in a geofenced area amount to a modern version of the “general warrants” of the agents of the British Crown during the colonial era? A lower court judge, Hannah Lauck, took her guidance from the U.S. Supreme Court in Carpenter v. United States (2018), which held that the search of a suspect’s location history from a cellphone tower came under the Fourth Amendment’s requirement for a warrant. Judge Lauck wrote “it is difficult to overstate the breadth of this [geofence] warrant” and that an “innocent individual would seemingly have no realistic method to assert his or her privacy rights tangled within the warrant. Geofence warrants thus present the marked potential to implicate a ‘right without a remedy.’” And, as every law student knows, a right without a remedy is no right at all. The Fourth Circuit panel, however, reversed that ruling, holding that no warrant at all was required in this case. The court reasoned that the limits on location tracking from Carpenter applied only to longer-term tracking. The Eleventh Circuit in Atlanta, in a similar case, agreed. Then the Fifth Circuit in New Orleans held – correctly in our view – that not only is there an expectation of privacy in location data, but broad geofence warrants are inherently unconstitutional. As a result, the appellate courts are not just split, they look like the spaghetti tangle of tracks in a railway yard. Such tangles are usually untangled by the U.S. Supreme Court. 
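Whatever the doctrinal outcome, the mechanics of such a sweep are worth pausing on: a geofence request reduces to a center point, a radius, and a filter over everyone’s stored location fixes – whoever falls inside is swept in. A minimal sketch of that filter (all coordinates and device names invented for illustration):

```python
# Minimal sketch of a geofence sweep: every stored location fix within a
# radius of a center point is returned, regardless of whose device
# produced it. All coordinates and device names are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def geofence(fixes, center, radius_m=150):
    """Return every (device, lat, lon) fix inside the fence."""
    return [f for f in fixes if haversine_m(f[1], f[2], *center) <= radius_m]

center = (37.5066, -77.6497)  # invented center point
fixes = [
    ("device_a", 37.5067, -77.6498),  # roughly 15 m away: swept in
    ("device_b", 37.5300, -77.6000),  # kilometers away: excluded
]
print([d for d, *_ in geofence(fixes, center)])  # ['device_a']
```

The simplicity is the point: the filter makes no distinction between a suspect, a diner at a Ruby Tuesday, and a hotel guest – particularity has to come from the warrant, because it cannot come from the geometry.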
But after PPSA filed an amicus brief in favor of an en banc hearing by the full Fourth Circuit, that court agreed to allow all of its judges to weigh the constitutional equities in this case. We asked the court to consider: if the government can request the location of all the individuals within a geofenced area, could it also, for example, request all photos in the cloud that were taken within that same area? After all, AI can now estimate, with astonishingly high accuracy, the location of a photograph. Invoking Carpenter, we asked the court if we have to leave the public’s Fourth Amendment rights to “the mercy of advancing technology.” To hear the court’s oral argument, go to the court’s calendar and search for “Chatrie.” Or just wait and we will give you a digest of answers to the judges’ questions and their apparent leanings. This is an exceptionally important case for the Fourth Amendment. Stay tuned. Ratcliffe Endorses “Appropriate Safeguards” for Section 702 John Ratcliffe slid through his confirmation hearing for his nomination as Director of the Central Intelligence Agency on a greased toboggan. Along the way, he offered encouraging glimpses into his thinking about surveillance reform. Sen. James Lankford (R-OK) spoke up for Section 702, the Foreign Intelligence Surveillance Act authority that allows federal agencies to surveil foreign threats on foreign soil. Ratcliffe said that Section 702 is “an indispensable national security tool” and noted that information gleaned from programs authorized by that law often comprises half of the president’s daily intelligence briefing. But Ratcliffe also acknowledged that Section 702 “can be abused and that we must do everything we can to make sure it has appropriate safeguards.” Ratcliffe told the Senate Select Committee on Intelligence that surveillance “can’t come at the expense of Americans’ civil liberties.” Sen.
John Cornyn (R-TX) said that Ratcliffe in a private conversation had observed that surveillance authorities are somewhat like steak knives in the kitchen: useful but dangerous in the wrong hands. The problem in the past, the senator from Texas said, was a “lack of trust in people who’ve had access to those tools.” That seemed to be a reference to the FBI, which had used Section 702 powers to conduct warrantless searches of Americans’ communications more than 3.4 million times in a single year. There were also some irritating moments for surveillance reformers in the hearing. Several senators characterized all critics of Section 702 as wanting to repeal that authority and expose Americans to terrorists and spies, without acknowledging that it is possible to criticize and reform that law without ending it. Under questioning from Sen. Michael Bennet (D-CO), Ratcliffe spoke of his unique experience as a former House Member who sat on the Judiciary Committee and later the House Intelligence Committee, and then served in the executive branch as Director of National Intelligence (DNI). Ratcliffe said that he was surprised that despite having served in the legislative branch on an oversight committee of the intelligence community, “there was so much intelligence I learned for the first time as a DNI that I knew no Member of Congress was aware of. And I think that sort of speaks to my approach and understanding that I take seriously the obligation that I will have to keep this committee fully informed on intelligence issues.” Ratcliffe told the oversight committee point blank that there is much it does not know but should. Perhaps that admission will spur senators to dig deeper and conduct stronger supervision of the intelligence community. The proliferation of automated license plate recognition (ALPR) systems is a boon for safer roadways. These networked cameras can help police spot a stolen car or track fleeing bank robbers with just a few clicks.
These systems are growing in capability as the sheer number of these watchers grows, their data networked and analyzed by artificial intelligence to seamlessly track anyone who drives or rides in a car. Now a privacy advocate has demonstrated that ALPR systems are leaky, easily accessed on private networks without authentication – and even prone to allow a stalker to stream someone’s travels online. Jason Koebler of 404 Media reports that privacy advocate Matt Brown of Brown Fine Security easily turned license plate readers into streaming video. Without any logins or credentials, Brown was able to join the private networks that collect these cameras’ video and data. Worse, he found that many of these cameras are misconfigured in a way that lets an Internet of Things (IoT) search engine access them for online streaming – a dream come true for stalkers, creeps, corporate espionage artists, and perhaps government agencies. Will Freeman, who created an open-source map of U.S. ALPRs, told Koebler that he can write a script to map vehicles to set times and precise locations. “So when a police department says there’s nothing to worry about unless you’re a criminal, there is,” Freeman told 404 Media. Koebler reports that Motorola, the cameras’ manufacturer, promised a fix when informed of these vulnerabilities. Given the liability risk, it is likely this particular technological vulnerability will soon be patched. The longer-term threat pertains to the ubiquity of ALPR systems, which brings to mind the quip attributed to Joseph Stalin about his tanks – “quantity has a quality all its own.” The same is true with camera surveillance. The first few cameras allowed police to catch scofflaws who ran red lights. Many cameras can be used to track people as they drive to political, religious, romantic, or journalistic encounters.
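Freeman’s claim is easy to credit, because the script he describes is almost trivial. A hypothetical sketch – the plates, times, and locations below are invented – shows that given a feed of ALPR reads, grouping by plate and sorting by time reconstructs a driver’s day:

```python
# Invented ALPR reads: (plate, timestamp, camera location) -- illustration only.
reads = [
    ("ABC1234", "2025-01-12 08:02", "Main St & 3rd Ave"),
    ("XYZ9876", "2025-01-12 08:05", "Main St & 3rd Ave"),
    ("ABC1234", "2025-01-12 08:31", "clinic parking garage"),
    ("ABC1234", "2025-01-12 12:14", "house of worship lot"),
    ("ABC1234", "2025-01-12 18:47", "newspaper office block"),
]

def itinerary(reads, plate):
    """Filter reads to one plate and sort by time: a driver's day, reconstructed."""
    return sorted((ts, where) for p, ts, where in reads if p == plate)

for ts, where in itinerary(reads, "ABC1234"):
    print(ts, "->", where)
```

No AI is required for this step; the intelligence value is in the camera network itself, and the script merely reads it out.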
Add AI into the mix, and you take the labor out of following journalist Alice on her way to meet with government insider and whistleblower Bob, or to determine which political donor is meeting with which advocacy group, or which public figure is providing the watcher with kompromat. This capability will only grow more robust, reports Paige Gross of the Florida Phoenix, as IoT technologies create “smart cities” with interconnected webs to make roadways and sidewalks safer and the flow of vehicles and people more efficient. We may feel like we’re in a zone of privacy when we’re in our cars. But the Internet of Things is also transforming cities into places where anonymity and privacy are evaporating. “As the technology becomes increasingly denser in our communities, and at a certain point you have like three of them on every block, it becomes the equivalent to tracking everybody by using GPS,” Jay Stanley of the ACLU told Gross. “That raises not only policy issues, but also constitutional issues.” License plate readers are just one element of a surveillance state being knitted together, day by day. From purchases of our digital data by government agencies and corporations, to the self-reporting we make of our movements by carrying our cellphones, to our cars – which themselves are GPS devices – there is a growing integration of a network of networks to follow our movements, posts, and communications … in the land of the free and the thoroughly surveilled. The need for lawmakers in Congress and the state capitals to set guardrails on these integrating technologies is growing more urgent by the day. Perhaps the best solution to many of these 21st century problems is to be found in a bit of 18th century software – the founders’ warrant requirement in the Fourth Amendment to the Constitution. Readers of a certain vintage will remember a 1980s Motown hit by Rockwell, with backup vocals from Michael and Jermaine Jackson, called “Somebody’s Watching Me.”
The music video of that song is creepy, showing a young man stumbling around his house in fear, agitated by hidden cameras in stuffed animals, actors on television who appear to be watching him, and strangers popping up in his shower. What seemed like paranoia in the age of big hair, shoulder pads, and acid-washed jeans is increasingly commonplace in the third decade of the 21st century. In the People’s Republic of China, 1.4 billion people live under constant surveillance by networked facial recognition cameras, the monitoring of their social media posts, and the mapping of their contacts through texts and emails. Armed with this ocean of data, AI is ready to flag anyone who says or does something slightly at odds with the regime. Even in our democracy, about a dozen federal intelligence agencies buy and inspect the personal and geolocation data of Americans – exposing our private lives, beliefs, and religious and political practices – without resorting to the Fourth Amendment requirement for a warrant. The focus of this blog has long been on this breach of Americans’ constitutional rights, with all of its social and political implications for our democracy. But now a new study raises a different question – what does surveillance do to our brains? And what are the implications for public health? Suppose I told you not to turn around, but to just take my word that there is a man standing in the window behind you watching your every move. Does the feeling that thought engenders make your body stiffen? Does it make the skin on the back of your neck tingle? Is your every move suddenly self-conscious? Now imagine feeling this all the time. A report in SciTechDaily details the findings of an Australian professor of neuroscience, Kiley Seymour, on the effect of surveillance on the brain function of 54 participants in his experiment.
“We know CCTV changes our behavior, and that’s the main driver for retailers and others wanting to deploy such technology to prevent unwanted behavior,” Seymour said. “However, we show it’s not only overt behavior that changes – our brain changes the way it processes information.” The study found that people who know they are being surveilled become hyperaware of faces, recognizing others faster than a control group. Though the study’s participants are unaware of it, they are jumpy, always on the lookout to categorize someone as benign or a potential threat. Seymour told SciTechDaily that his study found the same “hypersensitivity to eye gaze in mental health conditions like psychosis and social anxiety disorder where individuals hold irrational beliefs or preoccupations with the idea of being watched.” One can imagine how this might make people in China jittery and anxious. On the other hand, we doubt this effect is being generated in the United States by our government’s gathering and reviewing of our data, even when it exposes the most personal and intimate aspects of our lives. Many Americans are unaware of this breach of their privacy. And for those who are aware, that creepy feeling of being watched is probably not associated with the abstract idea of purchased data in a server somewhere. In a way, that is a shame. The review of our data by the FBI, IRS, Department of Homeland Security, and other agencies should give you that creepy feeling, like that man standing behind you right now. The Eyes of Luigi Mangione and a McDonald’s Employee Shortly after the vicious public murder of Brian Thompson, CEO of UnitedHealthcare, Juliette Kayyem of The Atlantic wrote a perceptive piece about the tech-savviness of the gunman, who mostly succeeded in hiding his face behind a mask and a hood. “The killer is a master of the modern surveillance environment; he understands the camera,” Kayyem wrote. “Thompson’s killer seems to accept technology as a given.
Electronic surveillance didn’t deter him from committing murder in public, and he seems to have carefully considered how others might respond to his action.” At this writing, police in Pennsylvania are holding Ivy League grad Luigi Mangione as a “person of interest” in relation to the murder. Despite many media reports of incriminating details, Mangione is, of course, entitled to a presumption of innocence. But enough of the killer’s face had been shown on social media for a McDonald’s employee to call the police after seeming to recognize Mangione in those images. Whoever killed Thompson, he made a mistake – as Kayyem noted – in showing his smile while flirting with someone. This allowed a significant slice of his profile to be captured. But even when the killer was careful, his eyes and upper face were captured by a camera in a taxicab. The lesson seems to be that a professional criminal cannot fully evade what Kayyem calls a “surveillance state” made up of ubiquitous cameras. We applaud the use of this technology to track down stone-cold killers and other violent criminals. Another example: CCTV technology was put to good use in the UK in 2018 when Russian agents who tried to kill Russian defector Sergei Skripal and his daughter with the nerve agent Novichok were identified on video. The Skripals survived, but a woman who came across a perfume bottle containing the toxin sprayed it on her wrist and died. When the images of the Russian operatives surfaced, they claimed they were tourists who traveled to Salisbury, England, to see its medieval cathedral. These are, of course, excellent uses of cameras and facial recognition technology. Danger to a civil society arises when such technology is used routinely to track law-abiding civilians going about their daily tasks or engaged in peaceful protests, religious services, the practice of journalism, or some other form of ordinary business or free speech.
This is why a search warrant should be required to access the saved product of such surveillance to ensure it is used for legitimate purposes – catching killers, for example – and not to spy on ordinary citizens. Far from showing that the urban networks of comprehensive surveillance are riddled with holes, recent events show that they are tighter than ever. That is a good thing, until it is not. Hence the need for safeguards, starting with the Fourth Amendment. You are probably not old enough to remember the hit 1960s television series The Prisoner, in which Patrick McGoohan played a secret agent being held for interrogation in a dystopian resort on a nameless island. Whenever McGoohan’s character made it to the beach to find a small boat to row to freedom, the mysterious powers-that-be unleashed the Rover – a giant white balloon capable of blocking escapees, knocking them down, or even suffocating them. No idea, it seems, is too lurid for the Chinese Communist Party to render into reality in the service of its surveillance state. Pedestrians in China are now watching in amazement as the streets are patrolled by RT-G robots – essentially a metal ball surrounded by a tire – that subject people to facial recognition scans, possible arrest, and worse. (See it in action here.) The U.S. military toyed with a prototype, but considered it for warfare, not for civilian use. The Sun tabloid calls them, without exaggeration, “all terrain, spherical robocops.” They are resistant to attack, even from a man wielding a baseball bat. The robots, produced by China’s Logon Technologies, are not passive observers. They are equipped with artificial intelligence that decides when and how to deploy net guns, tear gas sprayers, grenades, loudspeakers, and sound wave devices. The lethal potential of robots is not theoretical. In the United States, police routinely use robots and drones for surveillance to assess the danger of a situation.
In one instance in 2016, a gunman in Dallas who had shot five policemen and exchanged gunfire with police was killed by a police robot. The Dallas robot was deployed to protect the police and nearby citizens. Moreover, it was fully under human control. When AI is combined with new inventions as it is with the RT-G bots, however, the decision to use force, even lethal force, is up to an algorithm. A lot of bad ideas are becoming reality in China. But don’t expect them to stay there. Should you be reading this blog? If you’re at work, on a computer provided for you by your employer, is the content of this blog sufficiently work-related for you to justify to your employer the time you’ve spent reading it? Your search history and the time you spend on particular websites during working hours are two of the most obvious things employers track. Now a research paper from Cracked Labs, a non-profit based in Austria, produced with help from other non-governmental organizations and an Oxford scholar, maps out dozens of technologies that allow companies to track employees’ movements and activities at the office. In “Tracking Indoor Location, Movement, and Desk Occupancy in the Workplace,” Cracked Labs demonstrates how vendors are selling technology that pairs wireless networking with Bluetooth technology to follow employees in their daily movements. The former can pinpoint the location of smartphones, laptops, and other devices employees use and often carry. Bluetooth beacons can link to badges, security cameras, and video conferencing systems to track employee behavior.
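The positioning math behind such systems is not exotic. One common ingredient is estimating a device’s distance from a beacon by its received signal strength (RSSI), using the standard log-distance path-loss model. A minimal sketch – the parameter values here are illustrative assumptions, not figures from the Cracked Labs paper:

```python
def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance (meters) from a beacon using the log-distance
    path-loss model: RSSI = tx_power - 10 * n * log10(d).
    tx_power_dbm is the calibrated RSSI at 1 meter (vendor-specific), and
    the exponent n depends on the indoor environment (roughly 2 to 4).
    Both defaults here are illustrative assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Under these assumed parameters, a badge reporting -59 dBm is about 1 m
# from the beacon, and one reporting -79 dBm is roughly 10 m away.
print(round(rssi_to_distance_m(-59), 1))  # 1.0
print(round(rssi_to_distance_m(-79), 1))  # 10.0
```

Distance estimates from three or more beacons can then be combined by trilateration to place a badge at a particular desk – which is how “desk attendance” tracking becomes possible without any camera at all.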
Quoting marketing literature from Cisco, Cracked Labs writes: “Companies can get a ‘real time view of the behavior of employees, guests, customers and visitors’ and ‘profile’ them based on their indoor movements in order to ‘get a detailed picture of their behavior.’” Tracking 138 people with 11 Wi-Fi points, Cisco claims, generated several million location records. Not to be outdone, a European vendor, Spacewell, installs sensors in ceilings, next to doors, and even under desks to track “desk attendance.” Nicole Kobie of ITPro reports that one in five office workers is now being monitored by some kind of activity tracker. She also cites surveys finding that tracked employees are 73 percent more likely to distrust their employer, and twice as likely to be job hunting, as those who are not tracked in their workplace. Cracked Labs concludes: “Once deployed in the name of ‘good,’ whether for worker safety, energy efficiency, or just improved convenience, these technologies normalize far-reaching digital surveillance, which may quickly creep into other purposes.” It is not difficult to imagine that such surveillance could be used by a rogue manager for stalking, to find out who is gathering around the water cooler or kitchen, or to find something to embarrass an office rival. Even when these technologies are used for their stated purposes, we all lose something when privacy is degraded to this extent. Now, how was that for work-related content? In the HBO documentary Surveilled, investigative journalist Ronan Farrow delves into the Pandora’s box that is Israel’s NSO Group, a company (now on a U.S. Commerce Department blacklist) that unleashes technologies that allow regimes and cartels to transform any smartphone into a comprehensive spying device. One NSO brainchild is Pegasus, the software that reports every email, text, and search performed on smartphones, while turning their cameras and microphones into 24-hour surveillance devices. It’s enough to give Orwell’s Big Brother feelings of inadequacy.
Farrow covers well-trodden stories he has long followed in The New Yorker, also reported by many U.S. and British journalists, and well explored in this blog. Farrow recounts the litany of crimes in which Pegasus and NSO are implicated. These include Saudi Arabia’s murder of Jamal Khashoggi, the murder of Mexican journalists by the cartels, and the surveillance of pro-independence politicians in Catalonia and their extended families by Spanish intelligence. In the latter case, Farrow turns to Toronto-based Citizen Lab to confirm that one Catalonian politician’s sister and parents were comprehensively surveilled. The parents were physicians, so Spanish intelligence swept up the confidential information of their patients as well. While the reality portrayed by Surveilled is a familiar one to readers of this blog, it drives home the horror of NSO technology as only a documentary with high production values can do. Still, this documentary could have been better. The show is marred by too many reaction shots of Farrow, who frequently mugs for the camera. It also left unasked follow-up questions of Rep. Jim Himes (D-CT), Ranking Member of the House Intelligence Committee. In his sit-down with Farrow, Himes made the case that U.S. agencies need to have copies of Pegasus and similar technologies, if only to understand the capabilities of bad actors like Russia and North Korea. Fair point. But Rep. Himes seems oblivious to the dangers of such comprehensive spyware in domestic surveillance. Rep. Himes says he is not aware of Pegasus being used domestically. Yet it was deployed by Rwandan spies to surveil the phone of U.S. resident Carine Kanimba in her meetings with the U.S. State Department. Kanimba was looking for ways to liberate her father, settled in San Antonio, who was lured onto a plane while abroad and kidnapped by Rwandan authorities. Rep. Himes says he would want the FBI to have Pegasus at its fingertips in case one of his own daughters were kidnapped.
Even civil libertarians agree there should be exceptions for such “exigent” and emergency circumstances in which even a warrant requirement should not slow down investigators. The FBI can already track cellphones and the movements of their owners. If the FBI were to deploy Pegasus, however, it would give the bureau redundant and immense power to video record Americans in their private moments, as well as to record audio of their conversations. Rep. Himes is unfazed. When Farrow asks how Pegasus should be used domestically, Rep. Himes replies that we should “do the hard work of assessing that law enforcement uses it consistent with our civil liberties.” He also spoke of “guardrails” that might be needed for such technology. Such a guardrail, however, already exists. It is called the Fourth Amendment of the Constitution, which mandates the use of probable cause warrants before the government can surveil the American people. But even with probable cause, Pegasus is too robust a spy tool to trust the FBI to use domestically. The whole NSO-Pegasus saga is just one part of a much bigger story in which privacy has been eroded. Federal agencies, ranging from the FBI to the IRS and Homeland Security, purchase the most intimate and personal digital data of Americans from third-party data brokers, and review it without warrants. Congress is even poised to renege on a deal to narrow the definition of an “electronic communications service provider,” making any office complex, fitness facility, or house of worship that offers Wi-Fi connections obligated to secretly turn over Americans’ communications without a warrant. The sad reality is that Surveilled only touches on one of many crises in the destruction of Americans’ privacy. Perhaps HBO should consider making this a series. They would never run out of material. Catastrophic ‘Salt Typhoon’ Hack Shows Why a Backdoor to Encryption Would be a Gift to China 11/25/2024
Former Sen. Patrick Leahy’s Prescient Warning It is widely reported that the breach of U.S. telecom systems allowed China’s Salt Typhoon group of hackers to listen in on the conversations of senior national security officials and political figures, including Donald Trump and J.D. Vance during the recent presidential campaign. In fact, they may still be spying on senior U.S. officials. Sen. Mark Warner (D-VA), Chairman of the Senate Intelligence Committee, on Thursday said that China’s hack was “the worst telecom hack in our nation’s history – by far.” Warner, himself a former telecom executive, said that the hack across the systems of multiple internet service providers is ongoing, and that the “barn door is still wide open, or mostly open.” The only surprise, really, is that this was a surprise. When our government creates a pathway to spy on American citizens, that same pathway is sure to be exploited by foreign spies. The FBI believes the hackers entered the system that enables court-ordered taps on voice calls and texts of Americans suspected of a crime. These systems are put in place by AT&T, Verizon, and other telecoms to allow the government to search for evidence, a practice authorized by the 1994 Communications Assistance for Law Enforcement Act. Thus the system of domestic surveillance used by the FBI and law enforcement has been reverse-engineered by Chinese intelligence and turned back on our government. This point is brought home by FBI documents PPSA obtained from a Freedom of Information Act request that reveal a prescient question put to FBI Director Christopher Wray by then-Sen. Patrick Leahy in 2018. The Vermont Democrat, now retired, anticipated the recent catastrophic breach of U.S. telecom systems. In his question to Director Wray, Sen. Leahy asked: “The FBI is reportedly renewing a push for legal authority to force decryption tools into smartphones and other devices.
I am concerned this sort of ‘exceptional access’ system would introduce inherent vulnerabilities and weaken security for everyone …” The New York Times reports that according to the FBI, the Salt Typhoon hack resulted from China’s theft of passwords used by law enforcement to enact court-ordered surveillance. But Sen. Leahy correctly identified the danger of creating such domestic surveillance systems and the next possible cause of an even more catastrophic breach. He argued that a backdoor to encrypted services would provide a point of entry that could eventually be used by foreign intelligence. The imperviousness of encryption was confirmed by authorities who believe that China was not able to listen in on conversations over WhatsApp and Signal, which encrypt consumers’ communications. While China’s hackers could intercept text messages between iPhones and Android phones, they could not intercept messages sent between iPhones over Apple’s iMessage system, which is also encrypted. Leahy asked another prescient question: “If we require U.S. technology companies to build ‘backdoors’ into their products, then what do you expect Apple to do when the Chinese government demands that Apple help unlock the iPhone of a peaceful political or religious dissident in China?” Sen. Leahy was right: Encryption works to keep people here and abroad safe from tyrants. We should heed his warning – carving a backdoor into encrypted communications creates a doorway anyone might walk through. A suspicious husband or wife can now examine the route history of a family car or the location data of a smartphone to track a spouse’s movements. We tend to think of location history surveillance as a uniquely 21st century form of snooping. In an amusing article in the MIT Press Reader, Dartmouth scholar Jacqueline D. Wernimont writes that such surveillance is older than we think. 
For example, The Hartford Daily Courant in 1879 reported: “A Boston wife softly attached a pedometer to her husband when, after supper, he started to ‘go down to the office and balance the books.’ On his return, fifteen miles of walking were recorded. He had been stepping around a billiard table all evening.” In a twist worthy of today’s spy agencies, Wernimont also reports that a U.S. admiral in 1895 gave junior watch officers common pocket watches with pedometers hidden inside. The results showed that the ensigns had been asleep or resting most of the night. A night watchman at a railroad yard was given a pedometer to track his movements. It was later discovered that he evaded his responsibilities by sleeping while the pedometer was attached to a moving piston rod. The use of pedometers was an early precursor of surveillance tools used today by employers to track the movements, browsing, communications, and daily routines of their workers. Wernimont writes: “As the pedometer became a vector for surveillance by those in power, people who were able quickly developed hacks designed to frustrate such efforts.” The problem with modern technology is that it is much harder to thwart, or even to anticipate when and how one is being watched. No piston rod will save us. Vice presidential candidate J.D. Vance (R-OH) told Joe Rogan over the weekend that backdoor access to U.S. telecoms likely allowed the Chinese to hack American broadband networks, compromising the data and privacy of millions of Americans and businesses. “The way that they hacked into our phones is they used the backdoor telecom infrastructure that had been developed in the wake of the Patriot Act,” Sen. Vance said on the podcast. That law gave U.S. law enforcement and intelligence agencies access to the data and operations of telecoms that manage the backbone of the internet.
Chris Jaikaran, a specialist in cybersecurity policy, added in a recently released Congressional Research Service report about a cyberattack from a group known as Salt Typhoon: “Public reporting suggests that the hackers may have targeted the systems used to provide court-approved access to communication systems used for investigations by law enforcement and intelligence agencies. PRC actors may have sought access to these systems and companies to gain access to presidential candidate communications. With that access, they could potentially retrieve unencrypted communication (e.g., voice calls and text messages).” Thus, the Chinese were able to use systems developed for U.S. law enforcement and intelligence agencies to see any U.S. national security order, and presumably any government extraction of the intercepted communications of Americans and foreign targets under FISA Section 702. China doesn’t need a double agent in the style of Kim Philby. Our own Patriot Act mandates that we make it easier for hostile regimes to find the keys to all of our digital kingdoms – including the private conversations of Vice President Kamala Harris and former President Donald Trump. As alarming as that is, it is hard to fully appreciate the dangers of such a penetration. The Chinese have so far chosen not to use their presence deep in U.S. systems to “go kinetic” by sabotaging our electrical grid and other primary systems. The possible consequences of such deep hacking are highlighted in a joint U.S.-Israel advisory that details the actions against Israel that were enabled when an Iranian group, ASA, wormed its way into foreign hosting providers. ASA hackers allowed the manipulation of a dynamic digital display in Paris for the 2024 Summer Olympics to denounce Israel and the participation of Israeli athletes on the eve of the Games. ASA infiltrated surveillance cameras in Israel and Gaza, searching for weak spots in Israeli defenses.
Worst of all, the hack enabled Hamas to contact the families of Israeli hostages in order to “cause additional psychological effects and inflict further trauma.” The lesson is that when our own government orders companies to develop backdoors into Americans’ communications, those doors can be swung open by malevolent state actors as well. Sen. Vance’s comments indicate that there is a growing awareness of the dangers of government surveillance – an insight that we hope increases Congressional support for surveillance reform when FISA Section 702 comes up for renewal in 2026. Why Signal Refuses to Give Government Backdoor Access to Americans’ Encrypted Communications 11/4/2024
Signal is an instant messenger app operated by a non-profit to enable private conversations between users protected by end-to-end encryption. Governments hate that. From Australia, to Canada, to the EU, to the United States, democratic governments are exerting ever-greater pressure on companies like Telegram and Signal to give them backdoor entry into the private communications of their users. So far, these instant messaging companies don’t have access to users’ messages, chat lists, groups, contacts, stickers, profile names or avatars. If served with a probable cause warrant, these tech companies couldn’t respond if they wanted to. The Department of Justice under both Republican and Democratic administrations has continued to press for backdoors to breach the privacy of these communications, citing the threats of terrorism and human trafficking as reasons. What could be wrong with that? In 2020, Martin Kaste of NPR told listeners that “as most computer scientists will tell you, when you build a secret way into an encrypted system for the good guys, it ends up getting hacked by the bad guys.” Kaste’s statement turned out to be prescient. AT&T, Verizon, and other communications carriers complied with U.S. government requests and placed backdoors on their services. As a result, a Chinese hacking group with the moniker Salt Typhoon found a way to exploit these points of entry into America’s broadband networks. In September, U.S. intelligence revealed that China gained access through these backdoors to surveil the internet traffic and data of millions of Americans and U.S. businesses of all sizes. The consequences of this attack are still being evaluated, but they are already regarded as among the most catastrophic breaches in U.S. history. There are more than just purely practical reasons for supporting encryption.
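The reason these companies “couldn’t respond if they wanted to” is that end-to-end encryption derives the keys on the users’ own devices; the service only relays public values. A toy Diffie-Hellman exchange illustrates the principle – the parameters below are for illustration only and are not secure cryptography (Signal actually uses elliptic-curve key agreement and the double-ratchet protocol):

```python
import secrets

# Toy finite-field Diffie-Hellman. P is the Mersenne prime 2**127 - 1;
# real messengers use elliptic-curve groups (e.g., Curve25519).
P = 2**127 - 1
G = 5

# Each user generates a secret on their own device...
alice_secret = secrets.randbelow(P - 2) + 1
bob_secret = secrets.randbelow(P - 2) + 1

# ...and only these public values ever cross the (tappable) network.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Both endpoints derive the same shared key. The relay server -- and any
# wiretap on it -- sees only the public values and cannot compute the key.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)
assert alice_key == bob_key
print("shared key derived on both devices; the server never held it")
```

A mandated “backdoor” amounts to escrowing such keys with the provider – and once the keys exist server-side, the Salt Typhoon episode shows who else can reach them.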
Meredith Whittaker, president of Signal, delves into the deeper philosophical issues of what society would be like if there were no private communications at all in a talk with Robert Safian, former editor-in-chief of Fast Company. “For hundreds of thousands of years of human history, the norm for communicating with each other, with the people we loved, with the people we dealt with, with our world, was privacy,” Whittaker told Safian in a podcast. “We walk down the street, we’re having a conversation. We don’t assume that’s going into some database owned by a company in Mountain View.” Today, moreover, the company in Mountain View transfers the data to a data broker, who then sells it – including your search history, communications and other private information – to about a dozen federal agencies that can hold and access your information without a warrant. When it comes to our expectations of privacy, we are like the proverbial frogs being boiled by degrees. Whittaker says that this is a “trend that really has crept up in the last 20, 30 years without, I believe, clear social consent that a handful of private companies somehow have access to more intimate data and dossiers about all of us than has ever existed in human history.” Whittaker says that Signal is “rebuilding the stack to show” that the internet doesn’t have to operate this way. She concludes we don’t have to “demonize private activity while valorizing centralized surveillance in a way that’s often not critical.” We’re glad that a few stalwart tech companies, from Apple and its iPhone to Signal, refuse to cave on encryption. And we hope there are more, not fewer, such companies in the near future that refuse to expose their customers to hackers and government snooping. 
"We don't want to be a single pine tree in the desert," Whittaker says, adding that she wants to "rewild that desert so a lot of pine trees can grow."

We're all resigned to the need to go through security at high-profile sporting and cultural events, just as we do at the airport. The American Civil Liberties Union is raising the question: will that level of scrutiny become the new normal at the mall, at open-air tourist attractions, at outdoor concerts, and just plain walking around town?

The Department of Homeland Security (DHS) is investing in research and development to "assess soft targets and address security gaps" with new technology to track people in public places. It is funding SENTRY – Soft Target Engineering to Neutralize the Threat Reality. SENTRY will apply artificial intelligence to the "integration of data from multiple sources," which no doubt will include facial recognition scans of everyone in a given area, assigning each person a "threat assessment."

We do not dismiss DHS's concern. The world has no lack of violent people, and our country is full of soft targets. Recall the deranged shooter who in 2017 turned the Route 91 Harvest music festival in Las Vegas into a shooting gallery. He killed 60 people and wounded more than 400. A similar act by a terrorist backed by a malevolent state could inflict even greater casualties.

But we agree with the ACLU's concern that such intense inspection of Americans going about their daily business could lead to the "airportization" of America, in which we are always in a high-security zone whenever we gather. The ACLU writes that "security technology does not operate itself; people will be subject to the petty authority of some martinet guards who are constantly stopping them based on some AI-generated flag of suspicion."

We would add another concern. Could SENTRY be misused, just as FISA Section 702 and other surveillance authorities have been misused?
What is to keep the government from accessing SENTRY data for warrantless political surveillance, whether against protestors or disfavored groups targeted by biased FBI agents? If this technology is to be deployed, guardrails are needed. PPSA seconds the ACLU's comment to the watchdog agency, the Privacy and Civil Liberties Oversight Board (PCLOB), asking it to investigate AI-based programs as they develop. Congress should watch the results of PCLOB's efforts and follow up with legal guardrails to prevent the misuse of SENTRY and similar technologies.

Doxing – the practice of exposing a person's location and home address – can have deadly consequences. This lesson was brought home in July 2020, when a deranged man with a grudge against federal judge Esther Salas went to her New Jersey home dressed as a deliveryman, carrying a gun. The judge's 20-year-old son, Daniel Anderl, a Catholic University student, opened the door, only to be shot dead as he moved forward to shield his parents. Out of this tragedy came Daniel's Law, a New Jersey statute advocated by Judge Salas that allows law enforcement, government personnel, judges and their families to have their information completely removed from commercial data brokers.

We're accustomed to the idea that ad-selling social media platforms and the government can track us. Now Krebs on Security is reporting that a new digital service neuters this law and exposes potentially any American to location tracking by any subscriber. This tracking service is enabled by Babel Street, whose core product, Krebs writes, "allows customers to draw a digital polygon around nearly any location on a map of the world, and view a . . .
time-lapse history of the mobile devices coming in and out of the specified area." Krebs reports that a private investigator demonstrated the danger of this technology by discreetly using it to determine the home addresses and daily movements of mobile devices belonging to multiple New Jersey police officers whose families have already faced significant harassment and death threats.

This is just one more sign that in-depth surveillance, once the province of giant social media companies and state actors, is falling into the hands of garden-variety stalkers, snoops, and criminals. PPSA calls on New Jersey legislators, who are ideally positioned to lead a national response to this technology, to develop laws and policy solutions that continue to protect law enforcement, judges, and everyday citizens in their daily rounds and in their homes.