The proliferation of automated license plate recognition (ALPR) systems is a boon for safer roadways. These networked cameras can help police spot a stolen car or track fleeing bank robbers with just a few clicks. And these systems are growing in capability as their sheer numbers – generating data that is networked and analyzed by artificial intelligence – allow them to seamlessly track anyone who drives or rides in a car. Now a privacy advocate has demonstrated that ALPR systems are leaky, easily accessed on private networks without authentication – and even prone to allowing a stalker to stream someone’s travels online. Jason Koebler of 404 Media reports that privacy advocate Matt Brown of Brown Fine Security easily turned license plate readers into streaming video feeds. Without any logins or credentials, Brown was able to join the private networks that carry the video and data these cameras collect. Worse, he found that many of these cameras are misconfigured in a way that allows an Internet of Things (IoT) search engine to access them for online streaming – a dream come true for stalkers, creeps, corporate espionage artists, and perhaps government agencies. Will Freeman, who created an open-source map of U.S. ALPRs, told Koebler that he can write a script to map vehicles to set times and precise locations. “So when a police department says there’s nothing to worry about unless you’re a criminal, there is,” Freeman told 404 Media. Koebler reports that Motorola, the cameras’ manufacturer, promised a fix when informed of these vulnerabilities. Given the liability risk, it is likely this particular technological vulnerability will soon be patched. The longer-term threat pertains to the ubiquity of ALPR systems, which brings to mind Joseph Stalin’s famous quip about his tanks – “quantity has a quality all its own.” The same is true of camera surveillance. The first few cameras allowed police to catch scofflaws who ran red lights. 
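Freeman’s point is easy to grasp in code. As a hypothetical sketch (the record format and field names here are invented for illustration, not drawn from any actual ALPR vendor’s data), a few lines of Python are enough to turn a pile of camera reads into a time-ordered travel history for any plate:

```python
# Hypothetical ALPR reads: (plate, ISO timestamp, camera latitude, camera longitude)
reads = [
    ("ABC1234", "2024-12-05T08:02", 40.7128, -74.0060),
    ("XYZ9876", "2024-12-05T08:05", 40.7484, -73.9857),
    ("ABC1234", "2024-12-05T08:31", 40.7484, -73.9857),
    ("ABC1234", "2024-12-05T09:04", 40.7829, -73.9654),
]

def travel_history(reads, plate):
    """Return the time-ordered locations at which a given plate was seen."""
    hits = [(ts, lat, lon) for p, ts, lat, lon in reads if p == plate]
    return sorted(hits)  # ISO timestamps sort chronologically as strings

for ts, lat, lon in travel_history(reads, "ABC1234"):
    print(ts, lat, lon)
```

That is the whole trick: once reads from many cameras sit in one database, a filter and a sort reconstruct a person’s day.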
Many cameras can be used to track people as they drive to political, religious, romantic, or journalistic encounters. Add AI into the mix, and you take the labor out of following journalist Alice on her way to meet with government insider and whistleblower Bob, or to determine which political donor is meeting with which advocacy group, or which public figure is providing the watcher with kompromat. This capability will only grow more robust, reports Paige Gross of the Florida Phoenix, as IoT technologies create “smart cities” with interconnected webs to make roadways and sidewalks safer and the flow of vehicles and people more efficient. We may feel like we’re in a zone of privacy when we’re in our cars. But the Internet of Things is also transforming cities into places where anonymity and privacy are evaporating. “As the technology becomes increasingly denser in our communities, and at a certain point you have like three of them on every block, it becomes the equivalent to tracking everybody by using GPS,” Jay Stanley of the ACLU told Gross. “That raises not only policy issues, but also constitutional issues.” License plate readers are just one element of a surveillance state being knitted together, day by day. From purchases of our digital data by government agencies and corporations, to the self-reporting we make of our movements by carrying our cellphones, to our cars – which themselves are GPS devices – there is a growing integration of a network of networks to follow our movements, posts, and communications … in the land of the free and the thoroughly surveilled. The need for lawmakers in Congress and the state capitals to set guardrails on these integrating technologies is growing more urgent by the day. Perhaps the best solution to many of these 21st century problems is to be found in a bit of 18th century software – the founders’ warrant requirement in the Fourth Amendment to the Constitution. 
Readers of a certain vintage will remember a 1980s Motown hit by Rockwell, with backup vocals from the Jacksons, called Somebody’s Watching Me. The music video of that song is creepy, showing a young man stumbling around his house in fear, agitated by hidden cameras in stuffed animals, actors on television who appear to be watching him, and strangers popping up in his shower. What seemed like paranoia in the age of big hair, shoulder pads, and acid-washed jeans is increasingly commonplace in the third decade of the 21st century. In the People’s Republic of China, 1.4 billion people live under constant surveillance by networked facial recognition cameras, the monitoring of their social media posts, and the mapping of their contacts through texts and emails. Armed with this ocean of data, AI is ready to flag anyone who says or does something slightly at odds with the regime. Even in our democracy, about a dozen federal intelligence agencies buy and inspect the personal and geolocation data of Americans – exposing our private lives, beliefs, and religious and political practices – without resorting to the Fourth Amendment requirement for a warrant. The focus of this blog has long been on this breach of Americans’ constitutional rights, with all of its social and political implications for our democracy. But now a new study raises a different question – what does surveillance do to our brains? And what are the implications for public health? Suppose I told you not to turn around, but to just take my word that there is a man standing in the window behind you, watching your every move. Does that thought make your body stiffen? Does it make the skin on the back of your neck tingle? Is your every move suddenly self-conscious? Now imagine feeling this all the time. A report in SciTechDaily details the findings of an Australian professor of neuroscience, Kiley Seymour, on the effect of surveillance on the brain function of 54 participants in his experiment. 
“We know CCTV changes our behavior, and that’s the main driver for retailers and others wanting to deploy such technology to prevent unwanted behavior,” Seymour said. “However, we show it’s not only overt behavior that changes – our brain changes the way it processes information.” The study found that people who know they are being surveilled become hyperaware of faces, recognizing others faster than a control group did. Though the study’s participants were unaware of it, they were jumpy, always on the lookout to categorize someone as benign or a potential threat. Seymour told SciTechDaily that his study found the same “hypersensitivity to eye gaze in mental health conditions like psychosis and social anxiety disorder where individuals hold irrational beliefs or preoccupations with the idea of being watched.” One can imagine how this might make people in China jittery and anxious. On the other hand, we doubt this effect is being generated in the United States by our government’s gathering and reviewing of our data, even when it exposes the most personal and intimate aspects of our lives. Many Americans are unaware of this breach of their privacy. And for those who are aware, that creepy feeling of being watched is probably not associated with the abstract idea of purchased data in a server somewhere. This is a shame. The review of our data by the FBI, IRS, Department of Homeland Security, and other agencies should give you that creepy feeling, like that man standing behind you right now.

The Eyes of Luigi Mangione and a McDonald’s Employee

Shortly after the vicious public murder of Brian Thompson, CEO of United Healthcare, Juliette Kayyem of The Atlantic wrote a perceptive piece about the tech-savviness of the gunman, who mostly succeeded in hiding his face behind a mask and a hood. “The killer is a master of the modern surveillance environment; he understands the camera,” Kayyem wrote. “Thompson’s killer seems to accept technology as a given. 
Electronic surveillance didn’t deter him from committing murder in public, and he seems to have carefully considered how others might respond to his action.” At this writing, police in Pennsylvania are holding Ivy League grad Luigi Mangione as a “person of interest” in relation to the murder. Despite many media reports of incriminating details, Mangione is, of course, entitled to a presumption of innocence. But enough of the killer’s face had been shown on social media for a McDonald’s employee to call the police after seeming to recognize Mangione in those images. Whoever killed Thompson, he made a mistake – as Kayyem noted – in showing his smile while flirting with someone. This allowed a significant slice of his profile to be captured. But even when the killer was careful, his eyes and upper face were captured by a camera in a taxicab. The lesson seems to be that a professional criminal cannot fully evade what Kayyem calls a “surveillance state” made up of ubiquitous cameras. We applaud the use of this technology to track down stone-cold killers and other violent criminals. Another example: CCTV technology was put to good use in the UK in 2018, when Russian agents who had tried to kill a Russian defector and his daughter with the nerve agent Novichok were identified on video. Both survived, but a woman who came across a perfume bottle containing the toxin sprayed it on her wrist and died. When the images of the Russian operatives surfaced, they claimed they were tourists who had traveled to Salisbury, England, to see its medieval cathedral. These are, of course, excellent uses of cameras and facial recognition technology. Danger to a civil society arises when such technology is used routinely to track law-abiding civilians going about their daily tasks or engaged in peaceful protests, religious services, the practice of journalism, or some other form of ordinary business or free speech. 
This is why a search warrant should be required to access the saved product of such surveillance, to ensure it is used for legitimate purposes – catching killers, for example – and not to spy on ordinary citizens. Far from showing that the urban networks of comprehensive surveillance are riddled with holes, recent events show that they are tighter than ever. That is a good thing, until it is not. Hence the need for safeguards, starting with the Fourth Amendment.

You are probably not old enough to remember the hit 1960s television series The Prisoner, in which Patrick McGoohan played a secret agent held for interrogation in a dystopian resort on a nameless island. Whenever McGoohan’s character made it to the beach to find a small boat to row to freedom, the mysterious powers-that-be unleashed the Rover – a giant white balloon capable of blocking escapees, knocking them down, or even suffocating them. No idea, it seems, is too lurid for the Chinese Communist Party to render into reality in the service of its surveillance state. Pedestrians in China are now watching in amazement as the streets are patrolled by RT-G robots – essentially a metal ball surrounded by a tire – that subject people to facial recognition scans, possible arrest, and worse. (See it in action here.) The U.S. military toyed with a prototype, but considered it for warfare, not for civilian use. The Sun tabloid calls them, without exaggeration, “all-terrain, spherical robocops.” They are resistant to attack, even from a man wielding a baseball bat. The robots, produced by China’s Logon Technologies, are not passive observers. They are equipped with artificial intelligence that decides when and how to deploy net guns, tear gas sprayers, grenades, loudspeakers, and sound wave devices. The lethal potential of robots is not theoretical. In the United States, police routinely use robots and drones for surveillance to assess the danger of a situation. 
In one instance in 2016, a gunman in Dallas who was suspected of shooting five policemen, and who exchanged gunfire with police, was killed by a police robot. The Dallas robot was deployed to protect the police and nearby citizens. Moreover, it was fully under human control. When AI is combined with new inventions, as it is with the RT-G bots, however, the decision to use force, even lethal force, is up to an algorithm. A lot of bad ideas are becoming reality in China. But don’t expect them to stay there.

Should you be reading this blog? If you’re at work, on a computer provided by your employer, is the content of this blog sufficiently work-related for you to justify to your employer the time you’ve spent reading it? Tracking your search history and the time you spend on particular websites during working hours is among the most obvious ways employers monitor employees. Now a research paper from Cracked Labs, a non-profit based in Austria, produced with help from other non-governmental organizations and an Oxford scholar, has mapped out dozens of technologies that allow companies to track employees’ movements and activities at the office. In “Tracking Indoor Location, Movement, and Desk Occupancy in the Workplace,” Cracked Labs demonstrates how vendors are selling technology that pairs wireless networking with Bluetooth technology to follow employees in their daily movements. The former can pinpoint the location of smartphones, laptops, and other devices employees use and often carry. Bluetooth beacons can link to badges, security cameras, and video conferencing systems to track employee behavior. 
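The indoor positioning these vendors sell typically rests on simple geometry: several fixed access points each estimate their distance to a badge or phone, and the intersection of those distances yields a location. Here is a minimal trilateration sketch – the office coordinates and distances are invented for illustration, and real systems add signal-strength modeling and filtering on top of this:

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Locate a device from three fixed anchors and the estimated distance
    to each, by subtracting the circle equations pairwise and solving the
    resulting 2x2 linear system with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x1 - x2), 2 * (y1 - y2)
    c = d2**2 - d1**2 + x1**2 - x2**2 + y1**2 - y2**2
    d, e = 2 * (x1 - x3), 2 * (y1 - y3)
    f = d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2
    den = a * e - b * d
    return (c * e - b * f) / den, (a * f - c * d) / den

# Three hypothetical Wi-Fi access points in an office (coordinates in meters)
# and a badge whose distance to each has been estimated from signal strength.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
badge = (3.0, 4.0)
dists = [math.dist(ap, badge) for ap in aps]

x, y = trilaterate(aps[0], dists[0], aps[1], dists[1], aps[2], dists[2])
print(round(x, 2), round(y, 2))  # recovers the badge position
```

With a dense enough grid of access points, this arithmetic runs continuously for every device in the building, which is what turns a convenience network into a tracking system.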
Quoting marketing literature from Cisco, Cracked Labs writes: “Companies can get a ‘real time view of the behavior of employees, guests, customers and visitors’ and ‘profile’ them based on their indoor movements in order to ‘get a detailed picture of their behavior.’” Tracking 138 people with 11 Wi-Fi points, Cisco claims, generated several million location records. Not to be outdone, a European vendor, Spacewell, installs sensors in ceilings, next to doors, and even under desks to track “desk attendance.” Nicole Kobie of ITPro reports that one in five office workers is now being monitored by some kind of activity tracker. She also cites surveys showing that tracked employees are 73 percent more likely to distrust their employer, and twice as likely to be job hunting, as those who are not tracked in their workplace. Cracked Labs concludes: “Once deployed in the name of ‘good,’ whether for worker safety, energy efficiency, or just improved convenience, these technologies normalize far-reaching digital surveillance, which may quickly creep into other purposes.” It is not difficult to imagine such surveillance being used by a rogue manager for stalking, to find out who is gathering around the water cooler or kitchen, or to find something to embarrass an office rival. Even when these technologies are used for their stated purposes, we all lose something when privacy is degraded to this extent. Now, how was that for work-related content?

In the documentary Surveilled, investigative journalist Ronan Farrow delves into the Pandora’s box that is Israel’s NSO Group, a company (now on a U.S. Commerce Department blacklist) whose technologies allow regimes and cartels to transform any smartphone into a comprehensive spying device. One NSO brainchild is Pegasus, software that reports every email, text, and search performed on a smartphone while turning its camera and microphone into 24-hour surveillance devices. It’s enough to give Orwell’s Big Brother feelings of inadequacy. 
Farrow covers well-trodden ground – stories he has long followed in The New Yorker, also reported by many U.S. and British journalists, and well explored in this blog. Farrow recounts the litany of crimes in which Pegasus and NSO are implicated. These include Saudi Arabia’s murder of Jamal Khashoggi, the murder of Mexican journalists by the cartels, and the surveillance of pro-independence politicians in Catalonia and their extended families by Spanish intelligence. In the latter case, Farrow turns to Toronto-based Citizen Lab to confirm that one Catalonian politician’s sister and parents were comprehensively surveilled. The parents were physicians, so Spanish intelligence swept up the confidential information of their patients as well. While the reality portrayed by Surveilled is a familiar one to readers of this blog, it drives home the horror of NSO technology as only a documentary with high production values can do. Still, this documentary could have been better. The show is marred by too many reaction shots of Farrow, who frequently mugs for the camera. It also leaves unasked follow-up questions for Rep. Jim Himes (D-CT), Ranking Member of the House Intelligence Committee. In his sit-down with Farrow, Himes made the case that U.S. agencies need to have copies of Pegasus and similar technologies, if only to understand the capabilities of bad actors like Russia and North Korea. Fair point. But Rep. Himes seems oblivious to the dangers of such comprehensive spyware in domestic surveillance. Rep. Himes says he is not aware of Pegasus being used domestically. Yet it was deployed by Rwandan spies to surveil the phone of U.S. resident Carine Kanimba in her meetings with the U.S. State Department. Kanimba was looking for ways to liberate her father, settled in San Antonio, who was lured onto a plane while abroad and kidnapped by Rwandan authorities. Rep. Himes says he would want the FBI to have Pegasus at its fingertips in case one of his own daughters were kidnapped. 
Even civil libertarians agree there should be exceptions for such “exigent” and emergency circumstances, in which even a warrant requirement should not slow down investigators. The FBI can already track cellphones and the movements of their owners. If the FBI were to deploy Pegasus, however, it would give the bureau redundant and immense power to video record Americans in their private moments, as well as to record audio of their conversations. Rep. Himes is unfazed. When Farrow asks how Pegasus should be used domestically, Rep. Himes replies that we should “do the hard work of assessing that law enforcement uses it consistent with our civil liberties.” He also spoke of “guardrails” that might be needed for such technology. Such a guardrail, however, already exists. It is called the Fourth Amendment of the Constitution, which mandates the use of probable cause warrants before the government can surveil the American people. But even with probable cause, Pegasus is too robust a spy tool to trust the FBI to use domestically. The whole NSO-Pegasus saga is just one part of a much bigger story in which privacy has been eroded. Federal agencies, ranging from the FBI to the IRS and Homeland Security, purchase the most intimate and personal digital data of Americans from third-party data brokers, and review it without warrants. Congress is even poised to renege on a deal to narrow the definition of an “electronic communications service provider,” which would obligate any office complex, fitness facility, or house of worship that offers Wi-Fi connections to secretly turn over Americans’ communications without a warrant. The sad reality is that Surveilled only touches on one of many crises in the destruction of Americans’ privacy. Perhaps HBO should consider making this a series. They would never run out of material.

Catastrophic ‘Salt Typhoon’ Hack Shows Why a Backdoor to Encryption Would be a Gift to China
11/25/2024
Former Sen. Patrick Leahy’s Prescient Warning

It is widely reported that the breach of U.S. telecom systems allowed China’s Salt Typhoon group of hackers to listen in on the conversations of senior national security officials and political figures, including Donald Trump and J.D. Vance during the recent presidential campaign. In fact, they may still be spying on senior U.S. officials. Sen. Mark Warner (D-VA), Chairman of the Senate Intelligence Committee, said Thursday that China’s hack was “the worst telecom hack in our nation’s history – by far.” Warner, himself a former telecom executive, said that the hack across the systems of multiple internet service providers is ongoing, and that the “barn door is still wide open, or mostly open.” The only surprise, really, is that this was a surprise. When our government creates a pathway to spy on American citizens, that same pathway is sure to be exploited by foreign spies. The FBI believes the hackers entered the system that enables court-ordered taps on the voice calls and texts of Americans suspected of a crime. These systems are put in place by AT&T, Verizon, and other telecoms to allow the government to search for evidence, a practice authorized by the 1994 Communications Assistance for Law Enforcement Act. Thus the system of domestic surveillance used by the FBI and law enforcement has been reverse-engineered by Chinese intelligence to turn that system back on our government. This point is brought home by FBI documents PPSA obtained through a Freedom of Information Act request, which reveal a prescient question put to FBI Director Christopher Wray by then-Sen. Patrick Leahy in 2018. The Vermont Democrat, now retired, anticipated the recent catastrophic breach of U.S. telecom systems. In his question to Director Wray, Sen. Leahy asked: “The FBI is reportedly renewing a push for legal authority to force decryption tools into smartphones and other devices. 
I am concerned this sort of ‘exceptional access’ system would introduce inherent vulnerabilities and weaken security for everyone …” The New York Times reports that according to the FBI, the Salt Typhoon hack resulted from China’s theft of passwords used by law enforcement to enact court-ordered surveillance. But Sen. Leahy correctly identified the danger of creating such domestic surveillance systems and the next possible cause of an even more catastrophic breach. He argued that a backdoor to encrypted services would provide a point of entry that could eventually be used by foreign intelligence. The imperviousness of encryption was confirmed by authorities who believe that China was not able to listen in on conversations over WhatsApp and Signal, which encrypt consumers’ communications. While China’s hackers could intercept text messages between iPhones and Android phones, they could not intercept messages sent between iPhones over Apple’s iMessage system, which is also encrypted. Leahy asked another prescient question: “If we require U.S. technology companies to build ‘backdoors’ into their products, then what do you expect Apple to do when the Chinese government demands that Apple help unlock the iPhone of a peaceful political or religious dissident in China?” Sen. Leahy was right: Encryption works to keep people here and abroad safe from tyrants. We should heed his warning – carving a backdoor into encrypted communications creates a doorway anyone might walk through.

A suspicious husband or wife can now examine the route history of a family car or the location data of a smartphone to track a spouse’s movements. We tend to think of location history surveillance as a uniquely 21st century form of snooping. In an amusing article in the MIT Press Reader, Dartmouth scholar Jacqueline D. Wernimont writes that such surveillance is older than we think. 
For example, The Hartford Daily Courant in 1879 reported: “A Boston wife softly attached a pedometer to her husband when, after supper, he started to ‘go down to the office and balance the books.’ On his return, fifteen miles of walking were recorded. He had been stepping around a billiard table all evening.” In a twist worthy of today’s spy agencies, Wernimont also reports that a U.S. admiral in 1895 gave junior watch officers common pocket watches with pedometers hidden inside. The results showed that the ensigns had been asleep or resting most of the night. A night watchman at a railroad yard was given a pedometer to track his movements. It was later discovered that he evaded his responsibilities by sleeping while the pedometer was attached to a moving piston rod. The use of pedometers was an early precursor of the surveillance tools used today by employers to track the movements, browsing, communications, and daily routines of their workers. Wernimont writes: “As the pedometer became a vector for surveillance by those in power, people who were able quickly developed hacks designed to frustrate such efforts.” The problem with modern technology is that it is much harder to thwart, or even to anticipate when and how one is being watched. No piston rod will save us.

Vice presidential candidate J.D. Vance (R-OH) told Joe Rogan that backdoor access to U.S. telecoms likely allowed the Chinese to hack American broadband networks, compromising the data and privacy of millions of Americans and businesses. “The way that they hacked into our phones is they used the backdoor telecom infrastructure that had been developed in the wake of the Patriot Act,” Sen. Vance told Rogan on his podcast last weekend. That law gave U.S. law enforcement and intelligence agencies access to the data and operations of the telecoms that manage the backbone of the internet. 
Chris Jaikaran, a specialist in cybersecurity policy, added in a recently released Congressional Research Service report about a cyberattack by a group known as Salt Typhoon: “Public reporting suggests that the hackers may have targeted the systems used to provide court-approved access to communication systems used for investigations by law enforcement and intelligence agencies. PRC actors may have sought access to these systems and companies to gain access to presidential candidate communications. With that access, they could potentially retrieve unencrypted communication (e.g., voice calls and text messages).” Thus, the Chinese were able to use systems developed for U.S. law enforcement and intelligence agencies to see any U.S. national security order and, presumably, any government extraction of the intercepted communications of Americans and foreign targets under FISA Section 702. China doesn’t need a double agent in the style of Kim Philby. Our own Patriot Act mandates have made it easier for hostile regimes to find the keys to all of our digital kingdoms – including the private conversations of Vice President Kamala Harris and former President Donald Trump. As alarming as that is, it is hard to fully appreciate the dangers of such a penetration. The Chinese have chosen not to use their presence deep in U.S. systems to “go kinetic” by sabotaging our electrical grid and other primary systems. The possible consequences of such deep hacking are highlighted in a joint U.S.-Israel advisory that details the actions against Israel that were enabled when an Iranian group, ASA, wormed its way into foreign hosting providers. ASA hackers manipulated a dynamic digital display in Paris ahead of the 2024 Summer Olympics to denounce Israel and the participation of Israeli athletes on the eve of the Games. ASA infiltrated surveillance cameras in Israel and Gaza, searching for weak spots in Israeli defenses. 
Worst of all, the hack enabled Hamas to contact the families of Israeli hostages in order to “cause additional psychological effects and inflict further trauma.” The lesson is that when our own government orders companies to develop backdoors into Americans’ communications, those doors can be swung open by malevolent state actors as well. Sen. Vance’s comments indicate that there is a growing awareness of the dangers of government surveillance – an insight that we hope increases Congressional support for surveillance reform when FISA Section 702 comes up for renewal in 2026.

Why Signal Refuses to Give Government Backdoor Access to Americans’ Encrypted Communications
11/4/2024
Signal is an instant messenger app operated by a non-profit to enable private conversations between users, protected by end-to-end encryption. Governments hate that. From Australia, to Canada, to the EU, to the United States, democratic governments are exerting ever-greater pressure on companies like Telegram and Signal to give them backdoor entry into the private communications of their users. So far, these instant messaging companies don’t have access to users’ messages, chat lists, groups, contacts, stickers, profile names, or avatars. If served with a probable cause warrant, these tech companies couldn’t respond if they wanted to. The Department of Justice, under both Republican and Democratic administrations, continues to press for backdoors to breach the privacy of these communications, citing the threat of terrorism and human trafficking as the reason. What could be wrong with that? In 2020, Martin Kaste of NPR told listeners that “as most computer scientists will tell you, when you build a secret way into an encrypted system for the good guys, it ends up getting hacked by the bad guys.” Kaste’s statement turned out to be prescient. AT&T, Verizon, and other communications carriers complied with U.S. government requests and placed backdoors on their services. As a result, a Chinese hacking group with the moniker Salt Typhoon found a way to exploit these points of entry into America’s broadband networks. In September, U.S. intelligence revealed that China gained access through these backdoors to surveil the internet traffic and data of millions of Americans and U.S. businesses of all sizes. The consequences of this attack are still being evaluated, but it is already regarded as among the most catastrophic breaches in U.S. history. There are more than just purely practical reasons for supporting encryption. 
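The practical point – a provider that never holds the key has nothing useful to hand over, warrant or no warrant – can be illustrated with a toy end-to-end scheme. This sketch uses a one-time pad built from Python’s standard library purely for illustration; Signal’s actual protocol is far more sophisticated:

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte; applying it twice decrypts."""
    assert len(key) == len(data)
    return bytes(k ^ b for k, b in zip(key, data))

# Alice and Bob share a random key; the relay server only ever sees ciphertext.
message = b"meet at the usual place"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(key, message)    # all a server (or a wiretap) would see
recovered = xor_cipher(key, ciphertext)  # only a key holder can do this
assert recovered == message
```

Without the key, the ciphertext is indistinguishable from random bytes. A “backdoor” necessarily means someone other than Alice and Bob holds key material, and whoever steals that material inherits the access, which is the dynamic Salt Typhoon exploited in the telecoms’ lawful-intercept systems.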
In a podcast conversation with Robert Safian, former editor-in-chief of Fast Company, Meredith Whittaker, president of Signal, delves into the deeper philosophical issue of what society would be like if there were no private communications at all. “For hundreds of thousands of years of human history, the norm for communicating with each other, with the people we loved, with the people we dealt with, with our world, was privacy,” Whittaker told Safian. “We walk down the street, we’re having a conversation. We don’t assume that’s going into some database owned by a company in Mountain View.” Today, moreover, the company in Mountain View transfers the data to a data broker, who then sells it – including your search history, communications, and other private information – to about a dozen federal agencies that can hold and access your information without a warrant. When it comes to our expectations of privacy, we are like the proverbial frogs being boiled by degrees. Whittaker says that this is a “trend that really has crept up in the last 20, 30 years without, I believe, clear social consent that a handful of private companies somehow have access to more intimate data and dossiers about all of us than has ever existed in human history.” Whittaker says that Signal is “rebuilding the stack to show” that the internet doesn’t have to operate this way. She concludes that we don’t have to “demonize private activity while valorizing centralized surveillance in a way that’s often not critical.” We’re glad that a few stalwart tech companies, from Apple and its iPhone to Signal, refuse to cave on encryption. And we hope there are more, not fewer, such companies in the near future that refuse to expose their customers to hackers and government snooping. 
“We don’t want to be a single pine tree in the desert,” Whittaker says, adding that she wants to “rewild that desert so a lot of pine trees can grow.”

We’re all resigned to the need to go through security at high-profile sporting and cultural events, just as we do at the airport. The American Civil Liberties Union is raising the question – will that level of scrutiny be the new normal at the mall, at open-air tourist attractions, at outdoor concerts, and just plain walking around town? The Department of Homeland Security (DHS) is investing in research and development to “assess soft targets and address security gaps” with new technology to track people in public places. It is funding SENTRY – Soft Target Engineering to Neutralize the Threat Reality. SENTRY will apply artificial intelligence to the “integration of data from multiple sources,” which no doubt will include facial recognition scans of everyone in a given area, to assign each a “threat assessment.” We do not dismiss DHS’s concern. The world has no lack of violent people, and our country is full of soft targets. Just hark back to the deranged shooter in 2017 who turned the Route 91 Harvest music festival in Las Vegas into a shooting gallery. He killed 60 people and wounded more than 400. A similar act by a terrorist backed by a malevolent state could inflict even greater casualties. But we agree with the ACLU’s concern that such intense inspection of Americans going about their daily business could lead to the “airportization” of America, in which we are always in a high-security zone whenever we gather. The ACLU writes that “security technology does not operate itself; people will be subject to the petty authority of some martinet guards who are constantly stopping them based on some AI-generated flag of suspicion.” We would add another concern. Could SENTRY be misused, just as FISA Section 702 and other surveillance authorities have been misused? 
What is to keep the government from accessing SENTRY data for warrantless political surveillance, whether against protestors or disfavored groups targeted by biased FBI agents? If this technology is to be deployed, guardrails are needed. PPSA seconds ACLU’s comment to the watchdog agency, the Privacy and Civil Liberties Oversight Board (PCLOB), that asks it to investigate AI-based programs as they develop. Congress should watch the results of PCLOB’s efforts and follow up with legal guardrails to prevent the misuse of SENTRY and similar technologies. Doxing – the practice of exposing a person’s location and home address – can have deadly consequences. This lesson was brought home in July 2020 when a deranged man with a grudge against federal judge Esther Salas went to her New Jersey home dressed as a deliveryman, carrying a gun. The judge’s 20-year-old son, Daniel Anderl, a Catholic University student, opened the door only to be shot dead as he moved forward to shield his parents. Out of this tragedy came Daniel’s Law, a New Jersey statute advocated by Judge Salas to allow law enforcement, government personnel, judges and their families to have their information completely removed from commercial data brokers. We’re accustomed to the idea that ad-selling social media platforms and government can track us. Now Krebs on Security is reporting that a new digital service neuters this law and exposes potentially any American to location tracking by any subscriber. This tracking service is enabled by Babel Street, which has a core product that Krebs writes “allows customers to draw a digital polygon around nearly any location on a map of the world, and view a . . . 
time-lapse history of the mobile devices coming in and out of the specified area.” Krebs reports that a private investigator demonstrated the danger of this technology by discreetly using it to determine the home address and daily movements of mobile devices belonging to multiple New Jersey police officers whose families have already faced significant harassment and death threats. This is just one more sign that in-depth surveillance that was once the province of giant social media companies and state actors is falling into the hands of garden-variety stalkers, snoops, and criminals. PPSA calls on New Jersey legislators, who are ideally positioned to lead a national response to this technology, to develop laws and policy solutions that continue to protect law enforcement, judges, and everyday citizens in their daily rounds and in their homes. Supreme Court Justice Oliver Wendell Holmes observed that anyone “who respects the spirit as well as the letter of the Fourth Amendment would be loath to believe that Congress intended to authorize one of its subordinate agencies to sweep all our traditions into the fire to direct fishing expeditions into private papers on the possibility that they may disclose evidence of crime.” A century after Justice Holmes delivered that warning, the U.S. Securities and Exchange Commission is doing just that. This agency is methodically sweeping all our traditions into the fire to direct fishing expeditions that treat every investor as a criminal suspect. The good news is that the constitutionality of the SEC’s program is on trial in a case now before a federal judge in Waco, Texas. Here’s the background: Historically, when the SEC has suspected someone of insider trading, it had to issue an investigative subpoena. Then in 2010, the market suffered the “flash crash” – a trillion-dollar decline caused by technical glitches that lasted for 36 minutes. 
The SEC responded to this technical glitch by proposing Rule 613, which established the Consolidated Audit Trail (CAT), a database that collects not just investors’ trades, but also their personally identifiable information. This “solution” had nothing to do with the crash, but it perfectly illustrates former Chicago Mayor Rahm Emanuel’s dictum that “you never want a serious crisis to go to waste.” Rule 613 requires self-regulatory organizations, like private stock exchanges, to collect every detail about trades in securities on a U.S. exchange. It also includes confidential data on more than 100 million private investors, making it the largest database outside of the National Security Agency. This database includes investors’ names, dates of birth, taxpayer identification numbers, Social Security numbers, and more. Now two Texas investors, in affiliation with the National Center for Public Policy Research, are suing the SEC for this massive violation of privacy. Their lawsuit, filed by the New Civil Liberties Alliance, could be required reading for law students seeking to understand the application of our constitutional rights, beginning with the Fourth Amendment.
The lawsuit makes a convincing case that the U.S. Supreme Court’s 2018 Carpenter decision – which held that the government violates the Fourth Amendment whenever it seeks a suspect’s cellphone location history without a warrant – should make this case against CAT a slam-dunk. After all, the plaintiffs assert that unlike the issue in Carpenter, “with Rule 613 SEC does not need an investigative predicate, much less a court order, to obtain and analyze private information, nor is the information limited to any particular person or time frame.” Even if a federal judge declares CAT to be unconstitutional, however, it will only strike down one of many intrusive violations of Americans’ financial privacy by federal agencies. These include a new requirement that all business owners file “beneficial ownership” forms, for which any American business owner can face two years in prison for a clerical mistake, and the U.S. Treasury’s Financial Crimes Enforcement Network’s snooping into Americans’ financial transactions with the coerced cooperation of 650 private financial institutions. Once the election is over, Congress should pass the “Protecting Investors’ Personally Identifiable Information Act,” introduced by Sen. John Kennedy (R-LA) and Rep. Barry Loudermilk (R-GA), which would allow the SEC to obtain personally identifiable information only by requesting it on a case-by-case basis. As the risks of the SEC’s reckless program become clearer, more Members of Congress should embrace another Holmes dictum: “State interference is an evil, where it cannot be shown to be a good.”

Police Chief: “A Nice Curtain of Technology”

We’ve long followed the threat to privacy from the proliferation of automated license plate readers (ALPRs). Now the Institute for Justice has filed a lawsuit against the Norfolk, Virginia, police department for its use of this Orwellian technology. 
More than 5,000 communities across the country have installed the most popular ALPR brand, Flock, which records and keeps the daily movements of American citizens driving in their cars. Norfolk is an enthusiastic adopter of Flock technology, with a network of 172 advanced cameras that make it impossible for citizens to go anywhere in their city without being followed and recorded. Flock applies artificial intelligence software to its national database of billions of images, adding advanced search and intelligence functions. “This sort of tracking that would have taken days of effort, multiple officers, and significant resources just a decade ago now takes just a few mouse clicks,” the Institute for Justice tells a federal court in its lawsuit. “City officers can output a list of locations a car has been seen, create lists of cars that visited specific locations, and even track cars that are often seen together.” No wonder the Norfolk police chief calls Flock’s network “a nice curtain of technology.” The Institute for Justice has a different characterization, calling this network “172 unblinking eyes.” Americans are used to the idea of being occasionally spotted by a friend or neighbor while on the road, but no one expects to have every mile of one’s daily movements imaged and recorded. The nefarious nature of this technology is revealed in the concerns of the two Norfolk-area plaintiffs named in the lawsuit.
“If the Flock cameras record Lee going straight through the intersection outside his neighborhood, for example, the NPD (Norfolk Police Department) can infer that he is going to his daughter’s school. If the cameras capture him turning right, the NPD can infer that he is going to the shooting range. If the cameras capture him turning left, the NPD can infer that he is going to the grocery store […] “Lee finds all of this deeply intrusive. Even if ordinary people see him out and about from time to time, Lee does not expect and does not want people – much less government officials – tracking his every movement over 30 days or more and analyzing that data the way the Flock cameras allow the NPD and other Flock users to do.”
“As a healthcare worker, Crystal is legally and ethically required to protect her clients’ privacy,” the filing states. “She also understands that her clients expect her to maintain their confidentiality … If she failed to live up to those expectations, her business would suffer.” Both plaintiffs are concerned that another Flock user, perhaps a commercial entity, might misuse the records of their movements. They are also worried about “the potential that Defendants, Flock users, or third-party hackers could misuse her information.” No warrants or permissions are needed for Norfolk officers to freely access the system. The Institute for Justice was shrewd in its selection of venues. Norfolk is in the jurisdiction of the federal Fourth Circuit Court of Appeals, which in 2021 struck down aerial surveillance imagery over Baltimore in a case called Leaders of a Beautiful Struggle v. Baltimore Police Department. “The Beautiful Struggle opinion was about a relatively, comparatively, crude system, just a drone that was flying in the air for 12 hours a day that at most had a couple of pixels that made it hard to identify anyone,” Institute for Justice attorney Robert Frommer told 404 Media. “By contrast, anyone with the Flock cameras has a crystal-clear record of your car, a digital fingerprint that can track anywhere you go. The police chief even said you can’t really go anywhere in Norfolk without being caught by one of these cameras.” The consistent principle from the Fourth Circuit’s precedent should make it clear, in the words of the Institute for Justice, that tracking a driver “to church, to a doctor’s office, to a drug-abuse treatment clinic, to a political protest,” is unconstitutional. 
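The search functions described in the lawsuit – listing every location where a car has been seen, or finding cars that are often seen together – reduce to simple queries once sightings land in a database. Here is a minimal sketch in Python and SQLite; the schema, camera names, and plates are invented for illustration, since Flock’s actual data model is not public:

```python
import sqlite3

# Invented schema for an ALPR sightings table -- illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (plate TEXT, camera TEXT, seen_at TEXT)")
conn.executemany(
    "INSERT INTO sightings VALUES (?, ?, ?)",
    [
        ("ABC1234", "cam_school",  "2024-09-03 08:01"),
        ("ABC1234", "cam_range",   "2024-09-03 17:30"),
        ("XYZ9876", "cam_range",   "2024-09-03 17:31"),
        ("ABC1234", "cam_grocery", "2024-09-04 09:15"),
    ],
)

# "Output a list of locations a car has been seen":
locations = [row[0] for row in conn.execute(
    "SELECT DISTINCT camera FROM sightings WHERE plate = ?", ("ABC1234",))]

# "Track cars that are often seen together": pairs of plates captured by the
# same camera (a real system would also bound the time window).
together = conn.execute(
    "SELECT a.plate, b.plate FROM sightings a JOIN sightings b "
    "ON a.camera = b.camera AND a.plate < b.plate"
).fetchall()
```

With 172 cameras feeding such a table around the clock, each of these “few mouse clicks” is a one-line query.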
The Project for Privacy and Surveillance Accountability recently submitted a series of FOIA requests to law enforcement and intelligence agencies seeking critical information on how the agencies handle data obtained through the use of cell-site simulators, also known as Stingrays or Dirtboxes, which impersonate cell towers and collect sensitive data from wireless devices. Specifically, PPSA submitted requests to DOJ, CIA, DHS, NSA, and ODNI. These requests focus on what happens after the government collects this data: each “seeks information on how, once the agency obtains information or data from a cell-site simulator, the information obtained is used.” We are particularly interested in the agencies’ policies for data retention, use, and deletion, especially for data collected from individuals who are not the target of surveillance. PPSA has long been concerned with the invasive nature of these surveillance tools, which capture not only targeted individuals’ data but also data from anyone nearby. As we stated in a 2021 FOIA request, “this technology gives the government the ability to conduct sweeping dragnets of metadata, location, and even text messages from anyone within a geofenced area.” As our new requests put it, “PPSA wishes to know what policies govern such use and what policies, if any, are in place to protect the civil liberties and privacy of those whose data might happen to get swept up in a cell-site simulator’s data collection activities.” Stingrays represent a significant intrusion into personal privacy, and we are committed to holding the government accountable for its use of such tools. 
By pursuing these requests, we aim to inform the public about the scope and potential risks of the agencies’ surveillance activities, and to push for greater safeguards over Americans’ private information. PPSA will continue to push for transparency, and we will keep the public informed of our efforts. License plate readers (LPRs), originally intended for traffic enforcement, are evolving into a powerful surveillance tool capturing far more than just vehicle data. As a WIRED exposé details, these AI-powered cameras are now recording political signs, personal bumper stickers, and even individuals outside their homes, all while logging precise locations. This data is stored in massive databases managed by private companies like DRN Data and shared with law enforcement and private entities, posing a significant privacy threat to citizens across the United States.
What was once a tool for tracking vehicles is now quietly tracking people, their views, and personal lives in disturbing detail. The expansion of LPR technology is a troubling example of how mass surveillance is becoming normalized, not just by governments but by private companies. DRN Data and its parent company, Motorola Solutions, have amassed over 15 billion vehicle sightings, recording as many as 250 million per month. These figures are staggering, yet they are framed as necessary for public safety—tracking stolen cars, for example, or assisting in Amber Alerts. However, what we are seeing is far from mere traffic monitoring. Lawn signs, bumper stickers, and even images of people wearing political messages are being captured, often without their knowledge or consent, and stored in vast databases. The real danger comes from the unchecked power that these private companies wield. LPR companies claim to comply with all applicable laws, but the scale and granularity of the data they collect far exceed what most people expect when they step outside their homes. This surveillance, driven by corporate profit motives, is largely happening without public oversight. Private companies are not held to the same standards as government agencies in terms of transparency and accountability, making it difficult to understand how, when, or by whom this data is being used. This raises the prospect of personal data being sold, commercialized, or misused by third parties. The public, meanwhile, has little to no recourse to challenge this form of surveillance or to opt out. The potential for abuse is vast. As the article notes, LPR data has already been misused by law enforcement and federal agencies like ICE, with some officers stalking or harassing individuals. The system is ripe for further exploitation, especially in today's politically charged environment. 
Imagine a database that allows anyone with access—whether police, private investigators, or corporations—to search for images of homes or vehicles displaying political messages, such as support for Planned Parenthood or Trump. This information could easily be weaponized to harass, intimidate, or target people for their political views. The idea that one's political affiliations could be logged and searched without consent is a violation of basic democratic principles. This situation blurs the line between public and private surveillance, creating a system where private companies can collect data traditionally reserved for law enforcement. It’s not just the government watching—private entities now have their own surveillance networks. People might accept the presence of CCTV cameras as a deterrent to crime, but few expect that their personal political signs, bumper stickers, or even their faces will be cataloged and available for search in national databases. Civil liberties groups like the ACLU have long warned that these technologies are far too invasive for the tasks they claim to perform, and their expansion into everyday life should concern us all. As we’ve previously stated, mass surveillance systems are creeping further into the private lives of citizens, often disguised as safety measures. LPR technology represents a major leap forward in this regard, allowing for an unprecedented level of data collection and surveillance that threatens not only privacy but also free expression. What started as a tool for monitoring traffic has become a tool for monitoring people, and unless there is more oversight, this technology will continue to erode the boundaries between public safety and personal freedom. A new investigation from The Washington Post reveals that police routinely use facial recognition software to identify and arrest suspects, yet fail to disclose it to the defendants themselves. 
This, despite the fact that the still-new technology has led to numerous documented false arrests. The Washington Post spoke with 100 police departments across 15 states, although only 30 of them provided records from cases in which facial recognition was used. In fact, the investigation found that the police often overtly masked their use of the software, recording in reports, for example, that suspects were identified “through investigative means.” There’s reason for that; facial recognition software is notoriously fallible. The article references at least seven cases of wrongful arrests stemming from the use of the technology. Six of those seven were Black Americans. The Post reports, “[f]ederal testing of top facial recognition software has found the programs are more likely to misidentify people of color, women and the elderly because their faces tend to appear less frequently in data used to train the algorithm….” Last year, we wrote about the case of Randall Reid, a Black man from Georgia arrested for allegedly stealing handbags in Louisiana. The only problem: Reid had never even been to Louisiana. He was a victim of misidentification. And that was all the police needed to hold him for close to a week in jail. Generally speaking, in the criminal context, facial recognition software works by comparing surveillance footage with publicly available photos online. Companies like Clearview AI contract with law enforcement agencies, providing access to billions of photos scraped from Facebook, X, Instagram, and other social media platforms. And despite access to so much online material, the results are often faulty. Which is all the more reason that such evidence needs to be disclosed in an investigative context. Per the Post, “Clearview search results produced as evidence in one Cuyahoga County, Ohio, assault case included a photo of basketball legend Michael Jordan and a cartoon of a Black man.” Spoilers: neither image depicted the culprit. 
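Under the hood, this kind of matching typically reduces to nearest-neighbor search over face “embeddings” – numeric vectors distilled from images – with a match declared whenever similarity crosses a tunable threshold, which is exactly where misidentification creeps in. A simplified sketch with synthetic vectors (no real face model is involved; the embeddings and names below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, ~0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic 128-dimensional "embeddings" standing in for a face model's output.
true_face = rng.normal(size=128)
probe = true_face + rng.normal(scale=0.3, size=128)       # same person, grainy capture
look_alike = true_face + rng.normal(scale=0.9, size=128)  # different person, similar features
stranger = rng.normal(size=128)                           # unrelated person

gallery = {"subject": true_face, "look_alike": look_alike, "stranger": stranger}
scores = {name: cosine(probe, vec) for name, vec in gallery.items()}
best = max(scores, key=scores.get)

# A system that accepts any score above a loose threshold likely flags the
# look-alike as well -- the seed of a wrongful arrest.
flagged = [name for name, s in scores.items() if s > 0.4]
```

The similarity gap between the true subject and a look-alike shrinks as capture quality drops, which is why grainy surveillance stills are precisely where these systems fail.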
The real culprit in this case is a legal system that is decidedly behind the times in reacting and responding to technological shifts. Some are catching up; in 2022, the ACLU won a legal victory against Clearview mandating that the company adhere to the Illinois Biometric Information Privacy Act (BIPA). The law requires companies that collect, capture, or obtain a biometric identifier of an Illinois resident to first notify that person and obtain his or her written consent. But we have a long way to go in establishing vigorous protections against the misuse and masking of “iffy” new technologies like facial recognition. Due process requires we do better.

Hackers Demonstrate They Can Remotely Hack Kia Vehicles – Just By Scanning a License Plate
10/8/2024
A small but determined group of security researchers revealed that they discovered a way to remotely hack Kia vehicles, shining a light on what has become a systemic problem for modern car manufacturers: web security. An article over at Wired details how the group of hackers exploited a web portal flaw that allowed them to “reassign control of the internet-connected features of most modern Kia vehicles,” granting them the ability to “track that car’s location, unlock the car, honk its horn, or start its ignition at will.” It’s the latest demonstration of just how lacking web security is for many modern vehicles. Back in 2023, the same group published extensive findings showing that they could, to some degree or another, hack cars manufactured by Honda, Infiniti, Nissan, Acura, Mercedes-Benz, Hyundai, Genesis, BMW, Rolls-Royce, Ferrari, Ford, Porsche, Toyota, and more. The way it works, broadly, is that by leveraging weaknesses in a car company’s web portal, a hacker can send direct commands to the site’s API, the programming interface through which software reads and modifies the service’s data. In Kia’s case, the hackers were able to essentially pretend to be dealers, who often manage connected car features remotely. Most alarmingly, they were able to do so just by reading a license plate and then looking up the associated VIN via PlateToVin.com. The whole process takes about 30 seconds. Said one researcher, “Dealers have way too much power, even over vehicles that don’t touch their lot.” To say nothing of the possibilities for harassment and theft, the Kia debacle proves how easy it is to surveil drivers’ movements with just a little tech savvy and elbow grease. Virtually all modern cars have internet-connected devices, and it appears many of them also have lax security features. Kia wasn’t even checking whether a user of its web portal was a consumer or a dealer. 
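The core flaw – a portal that never verifies a privilege claim – is easy to model. The sketch below is a toy reconstruction of that logic, not Kia’s actual code or API; the VIN, field names, and handler are hypothetical:

```python
# Hypothetical vehicle record, keyed by a made-up VIN.
VEHICLES = {"5XXGT4L30LG000001": {"owner": "alice", "locked": True}}

def handle_command(session: dict, vin: str, command: str) -> dict:
    """Broken handler: privilege flows from a client-supplied role claim."""
    if session.get("role") != "dealer":  # the only "check" -- never verified server-side
        return {"ok": False, "error": "forbidden"}
    car = VEHICLES[vin]
    if command == "unlock":
        car["locked"] = False
    return {"ok": True, "locked": car["locked"]}

# An attacker who resolved a VIN from a license plate simply claims the role:
resp = handle_command({"user": "attacker", "role": "dealer"},
                      "5XXGT4L30LG000001", "unlock")
# The fix is to check the *authenticated* identity against a server-side
# dealer registry before honoring any privileged command.
```

Authorization that trusts client-asserted roles is a classic broken-access-control pattern; the remedy is to derive privileges from server-side records tied to the authenticated account.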
Kia, as of August, had apparently not fixed the problem, which hardly constitutes “movement that inspires.” But the fact is, all car companies need to be thinking about this issue – before the real criminals catch on. For years, big tech companies – Meta and Google in particular – have developed but declined to commercialize facial recognition technology that could dox people in real time. The media is now agog about reports that two Harvard students conducted an experiment by rigging a pair of Meta’s Ray-Ban smart glasses with facial recognition software. Combined with automated social media analysis, the glasses identified people on the subway. Joseph Cox in 404 Media writes that such glasses can “identify a stranger on the street, where they work, where they went to school, where they live, and their contact information.” One of the students told Cox: “We would show people photos of them from kindergarten, and they had never even seen the photo before.” While we don’t expect any reputable company to offer this product, someone will undoubtedly combine these off-the-shelf technologies, just as vendors supply kits to turn semi-automatic weapons into fully automatic weapons or offer silencers for guns. Armed with this technology, your neighborhood creep could easily spot a woman walking down the street and be there when she arrives at her front doorstep. For all the valid concerns about how such creeps, perps, stalkers, and assorted snoops might use this technology, the current danger comes from our own government. By 2021, facial recognition company Clearview AI had scraped 10 billion images off the internet to identify people in fulfilling contracts with more than 3,000 law enforcement organizations. This is enormous power that allows federal, state, and local agencies to instantly know everything about you at a glance. With a few clicks, an agent can know how you voted, who you date, where you’ve been, and where you’re going. 
Action is needed to keep this technology out of consumers’ hands, just as Congress has done with machine guns and other fully automatic weapons. Laws to restrict and punish facial doxing may not fully stop it, but such laws would be an important start. Beyond that, guardrails must be put in place to prevent government agents from doxing people at a glance. A probable cause warrant requirement would be a good start. Sen. Mike Lee (R-UT) is advancing his new Saving Privacy Act to protect Americans’ personal financial information from warrantless snooping by federal agencies. “The current system erodes the privacy rights of citizens, while doing little to effectively catch true financial criminals,” Sen. Lee said. The bill’s co-sponsor, Sen. Rick Scott (R-FL), added: “Big government has no place in law-abiding Americans’ personal finances. It is a massive overreach of the government and a gross violation of their privacy.”
Are these two senators paranoid? Or are they reacting to genuine “massive overreach” from a government that already illicitly spies on Americans’ personal finances? Consider what PPSA has reported in the last three years:
“Traditionally, Americans’ financial holdings are kept between them and their broker, not them, their broker, and a massive government database,” state auditors and treasurers wrote in a recent letter to House Speaker Mike Johnson. “The only exception has been legal investigations with a warrant.”
TRAC – the Transaction Record Analysis Center – sucks in wire transfers within the United States between American citizens, as well as transfers by those sending or receiving money from abroad. Sen. Ron Wyden (D-OR) told The Wall Street Journal that TRAC lets the government “serve itself an all-you-can-eat buffet of Americans’ personal financial data while bypassing the normal protections for Americans’ privacy.”
Could that actually happen? It did across the border, when the Canadian government used emergency powers to debank truckers engaged in a political protest. At home, the tracking of Americans’ spending is a Fourth Amendment violation that inevitably leads to the degradation of the First Amendment.
Sen. Lee’s bill counters this financial surveillance state by repealing many of the reporting requirements of the Bank Secrecy Act. It also repeals the Corporate Transparency Act (which forces small businesses to reveal their ownership), closes the SEC’s database on Americans’ trades, prohibits the creation of a Central Bank Digital Currency, and requires congressional approval before any agency can create a database that collects personally identifiable information of U.S. citizens. Finally, Sen. Lee’s Saving Privacy Act would institute punishments for federal employees who release Americans’ protected financial information, while establishing a private right of action for Americans and financial institutions harmed when their privacy is compromised by the government. The Saving Privacy Act is a landmark bill that deserves to become the basis of debate and action in the next Congress. A whitepaper from social media company Meta presents a startling new reality in bland language. It claims that magnetoencephalography (MEG) neural imaging technology “can be used to decipher, with millisecond precision, the rise of complex representations generated in the brain.”
In layman’s terms, AI can crunch a person’s brainwaves and apply an image generator to create an astonishingly accurate representation of what a person has seen. Paul Simon was right: these really are the days of miracles and wonders – and also of new threats to personal privacy. (If you want to see this science-fictional-sounding technology in action, check out these images from science.org to see how close AI is to representing images extrapolated from brain waves.) Until now, even in a total surveillance state such as North Korea or China, netizens could have their faces, movements, emails, online searches, and other external attributes recorded throughout the day. But at least they could take comfort that any unapproved thoughts about the Dear Leader and his regime were theirs and theirs alone. That is still true. But the robustness of this new technology indicates that the ability to fully read minds from brain data is not far off. Researchers in China in 2022 announced technology to measure a person’s loyalty to the Chinese Communist Party. A number of non-invasive brain-wave-reading helmets are on the U.S. market for wellness, education, and entertainment. Members of the California State Assembly and Senate were sufficiently alarmed by these developments to follow the example of Colorado and regulate this technology. The new law amends the California Consumer Privacy Act to include “neural data” under the protected category of “personal sensitive information.” On Saturday, Gov. Gavin Newsom signed that bill into law. Under this new law, California citizens can now request, delete, correct, and limit what neural data is being collected by big tech companies. We know what you’re thinking: would I be sufficiently concerned about my privacy to register with a state-mandated database to make changes to my privacy profile? Actually, that was just our best guess about what you’re thinking. But give it a few years. 
In the 2002 Steven Spielberg movie Minority Report, Tom Cruise plays John Anderton, a fugitive in a dystopian, film-noir future. As Anderton walks through a mall, he is haunted by targeted ads in full-motion video on digital billboards. The boards read Anderton’s retinas and scan his face, identify him, and call out “Hey, John Anderton!” – look at this Lexus, this new Bulgari fragrance, this special offer from Guinness!
Anderton looks harried as he and other passersby walk briskly and look straight ahead to avoid the digital catcalls around them. What was sci-fi in 2002 is reality in 2024. You’ve probably seen a digital billboard with vibrant animation and high production values. What’s not immediately apparent is that these billboards can also be interactive, based on face-scanning and the integration of mobile data exploited by the “out-of-home” advertising business. “Going about the world with the feeling that cameras are not just recording video but analyzing you as a person to shape your reality is an uncomfortable concept,” writes Big Brother Watch, a UK-based civil liberties and privacy organization, in a white paper, The Streets Are Watching You. The report documents examples of this tracking in practice across the UK.
This tracking is enabled by cameras and facial recognition and enhanced by the synthesis of consumers’ movement data, spatial data, and audience data, collected by our apps and reported to advertisers by our smartphones. Audience data is keyed to mobile advertising IDs (MAIDs), which cross-reference behavior on one app against others and match those insights with tracking software to create a personal profile. While supposedly anonymized, MAIDs can be reverse engineered to work out someone’s actual identity. We have an additional concern about hyper-targeted advertising and advertising surveillance. This sector is raising billions of dollars in capital to build out an infrastructure of surveillance in the UK. If this practice spreads across the United States, the data generated could easily be accessed by the U.S. federal government to warrantlessly surveil Americans. After all, about a dozen U.S. agencies – ranging from the FBI to the IRS – already purchase Americans’ digital data from third-party data brokers and access it without warrants. Congress can prevent this technology from being unfurled in the United States. The U.S. Senate can take the next step by passing the Fourth Amendment Is Not For Sale Act, already passed by the House, which forbids the warrantless collection of Americans’ most personal and sensitive data. In the meantime, go to p. 35 of Big Brother Watch’s “The Streets Are Watching You” report to see how Apple iPhone and Android users can protect themselves from phone trackers and location harvesting. We wouldn’t want to do what John Anderton did – have a technician pluck out our eyes and replace them with someone else’s. Replacing one’s face would presumably take a lot more work. The Texas Observer reports that the Texas Department of Public Safety (DPS) signed a five-year, nearly $5.3 million contract for the Tangles surveillance tool, originally designed by former Israeli military officers to catch terrorists in the Middle East.
In its acquisition plan, DPS references the 2019 murder of 23 people at an El Paso Walmart, as well as shooting sprees in the Texas cities of Midland and Odessa. If Tangles surveillance stops the next mass shooter, that will be reason for all to celebrate.

But Tangles can do much more than spot shooters on the verge of an attack (assuming it can actually do that). It uses artificial intelligence to scrape data from the open, deep, and dark web, compiling a privacy-piercing profile of anyone it targets. Its WebLoc feature can track mobile devices – and therefore people – across a wide geofenced area.

Unclear is how DPS will proceed now that the Fifth Circuit Court of Appeals, in United States v. Jamarr Smith, ruled that geofence warrants cannot be reconciled with the Fourth Amendment. If DPS does move forward, there will be nothing to keep the state’s warrantless access to personal data from migrating from searches for terrorists and mass shooters, to providing backdoor evidence in ordinary criminal cases, to buttressing cases with political, religious, and speech implications. As the great Texas writer Molly Ivins wrote: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.”

When we’re inside our car, we feel like we’re in our sanctuary. Only the shower is more private. Both are perfectly acceptable places to sing the Bee Gees’ “Stayin’ Alive” without fear of retribution.
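Geofenced tracking of the kind WebLoc performs is, at its core, conceptually simple. A minimal sketch follows – our own construction with hypothetical device IDs and coordinates, not Tangles’ actual implementation: gather device pings, then test which fall within a circle drawn around a point of interest.

```python
# Minimal sketch of a geofence query (our construction, not any vendor's
# actual API): which devices were seen inside a circle around a point?
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def devices_in_geofence(center, radius_m, pings):
    """pings: iterable of (device_id, lat, lon). Returns IDs seen inside the fence."""
    clat, clon = center
    return {dev for dev, lat, lon in pings
            if haversine_m(clat, clon, lat, lon) <= radius_m}

# Hypothetical pings: one near the fence center, one ~15 km away.
pings = [
    ("phone-A", 31.7692, -106.2620),
    ("phone-B", 31.9000, -106.4000),
]
print(devices_in_geofence((31.7691, -106.2619), 500, pings))  # {'phone-A'}
```

The privacy stakes are in the input, not the math: anyone holding a large enough pool of pings can run this query over any address – a clinic, a church, a newsroom – without a warrant.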
And yet the inside of your car is not as private as you might think. We’ve reported on the host of surveillance technologies built into the modern car – from tracking your movements and current location, to proposed microphones and cameras to prevent drunk driving, to seats that report your weight. All this data is transmitted and can be legally sold by data brokers to commercial interests as well as to a host of government agencies. This data can also be misused by individuals, as when a woman going through divorce proceedings learned that her ex was stalking her by following the movements of her Mercedes.

Now another way to track our behavior and movements is being added through a national plan announced by the U.S. Department of Transportation called “vehicle-to-everything” technology, or V2X. Kimberly Adams of marketplace.org reports that this technology, to be deployed on 50 percent of the National Highway System and at 40 percent of the country’s intersections by 2031, will allow cars and trucks to “talk” to each other, coordinating to reduce the risk of collision. V2X will smooth out traffic in other ways, holding traffic lights green for emergency vehicles and sending out automatic alerts about icy roads.

V2X is also yet one more way to collect a big bucket of data about Americans that can be purchased and warrantlessly accessed by federal intelligence and law enforcement agencies. Sens. Ron Wyden (D-OR) and Cynthia Lummis (R-WY), and Rep. Ro Khanna (D-CA), have addressed what government can do with car data in proposed legislation, the Closing the Warrantless Digital Car Search Loophole Act. This bill would require law enforcement to obtain a warrant based on probable cause before searching data from any vehicle that does not require a commercial license to operate. But the threat to privacy from V2X comes not just from cars that talk to each other, but also from the highway infrastructure that enables this digital conversation.
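The safety idea behind V2X is straightforward. The sketch below is a simplified illustration of ours – real deployments broadcast standardized Basic Safety Messages about ten times per second over dedicated radio, not Python objects, and the field names here are our own: each vehicle announces its position and speed, and receivers compute a time-to-collision to decide whether to warn the driver.

```python
# Simplified sketch of V2X collision avoidance (our construction): each
# vehicle broadcasts position and speed; a receiver computes how many
# seconds remain before a faster follower reaches a slower lead vehicle.
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    vehicle_id: str   # note: every message pairs an ID with a precise location
    x_m: float        # position along the road, meters
    speed_mps: float  # speed, meters per second

def time_to_collision(lead, follower):
    """Seconds until the follower closes the gap to the lead vehicle,
    or None if the follower is not behind and closing."""
    gap = lead.x_m - follower.x_m
    closing = follower.speed_mps - lead.speed_mps
    if gap <= 0 or closing <= 0:
        return None
    return gap / closing

lead = SafetyMessage("veh-1", x_m=100.0, speed_mps=10.0)     # slow car ahead
follower = SafetyMessage("veh-2", x_m=40.0, speed_mps=30.0)  # fast car behind
ttc = time_to_collision(lead, follower)
if ttc is not None and ttc < 4.0:
    print(f"collision warning: {ttc:.1f}s")  # 60 m gap / 20 m/s closing = 3.0s
```

Notice what makes the safety feature work is exactly what makes it a surveillance feature: a continuous, machine-readable stream of who is where, moving how fast – broadcast to every receiver in range, including the roadside infrastructure.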
This addition to the rapid expansion of data collection on Americans is one more reason why the Senate should follow the example of the House and pass the Fourth Amendment Is Not For Sale Act, which would end the government’s warrantless collection of Americans’ purchased data. We can embrace technologies like V2X that can save lives, while at the same time making sure the personal information they collect about us is not retained and sold to snoops, whether government agents or stalkers.

The phrase “national security” harks back to the George Washington administration, but it wasn’t until the National Security Act of 1947 that the term was codified into law. This new law created the National Security Council, the Central Intelligence Agency, and much of the apparatus of what we today call the intelligence community. But the term itself – “national security” – was never defined.
What is national security? More importantly, what isn’t national security?

Daniel Drezner, a professor at the Fletcher School of Law and Diplomacy, writes in Foreign Affairs that it was the Bush-era “war on terror” that put the expansion of the national security agenda into overdrive. Since then, he writes, the “national security bucket has grown into a trough.” The term has become a convenient catch-all for politicians to show elevated concern about the issues of the day. Drezner writes: “From climate change to ransomware to personal protective equipment to critical minerals to artificial intelligence, everything is national security now.” He adds to this list the Heritage Foundation’s Project 2025, which designates big tech as a national security threat, and the 2020 National Security Strategy document, which says the same of “global food insecurity.” We would add the call by politicians in both parties to treat fentanyl as a matter of national security.

While some of these issues are clearly relevant to national security, Drezner’s concern is the strategic fuzziness that results when everything is defined as a national security priority. He criticizes Washington’s tendency to “ratchet up” new issues like fentanyl distribution without removing any old issues to keep priorities few and urgent.

For our part, PPSA has a related concern – the expansion of the national security agenda has a nasty side effect on Americans’ privacy. When a threat is identified as a matter of national security, it also becomes a justification for the warrantless surveillance of Americans. It is one thing for the intelligence community to use FISA Section 702 authority for the purpose for which Congress enacted it – the surveillance of foreign threats on foreign soil. If fentanyl is a national security issue, for example, then it is appropriate to surveil the Chinese labs that manufacture the drug and the Mexican cartels that smuggle it.
But Section 702 can also be used to warrantlessly inspect the communications of Americans when a crime is framed as a matter of national security. Evidence might also be warrantlessly extracted from the vast database of American communications, online searches, and location histories that federal agencies purchase from data brokers. So the surveillance state can now dig up evidence against Americans for prosecution in drug crimes without these defendants ever knowing how that evidence was developed – surely a fact relevant to their defense.

As the concept of national security becomes fuzzier, so too do the boundaries of what “crimes” can be targeted by the government with warrantless surveillance. “Trafficking” in critical minerals? Climate change violations? Repeating alleged foreign “disinformation”? When Americans give intelligence and law enforcement agents a probable-cause reason to investigate them, a warrant is appropriate. But the ever-expanding national security agenda presents a flexible pretext for the intelligence community to find ever more reasons to set aside the Constitution and spy on Americans without a warrant.

Drezner writes that “if everything is defined as national security, nothing is a national security priority.” True. And when everything is national security, everyone is subject to warrantless surveillance.