Imagine a law enforcement agent – an FBI agent, or a detective in a large police department – who wants to track people passing out leaflets. Current technology might use facial recognition to search for specific people who are known activists, prone to such activity. Or the agent could try not to fall asleep while watching hours of surveillance video to pick out leaflet-passers. Or, with enough time and money, the agent could have an AI system analyze endless hours of crowds and human behavior, eventually training it to recognize the act of leaflet passing, probably with mixed results. A new technology, Vision Language Models (VLMs), is a game-changer for AI surveillance, as a modern fighter jet is to a biplane. In our thought experiment, all the agent would have to do is instruct a VLM system, “target people passing out leaflets.” And she could go get a cup of coffee while it compiled the results. Jay Stanley, ACLU Senior Policy Analyst, in a must-read piece, says that a VLM – even if it had never been trained to spot a zebra – could leverage its “world knowledge (that a zebra is like a horse with stripes).” As this technology becomes cheaper and commercialized, Stanley writes, you could simply tell it to look out for kids stepping on your lawn, or to “text me if the dog jumps on the couch.” “VLMs are able to recognize an enormous variety of objects, events, and contexts without being specifically trained on each of them,” Stanley writes. “VLMs also appear to be much better at contextual and holistic understandings of scenes.” They are not perfect. Like facial recognition technology, VLMs can produce false results. Does anyone doubt, however, that this new technology will only become more accurate and precise with time? The technical flaw in Orwell’s 1984 is that each of those surveillance cameras watching a target human required another human to watch that person eat, floss, sleep – and try not to fall asleep themselves. 
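The one-line instruction in the thought experiment maps onto a simple loop: send each camera frame to the model along with the agent’s prompt, and keep whatever the model flags. Here is a minimal sketch of that workflow. Everything in it is hypothetical: `query_vlm` stands in for a real vision-language-model call (in practice, an API request carrying the image and the prompt) and is stubbed with canned scene descriptions so the control flow can run end to end.

```python
# Toy sketch of a prompt-driven VLM surveillance loop.
# All names and data here are illustrative, not a real product's API.

PROMPT = "Is anyone in this frame passing out leaflets? Answer yes or no."

# Stand-ins for decoded video frames; a real system would pass pixel data.
FRAMES = [
    {"id": 1, "scene": "commuters walking past a newsstand"},
    {"id": 2, "scene": "a person handing leaflets to passers-by"},
    {"id": 3, "scene": "a street musician playing guitar"},
    {"id": 4, "scene": "two people handing out leaflets near a subway exit"},
]

def query_vlm(frame: dict, prompt: str) -> str:
    """Hypothetical VLM call, stubbed: answers from the frame's scene text."""
    return "yes" if "leaflet" in frame["scene"] else "no"

def flag_frames(frames: list, prompt: str) -> list:
    """Return the ids of frames the (stubbed) VLM flags as matching the prompt."""
    return [f["id"] for f in frames if query_vlm(f, prompt) == "yes"]

print(flag_frames(FRAMES, PROMPT))  # frames 2 and 4 are flagged
```

The point of the sketch is the shape of the system, not the stub: no task-specific training happens anywhere, only a natural-language prompt applied to every frame, which is exactly what makes the technology so cheap to repurpose.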
But VLMs make those ever-watching cameras watch for the right things. In 1984, George Orwell’s Winston Smith ruminated: “It was terribly dangerous to let your thoughts wander when you were in a public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide.” Thanks to AI – and now to VLMs – the day is coming when a government official can instruct a system, “show me anyone who is doing anything suspicious.” Coming soon, to a surveillance state near you …

Can the Government Access “An Entire Haystack Because It May Contain a Needle?”

The drafters of the U.S. Constitution could not have imagined Google, Apple, and cell-site technologies that can vacuum up the recorded movements of thousands of people. Still smarting from the British colonial practice of ransacking rows of homes and warehouses with “general warrants,” the founders wrote the Fourth Amendment to require that warrants “particularly” describe “the place to be searched, and the persons or things to be seized.” Courts are still grappling with this issue of “particularity” in geofence warrants – which analyze mass location data to winnow out suspects. Now a federal court in Mississippi has come down decisively against non-particular searches of location-and-time-based cell tower data. To reach this conclusion, Judge Andrew S. Harris had to grapple with a Grand Canyon of circuit splits on this question. His opinion is a concise and clear dissection of divergent precedents from two higher circuit courts. Harris begins with the Fourth Circuit Court of Appeals in Virginia in United States v. 
Chatrie (2024), which held that because people know that tech companies collect and store location information, a defendant has no reasonable expectation of privacy in it. The Fourth Circuit reached its decision, in part, because Google users must “opt in to Location History” to enable Google to track their locations. The Fifth Circuit Court of Appeals in New Orleans took the Fourth Circuit’s reasoning and chopped it up for jambalaya. The Fifth drew heavily on the U.S. Supreme Court’s 2018 Carpenter v. United States opinion – which held that the government’s request for seven days’ worth of location tracking from a man’s wireless carrier constituted an unconstitutional search. This data, the Supreme Court reasoned, deserves protection because it provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” Despite a long string of cases holding that people have no legitimate expectation of privacy when they voluntarily turn over personal information to third parties, the U.S. Supreme Court held that a warrant was needed in this case. The Fifth followed up on Carpenter’s logic with a fine distinction in United States v. Smith (2024): “As anyone with a smartphone can attest, electronic opt-in processes are hardly informed and, in many instances, may not even be voluntary.” That court concluded that the government’s acquisition of Google data must conform to the Fourth Amendment. The Fifth thus declared that geofence warrants are modern-day versions of general warrants and are therefore inherently unconstitutional. That finding surely rattled windows in every FBI, DEA, and local law enforcement agency in the United States. Judge Harris worked from these precedents when he was asked to review four search-warrant applications for location information from a data dump from a cell tower. The purpose of the request was not trivial. 
An FBI Special Agent wanted to see if he could track members of a violent street gang implicated in a number of violent crimes, including homicide. The government wanted the court to order four cell-service providers to produce data for 14 hours for every targeted device. Judge Harris wrote that the government “is essentially asking the Court to allow it access to an entire haystack because it may contain a needle. But the Government lacks probable cause both as to the needle’s identifying characteristics and as to the many other flakes of hay in the stack … the haystack here could involve the location data of thousands of cell phone users in various urban and suburban areas.” So Judge Harris denied the warrant applications. Another court in another circuit may well have come to the opposite conclusion. Such a deep split on a core constitutional issue is going to continue to deliver contradictory rulings until it is resolved by the U.S. Supreme Court. In the meantime, Judge Harris – a graduate of the University of Mississippi Law School – brings to mind the words of another Mississippian, William Faulkner: “We must be free not because we claim freedom, but because we practice it.”

Americans value privacy in the marketplace when we vote with our dollars no less than when we go behind the curtains of a polling booth. Now imagine if every dollar in our possession came with an RFID chip, like those used for highway toll tags or employee identification, telling the government who had that dollar in their hands, how that consumer spent it, and who acquired it next. That would be the practical consequence of a policy proposal being promoted now in Washington, D.C., to enact a Central Bank Digital Currency (CBDC). Some have recently asked Congress to attach such a currency to the Bank Secrecy Act, to enable surveillance of every transaction in America. Such a measure would end all financial privacy, whether for a donation to a cause or money to a friend. 
“If not designed to be open, permissionless, and private – resembling cash – a government-issued CBDC is nothing more than an Orwellian surveillance tool that would be used to erode the American way of life,” said Rep. Tom Emmer (R-MN). This would happen because a CBDC is a digital currency, issued on a digital ledger under government control. It would give the government the ability to surveil Americans’ transactions and, in the words of Rep. Emmer, “choke out politically unpopular activity.” The good news is that President Trump is alert to the dangers posed by a CBDC. One of his first acts in his second term was to issue an executive order forbidding federal agencies from exploring a CBDC. But the hunger of the bureaucracy in Washington, D.C., for close surveillance of Americans’ daily business is near constant. There is no telling what future administrations might do. Rep. Emmer reintroduced his CBDC Anti-Surveillance State Act to prevent the Fed from issuing a CBDC, either directly or indirectly through an intermediary. Rep. Emmer’s bill also would prevent the Federal Reserve Board from using any form of CBDC as a tool to implement monetary policy. The bill ensures that the Treasury Department cannot direct the Federal Reserve to design, build, develop, or issue a CBDC. Prospects for this bill are good. Rep. Emmer’s bill passed the House in the previous Congress. It doesn’t hurt that Rep. Emmer is the House Majority Whip and that this bill neatly fits President Trump’s agenda. So there is plenty of reason to be hopeful that Americans will be permanently protected from a surveillance currency. But well-crafted legislation alone won’t prevent the federal bureaucracy from expanding financial surveillance, as it has done on many fronts. PPSA urges civil liberties groups and Hill champions of surveillance reform, of all political stripes and both parties, to unite behind this bill. 
We’re not sure which is most disconcerting: that Meta has a division named Global Threat Disruption, that its idea of said global threats includes deepfake celebrity endorsements, or that this has become its excuse to reactivate the controversial facial recognition software it shelved just three years earlier (so much for the “Delete” key). Meta has relaunched DeepFace to defend against celebrity deepfakes in South Korea, Britain, and even the European Union. “Celeb-baiting,” as it’s known, is a scam in which fraudsters populate their social media posts with images or AI-generated video of public figures. Convinced that they’re real – that Whoopi Goldberg really is endorsing a revolutionary weight-loss system, for example – unwitting victims fork over their data and money with just a few clicks. All of which, according to Meta, “is bad for people that use our products.” Celeb-baiting is a legitimate problem, to be sure. We’re no fans of social media scammers. What’s more, we know full well that “buyer beware” is meaningless in a world where it is increasingly difficult to spot digital fakes. But in reviving its facial recognition software, Meta may be rolling out a cannon to kill a mosquito. The potential for collateral damage inherent in this move is, in a word, staggering. Just ask the Uighurs in Xi’s China. Meta began tracking the faces of one billion users in 2015. And initially, it didn’t bother to tell people the technology was active, so users couldn’t opt out. As a result of Meta’s sleight of hand – and because of its own strict privacy laws – the EU cried foul and banned DeepFace from being implemented. But that was years ago … and how times have changed. The privacy-minded Europeans are now letting Meta test DeepFace to help public figures guard against their likenesses being misused. But can regular users be far behind? Meta could rebuild its billion-face database in no time. For its part, the U.K. 
is courting artificial intelligence like never before, declaring that it will help unleash a “decade of national renewal.” Even for a country that never met a facial recognition system it didn’t love, this feels like a bridge too far. We have written about the dangers, both real and looming, of a world in which facial recognition technology has become ubiquitous. When DeepFace was shelved in 2021, it represented an almost unheard-of reversal, in effect putting the genie (Mark Z, not Jafar) back in the bottle. That incredibly lucky bit of history is unlikely to repeat itself. Genies never go back in their bottles a second time.

A letter from Tulsi Gabbard, the new director of national intelligence, in response to a recent letter from Sen. Ron Wyden (D-OR) and Rep. Andy Biggs (R-AZ), is a good sign that the new boss is not the same as the old boss. What is most remarkable about Director Gabbard’s letter is that it exists at all. Many letters from Members of Congress in the past seemed to disappear into interstellar space. Or, when the government did deign to answer them, it was often with the overcautious double-speak that avoids promises, commitments, or even judgment. Gabbard’s reply to Sen. Wyden and Rep. Biggs is prompt, direct, and actually responsive to the concerns of these two critics of surveillance abuse. She speaks directly about the secret order issued by the UK Home Secretary instructing Apple to create a backdoor capability in its iCloud feature that would allow the British government to access the personal data of any customer in the world. Gabbard writes that the UK government did not inform her office of this order, which seems like an astonishing breach of protocol for a “Five Eyes” ally with which the United States shares mutual intelligence. 
Gabbard refers to the UK’s Investigatory Powers Act of 2016, also known as the “Snoopers’ Charter,” which allowed London to gag Apple from voicing its concerns, even secretly with the U.S. government. Regarding the UK’s pressure on Apple, Gabbard writes:
“Any information sharing between a government – any government – and private companies must be done in a manner that respects and protects the U.S. law and the Constitutional rights of U.S. citizens.” She closes her letter by referring to obligations to protect “both the security of our country and the God-given rights of the American people enshrined in the U.S. Constitution.” Missing from Director Gabbard’s letter is the oblique and lawyerly tone of past administrations. We applaud Gabbard for her responsiveness and encourage her to continue to break with her predecessors in a new spirit of openness and a real concern for Americans’ constitutional rights.

British Consumers Should Protest “Disrespectful Government”

Apple just killed the encrypted services enabled by its Advanced Data Protection tool in the United Kingdom rather than allow the British government to use them as a warrantless spy device on customers worldwide. Forced into this action by a draconian government order, Apple will remove a widely used service from the hands of millions of British customers. Encryption allows users to maintain the same level of privacy they would expect in a private conversation. This privacy allows victims to hide from stalkers, women and children to report abuse, dissidents to communicate around tyrants, and people to keep snooping governments out of their lives.

The Backstory

Apple designed its Advanced Data Protection with end-to-end encryption so well that the company itself doesn’t have the ability to review a customer’s items stored on iCloud, such as their notes, images, text message backups, and web bookmarks. Only customers can decrypt their own data. Two weeks ago, the UK Home Office ordered Apple to build a backdoor to grant the British government access to users’ data under the UK Investigatory Powers Act. 
Worse, the order demanded that Apple provide a backdoor to global communications, giving British investigators access to the private data of Americans and everyone else. Apple’s action appears to prevent that expansion to global surveillance.

PPSA’s Statement

Bob Goodlatte, PPSA Senior Policy Advisor and former Chairman of the House Judiciary Committee, issued this statement: “It is a shame that the law-abiding citizens of the United Kingdom will lose access to a well-regarded encryption system because the British government does not respect their right to privacy. People who are able to keep their personal and business records and financial transactions protected by using encryption are far safer and prevent far more crime than if anyone, including well-meaning but inevitably careless governments, have so-called back-door keys that eventually always fall into the wrong hands. Thank goodness Americans have a Bill of Rights to protect their freedom. We must never take it for granted. “The British people should demand nothing less from a disrespectful government.”

Texas AG Goes After Deceptive Data Practices by Car and Insurance Companies

Texas is already putting its data privacy statute, passed in 2024, to good use. Part of the state’s broader data privacy and security initiative is a recent lawsuit against General Motors that alleges the unlawful collection and sale of drivers’ data, including selling it to insurance companies that used it to raise insurance rates. Texas Attorney General Ken Paxton also sued insurer Allstate and its Arity subsidiary for unlawfully collecting data on Texas drivers through their mobile apps. Arguing that Texas drivers were unwittingly buying into a “comprehensive surveillance system,” General Paxton charges that, beginning in 2015, GM collected detailed data every time Texas drivers used their vehicles. According to the lawsuit:
But wait, there’s more! The lawsuit further alleges that GM:
The data GM collected and sold also allegedly included the date and duration of every drive, speed, seatbelt status, and more. What next? Recording our private conversations? Obtaining consent is always a good idea, but burying it inside an interminable user agreement written in legalese appears to be at the heart of General Paxton’s case against GM. Fifty-plus pages of electronic gobbledygook full of dry product descriptions, confusing user terms, and misleading “privacy notices” are the opposite of transparency. Imagine patients signing “I Agree” at the doctor’s office to give physicians the right to harvest their kidneys just because it happens to be in the fine print. The reality is that we never read those interminable user agreements. “Direct” consent is what the law requires, and we hope lawsuits like this one will bring common-sense standards to bear on what has become a completely unwieldy, impractical, and utterly unfair business practice.

AI Inventor Muses About the Authoritarian Potential of General AI

Robert Oppenheimer was famously conflicted about his work on the atomic bomb, as was Alfred Nobel after inventing dynamite. One supposes any rational non-sociopath would be. But imagine if Alexander Graham Bell had voiced similar misgivings about the widespread use of telephones, or Edison about electrification. When Morse transmitted “What hath God wrought?” as the first official telegraph message, it was meant as an expression of wonder, even optimism. We expect weapons of destruction to come with warnings. By contrast, technological revolutions that improved human existence have rarely come with dire predictions, much less from their inventors. So it’s a bit jarring when it happens. And with artificial intelligence, it’s happening. Geoffrey Hinton, the “godfather of AI,” quit Google after warning about its dangers and later told his Nobel Prize audience, “It will be comparable with the Industrial Revolution. 
But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.” Now enter Sam Altman, the man whose company, OpenAI, brought artificial intelligence into the mainstream. In a blog post published this week, Altman opened with his own paraphrase of “But this feels different.” Hinton and Altman are both referring to what many consider the inevitable turning point in the coming AI revolution – the advent of artificial general intelligence, or AGI. In short, this will be when almost every computer-based system we encounter is as smart as or smarter than us. “We never want to be reckless,” Altman writes in the blog (emphasis added). “We believe that trending more towards individual empowerment is important,” Altman writes; “the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.” To be fair, OpenAI was founded with the goal of preventing AGI from getting out of hand, so perhaps his somewhat conflicted good cop/bad cop perspective is to be expected. Yet that hasn’t stopped Altman from taking what might someday be seen as the “self-fulfilling prophecy” step on our road to perpetual surveillance. Altman is partnering with Oracle and others in a joint venture with the U.S. government to build an AI infrastructure system, the Stargate Project. Two weeks after the venture was announced, his blog acknowledges the need for a “balance between safety and individual empowerment that will require trade-offs.” What to make of all this? Sam Altman is a capitalist writ large. He believes in the American trinity of money, freedom, and individualism. So when he feels compelled to ponder the looming potential of a technocratic authoritarian superstate arising from his brainchild, he is to be believed. 
Altman dances ever-so-deftly around the potential dangers of mass surveillance in the hands of an AGI-powered authoritarian state, but it’s there. AI is the glue that makes a surveillance state work. This is already happening in the People’s Republic of China, where AI drinks in the torrent of data from a national facial recognition system and total social media surveillance to follow netizens and any wayward expressions of belief or questioning of orthodoxy. Altman is worried that the technology he’s helping to unleash on the world could prove to be the fundamental unraveling of individual liberty, and of democracy itself. One last thing worth noting: Sam Altman is an apocalypse-prepper. “I try not to think about it too much,” he told The New Yorker in 2016. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” Just imagine what he isn’t telling us.

Internet Imperialism: UK Demands Access to Encrypted Accounts of All Apple Customers Worldwide

2/10/2025
“I have as much privacy as a goldfish in a bowl,” Princess Margaret once said, despairing of the paparazzi. Now, thanks to the Home Secretary of the United Kingdom, you too can feel like royalty. The British government has recently issued a secret order demanding a backdoor to all of Apple’s encrypted communications. From time to time in the United States, the Justice Department has demanded that Apple help it jailbreak a suspect’s iPhone. Apple stoutly refuses to bend the knee, knowing that granting one such demand would create a backdoor that would destroy Apple’s privacy promise forever. Since 2022, Apple has allowed users to opt for Advanced Data Protection, in which no one but the user can access encrypted messages in iMessage. Now London is not demanding, as the Justice Department did, that Apple create backdoors to the individual accounts of suspected criminals. Instead, London is demanding backdoor access to all encrypted material – messages, texts, and images – stored on the cloud by all Apple customers around the world, including U.S. citizens. “The British government’s demand is breathtaking by comparison,” said Erik Jaffe, President of PPSA. “It is nothing less than internet imperialism. We had a revolution, left the British Empire, and adopted the Fourth Amendment in part because of the abusive, unreasonable, and warrantless searches performed by agents of the Crown,” Jaffe said. “We should not tolerate the reimposition of such British high-handedness.” Meredith Whittaker, president of the nonprofit Signal app, told The Washington Post, “If implemented, the directive will create a dangerous cybersecurity vulnerability in the nervous system of our global economy.” Given the breathtaking scope of this order, it is likely only a matter of time before similar orders are directed at Meta’s encrypted WhatsApp backups. Signal and Telegram services might be next. This is a terrible precedent with terrible consequences. 
With the UK now demanding a backdoor, expect China and other authoritarian regimes to follow suit. The witless Whitehall nanny-staters overlook the value of encryption in protecting dissidents from tyrants, journalists from homicidal cartels, and even law enforcement itself from organized criminals and state actors. Once this backdoor gets into the wild – and it will – women and children will have far less protection against stalkers and abusers. Inventors and businesses will also be exposed to industrial espionage by competitors and China. Everyday consumers who simply value their privacy will be betrayed. It is out of concern for the human right to privacy that the European Court of Human Rights rejected a Russian law that would have broken encryption. Now, what Vladimir Putin could not achieve, the British government is happy to do for him. This is just the latest sign that official attitudes toward personal privacy have crossed a threshold into authoritarian thinking. There is nothing shocking or unusual about privacy in communications. It has been the de facto rule for most person-to-person communications for all of human history. Once a government whets its appetite for your personal information, it will almost always seek more. “Efforts to give the government back-door access around encryption is no different than the government pressuring every locksmith and lock maker to give it an extra key to every home and apartment,” Jaffe said. Now the same country that celebrates Sir Edward Coke, the 17th-century jurist who declared that every person’s home is his “Castle and Fortress,” is busy forging digital keys. PPSA urges the U.S. government to exert its diplomacy and defend Americans’ privacy.

The unanimous U.S. Supreme Court opinion upholding the forced sale of TikTok is a necessary first step toward reining in the wholesale exploitation of Americans’ data. But it is only a first step. Gaping vulnerabilities remain. 
Let’s first consider this ruling, its reasoning, and its implications:

The Supreme Court’s Thinking

TikTok is owned by ByteDance, a Chinese company that is obligated to share all of its data with the regime in Beijing. Consider that any data collected by TikTok is ready-made material for blackmail, corporate espionage, and weaponization by the Chinese state. What’s at risk, specifically? Just ask the Court, which affirmed that TikTok’s “data collection practices extend to age, phone number, precise location, internet address, device used, phone contacts, social network connections, the content of private messages sent through the application, and videos watched.” But the issue is even bigger. In his concurrence, Justice Neil Gorsuch wrote: “The record before us establishes that TikTok mines data both from TikTok users and about millions of others who do not consent to share their information … TikTok can access ‘any data’ stored in a consenting user’s ‘contact list’ – including names, photos, and other personal information about unconsenting third parties.” It is for these reasons that the Court unanimously found that “the Act is sufficiently tailored to address the Government’s interest in preventing a foreign adversary from collecting vast swaths of sensitive data about the 170 million U.S. persons who use TikTok.”

The Court’s Respect for the First Amendment

Justice Gorsuch’s concurrence showed great deference to the First Amendment. 
“Too often in recent years,” he wrote, “the government has sought to censor disfavored speech online, as if the internet were somehow exempt from the full sweep of the First Amendment.” Justice Gorsuch noted that in this case the Court “rightly refrains from endorsing the government’s asserted interest in preventing ‘the covert manipulation of content’ as a ‘justification for the law before us … One man’s ‘covert content manipulation’ is another’s ‘editorial discretion.’ Journalists, publishers, and speakers of all kinds routinely make less-than-transparent judgments about what stories to tell and how to tell them.” As we’ve written before, it would be a violation of the First Amendment to close a newspaper that ran Chinese disinformation and propaganda. In that instance, policymakers would have to rely both on other media to expose that newspaper and on the good sense of the American people. But if a newspaper came with newsprint that seeped into the fingertips of readers to release a carcinogen, closure would be lawful, necessary, and proper. The Protecting Americans from Foreign Adversary Controlled Applications Act is a law in that vein – and the Court was right to uphold it. Justice Gorsuch also praised the Court for declining to consider the government’s classified evidence, which was withheld from TikTok and its lawyers. He wrote: “Efforts to inject secret evidence into judicial proceedings present obvious constitutional concerns.”

Americans Still Data-Naked Before the World

The People’s Republic of China is a unique threat to Americans’ privacy. And it is far from contained. Outgoing FBI Director Christopher Wray has warned that Chinese-controlled shell companies can also gain access to our data. But China is far from the only threat. 
One thing China, as a foreign entity, cannot do is smash your door open with a battering ram at 4 a.m., pull you out of bed, and prosecute you on the basis of evidence that you will never see and that will never be presented in court. But U.S. domestic law enforcement can do that. The FBI does this by purchasing your personal information from third-party data brokers and examining it without a warrant. This is the very same “backdoor loophole” acknowledged by Pam Bondi in her confirmation hearing as attorney general. Other agencies, ranging from the IRS to the DEA to the Department of Homeland Security, are also purchasing and using our data – information that is often more personal than a diary. Today’s Court ruling suggests there are next steps to protecting Americans’ privacy. One would be to take Justice Gorsuch’s constitutional concerns about injecting secret evidence into judicial proceedings and apply them to the state secrets privilege. That insidious, time-worn doctrine has long prevented defendants from knowing the evidence against them when it was gleaned by government surveillance.

The Bottom Line

The upholding of the TikTok law mandating a sale was a good first step toward securing digital privacy for Americans. But much more needs to be done to protect Americans. Another needed action would be final passage of the Fourth Amendment Is Not for Sale Act, which would require U.S. federal agencies to obtain a warrant before inspecting our purchased data. The House passed this legislation in 2024. It should pass this Congress and go to the president’s desk for signature this year. Today’s ruling is a fine start. But we’ve got a long way to go to restore privacy and the Fourth Amendment to American life.

What’s Behind Apple’s $95 Million Privacy Settlement?

The news that Apple has agreed to a preliminary $95 million settlement to resolve a lawsuit about secret recordings of consumers by its virtual assistant Siri presents us with more questions than answers. 
This lawsuit began five years ago in the aftermath of a story in The Guardian reporting that less than 1 percent of daily “Hey Siri” activations were being analyzed to improve the virtual assistant and understand human diction. Contractors listened to short snippets of pseudonymous conversations. Along the way, contractors heard confidential medical information, drug deals, and couples having sex. One contractor told The Guardian that in one conversation “you can definitely hear a doctor and patient.” The lawsuit Apple settled alleges that the company not only listened in on conversations but sold them to advertisers. Plaintiffs claim that casual mentions of Air Jordan sneakers and the Olive Garden restaurant triggered ads for these products. Another plaintiff alleges that a private conversation with a doctor about a brand-name surgical treatment triggered ads for that service in his social media feeds. This is a scandal, if true. But we’re withholding judgment. One reason Apple’s valuation is the largest in the world is its commitment to privacy, which CEO Tim Cook calls “a fundamental human right.” In the settlement, in which trial lawyers are set to walk away with about one-third of the take, Apple refused to acknowledge wrongdoing. An Apple spokesman told Fox News, “Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose.” We want to believe Apple. Yet we have to say that, concerning all virtual assistants, we’ve noticed like everyone else a strange correlation between random mentions of products or vacation destinations and the ads that pop up in our feeds. Experts chalk this up to cognitive bias: we often search for these items, or visit websites that prompt us to think about them, and then forget that we did. Maybe. But this happens often enough, with very specific items, that it still makes us wonder about what Siri, Alexa, and the rest are taking in. 
And if they are taking in our private conversations, are federal agencies also able to take them in as well? We may learn more when the settlement goes to U.S. District Judge Jeffrey White for approval in a federal courtroom in Oakland next month. In the meantime, try this: Sit next to Siri or Alexa, converse with a friend about Albanian beach vacations, and see what pops up in your feed. Should you be reading this blog? If you’re at work, on a computer provided for you by your employer, is the content of this blog sufficiently work-related for you to justify to your employer the time you’ve spent reading it? Tracking your search history and the time you spend on particular websites during your working hours are just some of the most obvious ways employers track employees. Now a research paper from Cracked Labs, a non-profit based in Austria, produced with help from other non-governmental organizations and an Oxford scholar, has mapped out dozens of technologies that allow companies to track employees’ movements and activities at the office. In “Tracking Indoor Location, Movement, and Desk Occupancy in the Workplace,” Cracked Labs demonstrates how vendors are selling technology that pairs wireless networking with Bluetooth technology to follow employees in their daily movements. The former can pinpoint the location of smartphones, laptops, and other devices employees use and often carry. Bluetooth beacons can link to badges, security cameras, and video conferencing systems to track employee behavior. Quoting marketing literature from Cisco, Cracked Labs writes: “Companies can get a ‘real time view of the behavior of employees, guests, customers and visitors’ and ‘profile’ them based on their indoor movements in order to ‘get a detailed picture of their behavior.’” Tracking 138 people with 11 Wi-Fi points, Cisco claims, generated several million location records.
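The mechanics behind such beacon-based tracking are simple enough to sketch. The toy Python snippet below uses the standard log-distance path-loss model to turn a Bluetooth beacon's received signal strength (RSSI) into a rough distance, then buckets it into the kind of zones a desk-occupancy product might report. The calibration values (-59 dBm at one meter, path-loss exponent 2.0) and the zone labels are illustrative assumptions, not any vendor's actual parameters.

```python
# Toy sketch of how a Bluetooth-beacon system might estimate how close an
# employee's badge or phone is to a fixed sensor. Calibration constants here
# are assumptions for illustration, not real product parameters.

def estimate_distance_m(rssi_dbm: float,
                        measured_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in meters from a BLE RSSI reading
    using the log-distance path-loss model."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def classify_presence(rssi_dbm: float) -> str:
    """Bucket a reading into the zones a desk-occupancy product might log."""
    d = estimate_distance_m(rssi_dbm)
    if d < 2.0:
        return "at desk"
    if d < 10.0:
        return "nearby"
    return "away"

if __name__ == "__main__":
    # A stronger (less negative) RSSI implies the device is closer.
    for rssi in (-55, -70, -90):
        print(rssi, round(estimate_distance_m(rssi), 1), classify_presence(rssi))
```

Multiply a reading like this by every badge, every sensor, and every few seconds of the workday, and the "several million location records" Cisco describes follow naturally.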
Not to be outdone, a European vendor, Spacewell, installs sensors in ceilings, next to doors, and even under desks to track “desk attendance.” Nicole Kobie of ITPro reports that one in five office workers is now being monitored by some kind of activity tracker. She also reports survey findings that tracked employees are 73 percent more likely to distrust their employer, and twice as likely to be job hunting, as those who are not tracked in their workplace. Cracked Labs concludes: “Once deployed in the name of ‘good,’ whether for worker safety, energy efficiency, or just improved convenience, these technologies normalize far-reaching digital surveillance, which may quickly creep into other purposes.” It is not difficult to imagine that such surveillance could be used by a rogue manager for stalking, to find out who is gathering around the water cooler or kitchen, or to find something to embarrass an office rival. Even when these technologies are used for their stated purposes, we all lose something when privacy is degraded to this extent. Now, how was that for work-related content? Investigative journalist Ronan Farrow delves into the Pandora’s box that is Israel’s NSO Group, a company (now on a U.S. Commerce Department blacklist) that unleashes technologies allowing regimes and cartels to transform any smartphone into a comprehensive spying device. One NSO brainchild is Pegasus, software that reports every email, text, and search performed on a smartphone, while turning its cameras and microphones into 24-hour surveillance devices. It’s enough to give Orwell’s Big Brother feelings of inadequacy. Farrow covers well-trodden stories he has long followed in The New Yorker, also reported by many U.S. and British journalists, and well explored in this blog. Farrow recounts the litany of crimes in which Pegasus and NSO are implicated.
These include Saudi Arabia’s murder of Jamal Khashoggi, the murder of Mexican journalists by the cartels, and the surveillance of pro-independence politicians in Catalonia and their extended families by Spanish intelligence. In the latter case, Farrow turns to Toronto-based Citizen Lab to confirm that one Catalonian politician’s sister and parents were comprehensively surveilled. The parents were physicians, so Spanish intelligence also swept up the confidential information of their patients. While the reality portrayed by Surveilled is a familiar one to readers of this blog, it drives home the horror of NSO technology as only a documentary with high production values can. Still, this documentary could have been better. The show is marred by too many reaction shots of Farrow, who frequently mugs for the camera. It also leaves unasked obvious follow-up questions for Rep. Jim Himes (D-CT), Ranking Member of the House Intelligence Committee. In his sit-down with Farrow, Himes made the case that U.S. agencies need to have copies of Pegasus and similar technologies, if only to understand the capabilities of bad actors like Russia and North Korea. Fair point. But Rep. Himes seems oblivious to the dangers of such comprehensive spyware in domestic surveillance. Rep. Himes says he is not aware of Pegasus being used domestically. Yet it was deployed by Rwandan spies to surveil the phone of U.S. resident Carine Kanimba in her meetings with the U.S. State Department. Kanimba was looking for ways to liberate her father, settled in San Antonio, who was lured onto a plane while abroad and kidnapped by Rwandan authorities. Rep. Himes says he would want the FBI to have Pegasus at its fingertips in case one of his own daughters were kidnapped. Even civil libertarians agree there should be exceptions for such “exigent” and emergency circumstances, in which even a warrant requirement should not slow down investigators.
The FBI can already track cellphones and the movements of their owners. If the FBI were to deploy Pegasus, however, it would give the bureau redundant and immense power to video record Americans in their private moments, as well as to record audio of their conversations. Rep. Himes is unfazed. When Farrow asks how Pegasus should be used domestically, Rep. Himes replies that we should “do the hard work of assessing that law enforcement uses it consistent with our civil liberties.” He also spoke of “guardrails” that might be needed for such technology. Such a guardrail, however, already exists. It is called the Fourth Amendment of the Constitution, which mandates the use of probable cause warrants before the government can surveil the American people. But even with probable cause, Pegasus is too robust a spy tool to trust the FBI to use domestically. The whole NSO-Pegasus saga is just one part of a much bigger story in which privacy has been eroded. Federal agencies, ranging from the FBI to the IRS and Homeland Security, purchase the most intimate and personal digital data of Americans from third-party data brokers, and review it without warrants. Congress is even poised to renege on a deal to narrow the definition of an “electronic communications service provider,” a move that would obligate any office complex, fitness facility, or house of worship that offers Wi-Fi connections to secretly turn over Americans’ communications without a warrant. The sad reality is that Surveilled only touches on one of many crises in the destruction of Americans’ privacy. Perhaps HBO should consider making this a series. They would never run out of material.
Catastrophic ‘Salt Typhoon’ Hack Shows Why a Backdoor to Encryption Would Be a Gift to China
11/25/2024
Former Sen. Patrick Leahy’s Prescient Warning
It is widely reported that the breach of U.S. telecom systems allowed China’s Salt Typhoon group of hackers to listen in on the conversations of senior national security officials and political figures, including Donald Trump and J.D. Vance during the recent presidential campaign. In fact, they may still be spying on senior U.S. officials. Sen. Mark Warner (D-VA), Chairman of the Senate Intelligence Committee, on Thursday said that China’s hack was “the worst telecom hack in our nation’s history – by far.” Warner, himself a former telecom executive, said that the hack across the systems of multiple internet service providers is ongoing, and that the “barn door is still wide open, or mostly open.” The only surprise, really, is that this was a surprise. When our government creates a pathway to spy on American citizens, that same pathway is sure to be exploited by foreign spies. The FBI believes the hackers entered the system that enables court-ordered taps on voice calls and texts of Americans suspected of a crime. These systems are put in place by internet service providers like AT&T, Verizon, and other telecoms to allow the government to search for evidence, a practice authorized by the 1994 Communications Assistance for Law Enforcement Act. Thus the system of domestic surveillance used by the FBI and law enforcement has been reverse-engineered by Chinese intelligence and turned back on our government. This point is brought home by FBI documents PPSA obtained from a Freedom of Information Act request, which reveal a prescient question put to FBI Director Christopher Wray by then-Sen. Patrick Leahy in 2018. The Vermont Democrat, now retired, anticipated the recent catastrophic breach of U.S. telecom systems. In his question to Director Wray, Sen. Leahy asked: “The FBI is reportedly renewing a push for legal authority to force decryption tools into smartphones and other devices.
I am concerned this sort of ‘exceptional access’ system would introduce inherent vulnerabilities and weaken security for everyone …” The New York Times reports that according to the FBI, the Salt Typhoon hack resulted from China’s theft of passwords used by law enforcement to enact court-ordered surveillance. But Sen. Leahy correctly identified the danger of creating such domestic surveillance systems and the next possible cause of an even more catastrophic breach. He argued that a backdoor to encrypted services would provide a point of entry that could eventually be used by foreign intelligence. The imperviousness of encryption was underscored by authorities, who believe that China was not able to listen in on conversations over WhatsApp and Signal, which encrypt consumers’ communications. While China’s hackers could intercept text messages between iPhones and Android phones, they could not intercept messages sent between iPhones over Apple’s iMessage system, which is also encrypted. Leahy asked another prescient question: “If we require U.S. technology companies to build ‘backdoors’ into their products, then what do you expect Apple to do when the Chinese government demands that Apple help unlock the iPhone of a peaceful political or religious dissident in China?” Sen. Leahy was right: Encryption works to keep people here and abroad safe from tyrants. We should heed his warning – carving a backdoor into encrypted communications creates a doorway anyone might walk through. When police send Emergency Data Requests (EDRs) to communications companies like Verizon or Google, they attest that a victim is in danger of serious bodily harm or death unless certain private information about a suspect can be produced. An EDR blows the doors off any requirement to attach a subpoena or court order with a judge’s signature to honor the requests. Companies usually produce the digital information of the targeted suspect with alacrity.
Now the FBI is warning that hackers are worming their way into law enforcement cyber-systems in the United States and around the world, using stolen police credentials to send fake EDRs to steal the private information of innocent people. The potential exists for cybercriminals to issue fake freeze orders on people’s financial accounts, and then follow up with a seizure of assets, diverting funds to a fake custodial wallet that appears to be government-owned. For $1,000 to $3,000, a cybercriminal named Pwnstar will sell buyers police credentials for EDRs in 25 countries, including the United States. “This is social engineering at its highest level and there will be failed attempts at times,” Pwnstar assures his customers on the dark web. He presents himself as a fair businessman, offering to give refunds in the minority of attempts that fail. Krebs on Security reports that Kodex, a company founded by a former FBI agent to identify fake EDRs, found that of 1,597 EDRs it has processed, 485 failed a second-level verification. This status quo puts communications companies in a bind. Krebs writes that “the receiving company finds itself caught between unsavory outcomes: Failing to immediately comply with an EDR – and potentially having someone’s blood on their hands – or possibly leaking a customer record to the wrong person.” What can be done? First, all law enforcement agencies in the United States need to tighten up their digital hygiene to the highest professional levels. An FBI factsheet offers a detailed list of specific security steps police should take, ranging from evaluating the reliability of vendors, to being on the lookout for images that appear doctored or pasted, to strong password protocols, to phishing-resistant multifactor authentication for all services. Finally, the FBI recommends that local law enforcement agencies establish and maintain strong liaison relationships with their local FBI field office. 
The FBI says it is ready to identify departments’ vulnerabilities and help them mitigate threats. The FBI investigation now underway must answer two questions about the racist text messages sent last week to the cellphones of African-Americans in at least 13 states. The first question, of course, is who is behind this? Was it a state actor – possibly Russia – seeking to drive distrust between Americans? Or was it the proverbial guy in his mom’s basement? The answer to the first question will guide us to a second important question. Given that the attack used the services of TextNow, a company that helps anonymous users send texts from a randomly generated phone number, is this attack something that anyone (like the guy in his basement) could do? Or did these texts require sophisticated knowledge backed by serious financial and technical resources to pull off? Somehow, this attack precisely targeted African-Americans. Many of the texts landed in the phones of students at historically Black colleges and universities. Did the attackers identify people from personal data purchased from third-party data brokers? Which company did the trolls purchase this data from? How elaborate were the digital profiles of the victims assembled from purchased data? Did these profiles include their financial status, sexual lives, health issues, and private business concerns? Congress and the American public must know the answers to these questions. This attack on the well-being and sense of personal safety of Americans must be understood and countered. But this text assault should also be taken as a warning of just how insecure our data is, and how refined future attacks might be. Could a hostile state, in the middle of a crisis, send an official-sounding alert to key military and government personnel that their house is on fire? Answering these questions will clarify how hostile governments, trolls, and even our own government might misuse our data. Vice presidential candidate J.D.
Vance (R-OH) told Joe Rogan over the weekend that backdoor access to U.S. telecoms likely allowed the Chinese to hack American broadband networks, compromising the data and privacy of millions of Americans and businesses. “The way that they hacked into our phones is they used the backdoor telecom infrastructure that had been developed in the wake of the Patriot Act,” Sen. Vance said on Rogan’s podcast. That law gave U.S. law enforcement and intelligence agencies access to the data and operations of telecoms that manage the backbone of the internet. Chris Jaikaran, a specialist in cybersecurity policy, added in a recently released Congressional Research Service report about a cyberattack from a group known as Salt Typhoon: “Public reporting suggests that the hackers may have targeted the systems used to provide court-approved access to communication systems used for investigations by law enforcement and intelligence agencies. PRC actors may have sought access to these systems and companies to gain access to presidential candidate communications. With that access, they could potentially retrieve unencrypted communication (e.g., voice calls and text messages).” Thus, the Chinese were able to use systems developed for U.S. law enforcement and intelligence agencies to see any U.S. national security order and, presumably, any government extraction of the intercepted communications of Americans and foreign targets under FISA Section 702. China doesn’t need a double agent in the style of Kim Philby. Our own Patriot Act, in effect, makes it easier for hostile regimes to find the keys to all of our digital kingdoms – including the private conversations of Vice President Kamala Harris and former President Donald Trump. As alarming as that is, it is hard to fully appreciate the dangers of such a penetration. The Chinese have so far chosen not to use their presence deep in U.S. systems to “go kinetic” by sabotaging our electrical grid and other primary systems.
The possible consequences of such deep hacking are highlighted in a joint U.S.-Israel advisory that details the actions against Israel that were enabled when an Iranian group, ASA, wormed its way into foreign hosting providers. ASA hackers allowed the manipulation of a dynamic digital display in Paris for the 2024 Summer Olympics to denounce Israel and the participation of Israeli athletes on the eve of the Games. ASA infiltrated surveillance cameras in Israel and Gaza, searching for weak spots in Israeli defenses. Worst of all, the hack enabled Hamas to contact the families of Israeli hostages in order to “cause additional psychological effects and inflict further trauma.” The lesson is that when our own government orders companies to develop backdoors into Americans’ communications, those doors can be swung open by malevolent state actors as well. Sen. Vance’s comments indicate that there is a growing awareness of the dangers of government surveillance – an insight that we hope increases Congressional support for surveillance reform when FISA Section 702 comes up for renewal in 2026.
Why Signal Refuses to Give Government Backdoor Access to Americans’ Encrypted Communications
11/4/2024
Signal is an instant messenger app operated by a non-profit to enable private conversations between users protected by end-to-end encryption. Governments hate that. From Australia, to Canada, to the EU, to the United States, democratic governments are exerting ever-greater pressure on companies like Telegram and Signal to give them backdoor entry into the private communications of their users. As it stands, these instant messaging companies don’t have access to users’ messages, chat lists, groups, contacts, stickers, profile names or avatars. If served with a probable cause warrant, these tech companies couldn’t respond if they wanted to. The Department of Justice under both Republican and Democratic administrations continues to press for backdoors to breach the privacy of these communications, citing the threats of terrorism and human trafficking as the reason. What could be wrong with that? In 2020, Martin Kaste of NPR told listeners that “as most computer scientists will tell you, when you build a secret way into an encrypted system for the good guys, it ends up getting hacked by the bad guys.” Kaste’s statement turned out to be prescient. AT&T, Verizon and other communications carriers complied with U.S. government requests and placed backdoors on their services. As a result, a Chinese hacking group with the moniker Salt Typhoon found a way to exploit these points of entry into America’s broadband networks. In September, U.S. intelligence revealed that China gained access through these backdoors to surveil the internet traffic and data of millions of Americans and U.S. businesses of all sizes. The consequences of this attack are still being evaluated, but it is already regarded as among the most catastrophic breaches in U.S. history. There are more than just purely practical reasons for supporting encryption.
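The practical case is easy to demonstrate. Below is a toy, classroom-sized sketch of the idea behind end-to-end encryption: two parties derive a shared key through a Diffie-Hellman exchange, so a carrier relaying their traffic sees only public values and ciphertext. This is strictly illustrative and not secure for real use; real messengers use vetted primitives (X25519, AES or ChaCha20, Signal's Double Ratchet protocol), not the hash-based keystream sketched here.

```python
# Toy end-to-end encryption sketch. NOT real cryptography: it illustrates only
# the principle that a carrier relaying the traffic holds nothing usable.
import hashlib
import secrets

# A well-known Diffie-Hellman prime (RFC 2409 Group 2) and generator.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16)
G = 2

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy stream cipher;
    the same call decrypts, since XOR is its own inverse)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Alice and Bob each pick a secret and publish only g^secret mod p.
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1
alice_pub, bob_pub = pow(G, a, P), pow(G, b, P)

# Both sides derive the same shared key; an eavesdropper on the wire
# sees only alice_pub, bob_pub, and the ciphertext.
alice_key = hashlib.sha256(str(pow(bob_pub, a, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_pub, b, P)).encode()).digest()
assert alice_key == bob_key

ciphertext = keystream_encrypt(alice_key, b"meet at the usual place")
assert keystream_encrypt(bob_key, ciphertext) == b"meet at the usual place"
```

A backdoor, by contrast, means a third copy of the key material exists somewhere, and whoever steals that copy can decrypt everything.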
Meredith Whittaker, president of Signal, delves into the deeper philosophical issues of what society would be like if there were no private communications at all in a talk with Robert Safian, former editor-in-chief of Fast Company. “For hundreds of thousands of years of human history, the norm for communicating with each other, with the people we loved, with the people we dealt with, with our world, was privacy,” Whittaker told Safian in a podcast. “We walk down the street, we’re having a conversation. We don’t assume that’s going into some database owned by a company in Mountain View.” Today, moreover, the company in Mountain View transfers the data to a data broker, who then sells it – including your search history, communications and other private information – to about a dozen federal agencies that can hold and access your information without a warrant. When it comes to our expectations of privacy, we are like the proverbial frogs being boiled by degrees. Whittaker says that this is a “trend that really has crept up in the last 20, 30 years without, I believe, clear social consent that a handful of private companies somehow have access to more intimate data and dossiers about all of us than has ever existed in human history.” Whittaker says that Signal is “rebuilding the stack to show” that the internet doesn’t have to operate this way. She concludes we don’t have to “demonize private activity while valorizing centralized surveillance in a way that’s often not critical.” We’re glad that a few stalwart tech companies, from Apple and its iPhone to Signal, refuse to cave on encryption. And we hope there are more, not fewer, such companies in the near future that refuse to expose their customers to hackers and government snooping. 
“We don’t want to be a single pine tree in the desert,” Whittaker says, adding she wants to “rewild that desert so a lot of pine trees can grow.” The Project for Privacy and Surveillance Accountability recently submitted a series of FOIA requests to law enforcement and intelligence agencies seeking critical information on how the agencies handle data obtained through the use of cell-site simulators, also known as Stingrays or Dirtboxes, which impersonate cell towers and collect sensitive data from wireless devices. Specifically, PPSA submitted requests to DOJ, CIA, DHS, NSA, and ODNI. These requests focus on what happens after the government collects this data. As PPSA’s requests state, PPSA “seeks information on how, once the agency obtains information or data from a cell-site simulator, the information obtained is used.” We are particularly interested in learning about the agencies’ policies for data retention, usage, and deletion, especially for data collected from individuals who are not the target of surveillance. PPSA has long been concerned with the invasive nature of these surveillance tools, which capture not only targeted individuals’ data but also data from anyone nearby. As we stated in a 2021 FOIA request, “this technology gives the government the ability to conduct sweeping dragnets of metadata, location, and even text messages from anyone within a geofenced area.”
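Why bystander data matters is easy to see in miniature. The sketch below, using entirely hypothetical capture records, filters a simulator's log against a geofence: even a tight 200-meter fence around a target sweeps in anyone standing nearby, which is exactly the data whose retention and deletion policies our requests probe.

```python
# Illustrative sketch with invented data: a cell-site simulator records every
# handset in range, and a geofence filter still retains nearby bystanders.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical capture log: (device_id, lat, lon).
capture = [
    ("TARGET-PHONE", 38.8895, -77.0353),
    ("BYSTANDER-1", 38.8897, -77.0350),   # a few dozen meters away
    ("BYSTANDER-2", 38.9072, -77.0369),   # about two kilometers away
]

fence_lat, fence_lon, radius_m = 38.8895, -77.0353, 200

in_fence = [dev for dev, lat, lon in capture
            if haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m]
# The target is caught -- and so is the innocent bystander beside them.
```

The policy question PPSA's requests press is what happens to the "BYSTANDER" rows after the investigation closes.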
As our requests state, “PPSA wishes to know what policies govern such use and what policies, if any, are in place to protect the civil liberties and privacy of those whose data might happen to get swept up in a cell-site simulator’s data collection activities.” As we have previously highlighted, Stingrays represent a significant intrusion into personal privacy, and we are committed to holding the government accountable for its use of such tools. By pursuing these requests, we aim to inform the public about the scope and potential risks of the agencies’ surveillance activities, and to push for greater safeguards over Americans’ private information. PPSA will continue to push for transparency, and we will keep the public informed of our efforts. An important analysis from Real Clear Investigations probes the extent to which censorship abroad threatens the First Amendment here at home. Writer Ben Weingarten asks whether foreign nations’ demands that American media companies operating abroad comply with their often far more censorial legal requirements will lead, in turn, to more censorship here at home. The preponderance of the evidence suggests bad news for fans of the First Amendment. Weingarten points specifically to the European Union’s Digital Services Act, which imposes content moderation standards that far exceed what would be considered constitutional in the United States.
For example, companies doing business in the EU must combat “illegal content online,” which includes disfavored rhetoric like “illegal hate speech.” Writes Weingarten: “Platforms also must take ‘risk-based action,’ including undergoing independent audits to combat ‘disinformation or election manipulation’ – with the expectation those measures should be taken in consultation with ‘independent experts and civil society organisations.’ The Commission says these measures are aimed at mitigating ‘systemic issues such as … hoaxes and manipulation during pandemics, harms to vulnerable groups and other emerging societal harms’ driven by ‘harmful’ but not illegal content.” What’s more, investigations pursuant to the DSA can result in fines of up to 6 percent of annual global revenue, a potential outcome likely to give companies like X and Facebook pause when considering whether to comply with the invasive oversight of European bureaucrats and NGOs serving as arbiters of the appropriate. Then there’s the question of whether social media companies that agree to the EU’s demands will run parallel services – for example, a DSA-compliant version of X and another that is consistent with the requirements of the First Amendment. Elon Musk seemed willing to abandon Brazil after that country banned X for failing to de-platform accounts ordered removed by its courts. (Though Musk’s company is now very much back in business there.) But the EU is a much bigger market with a lot more monetizable users. As Weingarten documents, the punishment of media companies abroad for speech that is well within the bounds of the First Amendment is a growing trend – not just in the EU but also in countries like the UK and Australia. And Weingarten reserves no small amount of criticism for the Biden Administration’s silence – and even capitulation – in the face of such foreign censorship.
Bills like the No Censors on our Shores Act, which could “punish foreign individuals and entities that promote or engage in the censorship of American speech,” offer one potential solution to foreign censorship creep. So do articles like Weingarten’s, which provide a much-needed diagnosis of our speech-related ailings and failings. A whitepaper from social media company Meta presents a startling new reality in bland language. It claims that magnetoencephalography (MEG) neural imaging technology “can be used to decipher, with millisecond precision, the rise of complex representations generated in the brain.”
In layman’s terms, AI can crunch a person’s brainwaves and apply an image generator to create an astonishingly accurate representation of what that person has seen. Paul Simon was right: these really are the days of miracles and wonders – and also of new threats to personal privacy. (If you want to see this science-fictional-sounding technology in action, check out these images from science.org to see how close AI is to representing images extrapolated from brain waves.) Until now, even in a total surveillance state such as North Korea or China, netizens could have their faces, movements, emails, online searches and other external attributes recorded throughout the day. But at least they could take comfort that any unapproved thoughts about the Dear Leader and his regime were theirs and theirs alone. That is still true. But the robustness of this new technology indicates that the day when brain data can fully read minds is not far off. Researchers in China in 2022 announced technology to measure a person’s loyalty to the Chinese Communist Party. A number of non-invasive brain-wave-reading helmets are on the U.S. market for wellness, education, and entertainment. Members of the California State Assembly and Senate were sufficiently alarmed by these developments to follow the example of Colorado and regulate this technology. The new law amends the California Consumer Privacy Act to include “neural data” under the protected category of “personal sensitive information.” On Saturday, Gov. Gavin Newsom signed that bill into law. Under this new law, California citizens can now request, delete, correct, and limit the neural data that big tech companies collect on them. We know what you’re thinking: would I be sufficiently concerned about my privacy to register with a state-mandated database to make changes to my privacy profile? Actually, that was just our best guess about what you’re thinking. But give it a few years.
The Emerging “Silicon Curtain”
We’ve long warned that our cars are deceptive. They feel like they offer us private spaces. Yet when we get behind the wheel, we are actually settling inside a comprehensive recording and tracking device. The modern car is connected to the larger world by Bluetooth and Wi-Fi, satellites and cell service, all fed by a web of sensors that surround us.
As a result, our cars can log where we go, what we listen to, the calls we make, and even how much our weight fluctuates over time. If cameras and microphones are installed in our cars to check for inebriation, as some propose, the car will fully surpass even the smartphone as an all-round surveillance device. It is for this reason that PPSA applauds the Biden administration and Commerce Secretary Gina Raimondo for issuing a ban on Chinese-developed software in internet-connected cars, trucks, and buses sold in the United States. The Biden administration – in an action sure to be upheld by either Kamala Harris or Donald Trump – is taking a rational step in response to the discovery of Volt Typhoon. This is an organization of Chinese hackers who conducted a covert campaign to embed malicious code throughout U.S. infrastructure. With a few keystrokes, China had installed the means to contaminate U.S. drinking water, cut off oil and gas pipelines, turn off our electricity, close hospitals, and halt rail and civilian aviation. The administration was wise to include Russia in the ban. It also reserves the right to extend the ban to other countries with regimes that express malevolent intentions toward the United States. “This is not about trade or economic advantage,” Secretary Raimondo said. “This is strictly a national security action.” The Commerce Secretary told reporters that connected vehicles could spy on drivers’ movements and where their children go to school, be shut down to create traffic jams, or even be crashed to kill their occupants. The risks of both surveillance and mayhem are too dire for any responsible leader to ignore.
China Daily warns the United States of the dangers to global trade if we fail to “shed China paranoia.” But we cannot forget the words of Catch-22 author Joseph Heller: “Just because you’re paranoid doesn’t mean they aren’t after you.” Before last week it might have seemed paranoid to the leaders of Hezbollah that their pagers and walkie-talkies could assassinate them. Suddenly, cars being remotely instructed to drive head-on into each other doesn’t sound so far-fetched. It certainly isn’t paranoid. Former U.S. Rep. Mike Gallagher, in a thoughtful piece in The Wall Street Journal, looks to the larger threat environment, from Chinese surveillance embedded in cranes in ports, to systems that control cargo ships in Europe. Gallagher writes: “Anyone with control over a portion of the technology stack such as semiconductors, cellular modules, or hardware devices, can use it to snoop, incapacitate, or kill.” Gallagher calls for the development of an “interoperable free-world technological industrial base” that would “make the free-world’s technology stack more attractive than the totalitarian alternative, drawing more countries to our side of the emerging Silicon Curtain.” The bifurcation of global trade in technology by a Silicon Curtain is a somber new reality. The results will include endless hassle, higher costs, and reduced innovation. The alternative is worse – knowing that any day you could get inside your car only to find out it is taking you somewhere you don’t want to go. The FBI and FCC are warning Americans about “smishing” scams, a portmanteau of “phishing” and “SMS,” the “short messaging service” texts we all receive on our phones.
One scam unfolding across the United States alerts consumers that they owe money to Quick Pass or other wireless toll collectors. The FBI offers a sample message: “We’ve noticed an outstanding toll amount of $12.51 on your record. To avoid a late fee of $50.00, visit https://myturnpiketollservice.com to settle your balance.” Why would any scammer go to the trouble of collecting such trivial amounts? Of course, on a mass basis, these little scams could add up to millions of dollars raked in by criminals. But the small amount also points to the likelier possibility that the real purpose of this scam is to prompt you to click the link, infecting your phone or device with malware so criminals can exploit your personal information and financial accounts. Now we’re talking real money. If you receive one of these texts, the FBI Internet Crime Complaint Center (IC3) recommends filing a complaint, checking your account through the toll service’s legitimate website, and deleting the text.
We need to train ourselves never to react to online requests impulsively. Stand back and take a breath, resisting the temptation of what appears to be a quick and easy way to make an annoyance disappear. Always go to the validated website of your service provider to check your account. Scammers are endlessly inventive and willing to dedicate time and research to “social engineering” in order to appear legitimate. If your sister is in trouble and needs cash immediately, make a phone call to your sister, as antediluvian as that sounds. Never click a link in an untrusted message, and always subject any request for money to scrutiny. Customs and Border Protection (CBP) has little respect for the Fourth Amendment. From international airports to border stations, Americans returning from abroad often fall prey to the routine CBP practice of scanning their laptops, mobile phones, and other digital devices without a warrant.
As if that were not enough, CBP also probes people’s faith, violating their First Amendment rights as well. Consider the case of Hassan Shibly, a U.S. citizen and student at the University at Buffalo Law School. When he returned to the United States in 2010 from a religious pilgrimage and family visit in the Middle East with his wife, a lawful permanent resident, and their seven-month-old son, Shibly was taken aside by CBP agents. A CBP officer asked him: “Do you visit any Islamist extremist websites?” And: “Are you part of any Islamic tribes?” And then the kicker: “How many gods or prophets do you believe in?” Other returning Muslim-Americans are interrogated about the mosques they attend, their religious beliefs, and their opinions about the U.S. invasion of Iraq and support for Israel. One New Jerseyan, Lawrence Ho, attended a conference in Canada and returned to the United States by car. He was asked: “When did you convert?” Ho does not know how the agent knew he had converted to Islam. A group of Muslim-Americans, fed up with this treatment, are now being represented by the American Civil Liberties Union in a suit before the Ninth Circuit Court of Appeals against CBP for civil rights violations. The plaintiffs are correct that subjecting Americans to deep questions about their faith – as a condition of reentry to their home – violates their First Amendment rights, as well as the Religious Freedom Restoration Act (RFRA). Ashley Gorski, senior staff attorney with the ACLU’s National Security Project, said that “this religious questioning is demeaning, intrusive, and unconstitutional. We’re fighting for our clients’ rights to be treated equally and to practice their faith without undue government scrutiny.” To be fair, CBP has its work cut out for it when it comes to screening the border for potential terrorists. And we should not avert our eyes from the fact that there are sick and dangerous ideologies at work around the world. 
But we are also fairly confident that actual terrorists would not be stumped by the kind of naïve and unlawful interrogations CBP has imposed on these returning Americans. Heavy-handed questions about adherence to one of the great world religions do not seem to be a useful security strategy or a demonstration that our government is familiar with its own Constitution. The FBI, which surveilled academics at the University of California, Berkeley, in the 1950s and 1960s, is now reaching out to a think tank on that campus for help in devising ways to break encryption and other privacy measures used by consumers and private social media companies.
In this task, the FBI is seeking advice from the Center for Security in Politics, founded by former Arizona governor and Homeland Security Secretary Janet Napolitano, to devise ways to access the contents of communications from apps and platforms. “We need to work with our private-sector partners to have a lawful-access solution for our garden-variety cases,” one FBI official at the event told ABC News. The FBI’s actions are in keeping with a growing global crackdown on encryption, highlighted by the recent arrest of Telegram founder Pavel Durov in France. We could take days trying to unravel this Gordian knot of ironies. Better to just quote Judge James C. Ho of the Fifth Circuit Court of Appeals, who wrote in a recent landmark opinion on geofence warrants that “hamstringing the government is the whole point of our Constitution.” In finding geofencing the data of large numbers of innocent people unconstitutional, Judge Ho noted that “our decision today is not costless. But our rights are priceless.” The FBI has a lot of tools to catch the drug dealer, the pornographer, and the sex trafficker. After all, the Bureau has been doing that for decades. The best mission for the partnership between the FBI and the Center for Security in Politics would be to focus on the “lawful-access” part of their quest. With so many smart people in the room, surely they can invent new and effective ways to solve many crimes while honoring the Fourth Amendment.