The federal government is moving quickly to operationalize its plan to build a massive central storehouse containing extensive data on every American citizen. As we reported previously, it’s part of an executive order issued in March, “Stopping Waste, Fraud and Abuse by Eliminating Information Silos.” Now, as NPR reports, part of that executive order includes requiring states to allow “unfettered access” to data from state programs that receive federal funds. This unprecedented surveillance expansion is flimsily premised on notions of “efficiency” (who’s not for that?), of eliminating “bureaucratic duplication” (overlooking that information silos exist between, say, the databases of the IRS and the FBI precisely to keep government from abusing our personal information), and of vague accusations of “fraud” proffered without evidence (which does nothing to address actual fraud such as the Russian-backed $10 billion Medicare scheme U.S. prosecutors announced last week). And so now the federal government wants to harvest the private data of Americans held in the trust of the 50 states. This is a power grab designed to facilitate the careful, silent, and utterly unconstitutional surveillance of every American, and especially those who don’t play by whatever rules the future database’s administrators arbitrarily write. Yet the data grab has already begun. According to NPR:
These requests are likely in direct violation of at least one federal law, the Privacy Act of 1974 (which is why many of these moves are on pause while being challenged in the courts). The Privacy Act requires government to let the public know how it intends to use and safeguard personal data before any of it is even collected, and, once collected, to use it for nothing other than the stated purposes. Besides tossing out memes like efficiency, the central excuse the government makes for the expansion of surveillance is “national security.” Nicole Schneidman of Protect Democracy put it bluntly: “It is critical for every American to understand there is no 'undo' button here.” Whether you are MAGA, traditional conservative, liberal, progressive, or libertarian, you have skin in this game. As Geena Davis said in the horror classic The Fly, “Be afraid, be very afraid …”

We frequently report on the dangers of general surveillance in the hands of government actors willing to disregard quaint notions of privacy and civil liberties. Now comes a sobering reminder that bad actors can use the global surveillance economy to track down people in order to kill them. According to The Guardian, that’s exactly what the Sinaloa drug cartel did in 2018, as detailed in a new Justice Department report. “El Chapo” Guzmán was extradited to the United States in 2017. As payback, a hacker working for El Chapo’s drug cartel subsequently accessed the phone of an FBI assistant legal attaché at the U.S. Embassy in Mexico City. The cartel, reports Reuters, used the phone number to obtain records of calls in and out as well as geolocation data. Next, the cartel got into Mexico City’s extensive camera system to track the FBI official and identify everyone who met with him. Some were intimidated and threatened. Others were murdered. Out of the 10 largest metropolises in the world, Mexico City ranks seventh in CCTV cameras per capita – roughly six cameras per 1,000 residents.
Yet there, as elsewhere, it has had little impact on the crime index. In the meantime, savvy criminals working for shadowy organizations can use it for criminal surveillance. The Justice Department report says that the FBI has a strategic plan in the works to help mitigate such vulnerabilities. Such efforts are well-intentioned, to be sure, but likely to be one-sided. Playing defense against hackers with access to every tool they need is not a long-term solution. Let’s start by putting stronger guardrails on camera networks in every country that uses them. The warning is clear: when governments create systematic surveillance networks, they can end up enabling the very crimes they seek to prevent.

Perhaps it was always the height of naïveté to assume that forty years of research into “computer vision” – the field of artificial intelligence that allows computers to interpret images – would only make the world a better place. Now a landmark study published in Nature reports:
Let’s be clear about what that means.
The field of computer vision research, says ScienceBlog, has become “a vast network that transforms academic insights into tools for watching, profiling, and controlling human behavior.”
The Nature study also discovered that, to help normalize their voyeurism, researchers in the computer vision field had adopted some clever linguistic tactics, including referring to humans and our body parts as “objects” – a kind of dehumanizing rhetoric that brings to mind the worst of the previous century. The obfuscating language, notes Sadowski, has the added bonus of abstracting “any potential issues related to critical inquiry, ethical responsibility, or political controversy.” Such studies underscore a pressing reality, namely that we are rushing full tilt into uncharted territory without brakes and without any guardrails on the road. And in some cases, without any roads at all or even any maps. Technology, and especially AI, doesn’t have to exist in the absence of privacy and accountability. With that goal in mind, one of the leaders of the Nature study, Dr. Abeba Birhane, recently established the AI Accountability Lab at Trinity College Dublin. She represents the best of us in the struggle against an unrestrained surveillance state. The whole site is worth a deep dive, but we’ll leave you with this excerpt:
AI Accountability Lab reminds us that any new powerful technology inevitably has political and personal consequences. When technology drives the times, philosophy and prescience are needed as never before. Congress and the states need to enact – use whatever metaphor you like – legal maps, guardrails, brakes. In the end, they all represent the same idea – a return to the privacy guaranteed to all American citizens in the Bill of Rights.

Former federal prosecutor Katie Haun reminds us of one of our “favorite” topics – the ill-conceived Bank Secrecy Act (BSA) of 1970, which has brought us to this sorry point of a near-lack of privacy in our financial dealings. She writes: “Most Americans don’t realize they live under an expansive surveillance regime that likely violates their constitutional rights. Every purchase, deposit, and transaction, from the smallest Venmo payment for a coffee to a large hospital bill, creates a data point in a system that watches you – even if you’ve done nothing wrong.” The U.S. Supreme Court’s disastrous 1976 ruling in United States v. Miller upheld the BSA and further declared that Americans had “no legitimate expectation of privacy” in their checks and deposit slips. From that point on, Haun laments, it was open season on Americans’ data, but especially our financial information. The BSA turned banks into spies, and its reporting requirements have been weaponized into instruments of political repression. Little wonder: in the digital era, and especially after 9/11, the original act was amended to the point of readily enabling the creation of a mass surveillance state and its attendant bureaucracy.
Under the BSA, notes Haun, law enforcement doesn’t need a search warrant to access our financial records: “A prosecutor can ‘cut a subpoena’ – demanding all your bank records for the past 10 years – with no judicial oversight or limitation on scope … In contrast, a proper search warrant must be narrowly tailored, with probable cause and judicial authorization.” That last bit is a reference to the basic American right against unlawful search and seizure as guaranteed by the Fourth Amendment. But ever since the Bank Secrecy Act laid the foundation and the Supreme Court put up the walls, true financial privacy is a right Americans no longer possess. And without reforming the BSA, warns Haun, we may never get it back: “Indiscriminate financial surveillance such as what we have today is fundamentally at odds with the Fourth Amendment in the digital age. Technological innovations over the past several decades have brought incredible convenience to economic life. Now our privacy standards must catch up. With Congress considering landmark legislation on digital assets, it’s an important moment to consider what kind of financial system we want – not just in terms of efficiency and access, but in terms of freedom. Rather than striking down the BSA in its entirety, policymakers should narrow its reach, particularly around the bulk collection and warrantless sharing of Americans’ financial data. Financial surveillance shouldn’t be the price of participation in modern life.” We couldn’t agree more. And we hope that savvy BSA reform legislation proposed this session will find fertile ground in Congress among Republicans and Democrats alike. Unfortunately, in the meantime, the Fourth Amendment remains for sale. For further reading, check out this piece from the Pacific Legal Foundation.

Malware continues to evolve more quickly than Android operating systems. Lifehacker says the original 2021 version of the Godfather banking trojan is back and more prolific than ever.
We’ll spare you the technical details; just know that the previous version could draw a fake screen on top of banking and crypto apps to mimic them. Unsuspecting users would then enter their credentials assuming it was business as usual. The Godfather targeted hundreds of financial apps around the world. The new iteration of Godfather creates a complete virtual environment on phones and then makes copies of financial apps to run there. When users open one of their real apps, they are invisibly redirected to the virtual environment, where everything they do is captured and harvested. The malware can even control those apps remotely, initiating transfers and payments while users go about their day. And because everything is hidden in a virtual environment, on-device security measures are likely to miss it.

Fortunately for most of our readers, the Godfather is presently focused on financial apps in a few European countries. But if it’s anything like the last version, that could soon grow to a dozen nations. Given that some estimates put Android’s smartphone market share at 72 percent worldwide, it’s just a matter of time until the new Godfather finds its way to app-loving Americans. In the meantime, say experts, Android users should make sure Google Play Protect is enabled and that every app is kept up to date via the Google Play Store. Out-of-date apps are dangerous apps in any operating system. Finally, depending on the specific version of Android OS, there’s some variation of a setting called “Install unknown apps” – which most users probably don’t even realize is there. Review that list and make sure no apps, especially browsers, have permission to install unknown apps. We’re in an all-out footrace against the fraudsters. Knowing that you are in this race is the first necessary step to protecting your hard-earned savings. Then keep up with your security measures to keep private information private.
Or, as Vito Corleone says in the original Godfather, “Never tell anyone outside the family what you’re thinking again.”

Data journalist Jamie Ballard reported on a recent YouGov poll entitled “Privacy and Government Surveillance.” The conceptual divide among respondents appears to be whether someone is regarded as a public or a private figure. Clear majorities condoned monitoring the online activities of the following groups, who are either public by nature or of public concern:
To be clear, the survey question didn’t ask if government workers and politicians should be surveilled because they are under sufficient suspicion of a crime to justify a probable cause warrant. Nope, they are spy-worthy simply by virtue of being public figures. Those polled do believe that private citizens should be afforded more protection, with majorities agreeing that an ongoing criminal investigation is required in order to justify monitoring someone’s digital activity. Still, what made us do a spit take is why so many people deem it acceptable for federal spy agencies to surveil the online activities of a president, or Members of Congress, or governors, at will. This seems at odds with another finding in the same poll, that 71 percent of Americans are concerned that surveillance powers could be used by the U.S. government to target political opponents or suppress dissent. So what’s going on? It may partially reflect widespread disillusionment with leaders in Washington, D.C. and many states. But that doesn’t come close to explaining the reasons behind this response. PPSA and others not only oppose warrantless surveillance of politicians, we advocate for enhanced guardrails when it comes to legal surveillance of political candidates and elected officials. We believe those protections should be extended to journalists as well. This is not because politicians and journalists are special people with special rights, by any means. The reason is more profound than that. When a politician or a journalist is targeted, that act necessarily involves the political and speech rights of the many Americans who voted for that officeholder or who follow that journalist. Monitoring of the online activity of politicians and journalists is an attack on a free political system itself. 
Such were the grievous wrongs when the FBI investigated Donald Trump in 2016 on allegations the Bureau itself knew were disproven, and when the executive branch secretly pulled communications of Members of Congress and aides of both parties in 2017. Republicans and Democrats both had reason to be alarmed. Our intelligence agencies have a history of secretly overseeing their overseers. Perhaps this one result in the YouGov poll is just an outlier. But it merits our attention. Americans need to appreciate that underhanded surveillance of politicians is actually an attack on them. Civil libertarians clearly have a lot of work to do in the realm of public education.

If you wanted to build a mass surveillance program capable of monitoring 800 million people, where would you start? Ars Technica’s Ashley Belanger found the answer: You order OpenAI to indefinitely maintain all of ChatGPT’s regular customer chat logs, upending the company’s solemn promise of confidentiality for customers’ prompts and chats. Which is what Ona Wang, U.S. Magistrate Judge for the Southern District of New York, did on May 13. From that date forward, OpenAI has had to keep everything – even users’ deleted chats. All of the rest is now stored “just in case” it’s needed someday. We asked ChatGPT about this, and it told us:
So our lives – health, financial, and professional secrets – are now being stored in AI chats that Judge Wang thinks should be kept on file for any warrant or subpoena, not to mention any Russian or Chinese hacker. Not included in the judge’s order are ChatGPT Enterprise (used by businesses) and Edu data (used by universities). Problem: Many businesses and students use regular ChatGPT without being Enterprise subscribers, including entrepreneur Jason Bramble. He asked the judge to consider the impact of her ruling on – well, you name it – his company’s proprietary workflows, confidential information, trade secrets, competitive strategies, intellectual property, client data, patent applications, trademark requests, source code, and more.
As for the underlying case giving rise to all of this overreach, it involves a copyright infringement lawsuit between OpenAI and the New York Times. It’s a big case, to be sure, but no one saw this coming except for Jason Bramble and one other ChatGPT user, Aidan Hunt. Hunt had learned about the judge’s order from a Reddit forum and decided it was worth fighting on principle. In his motion, he asked the court to vacate the order or at least modify it to exclude highly personal/private content. He politely suggested that Judge Wang was overstepping her bounds because the case “involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate to institute a nationwide mass surveillance program by means of a discovery order in a civil case.” Judge Wang’s response was petulant. She noted that Hunt mistakenly used incident when he meant incidental. And then she casually torpedoed two hundred years of judicial review by denying his request with this line: “The judiciary is not a law enforcement agency.” Because, after all, when have judicial decisions ever had executive branch consequences? Judge Wang had denied business owner Jason Bramble’s earlier request on the grounds that he hadn’t hired a lawyer to draft the filing. The magistrate is swatting at flies while asking ChatGPT users to swallow the herd of camels she’s unleashed. Even if a properly narrowed legal hold to preserve evidence relevant to The New York Times’ copyright infringement claim would be appropriate, the judge massively overstepped in ordering ChatGPT to preserve global chat histories. The complaints of Bramble and Hunt, as well as similar pleadings from OpenAI, aim true: The court’s uninformed, over-reaching perspectives ignore the pressing realities of pervasive surveillance of those who accepted the promise that their conversations with ChatGPT were truly private. 
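Anonymization is often floated as the fix for retained chat logs, but stripping names rarely works in practice: “anonymized” records can be re-identified by joining them against auxiliary data on quasi-identifiers such as location and employer. A minimal sketch of the technique, using entirely invented records:

```python
# Toy re-identification: join "anonymized" chat records against an
# auxiliary dataset on quasi-identifiers (city + employer).
# All data here is made up for illustration.

# "Anonymized" chat logs: user IDs removed, but metadata retained.
anonymized_chats = [
    {"city": "Peoria", "employer": "Acme Corp", "topic": "divorce lawyer"},
    {"city": "Dayton", "employer": "Globex", "topic": "tax question"},
]

# Public auxiliary data (e.g., scraped social-media profiles).
aux_profiles = [
    {"name": "Alice Smith", "city": "Peoria", "employer": "Acme Corp"},
    {"name": "Bob Jones", "city": "Dayton", "employer": "Globex"},
    {"name": "Carol Wu", "city": "Dayton", "employer": "Initech"},
]

def reidentify(chats, profiles):
    """Link each chat to any profile that uniquely matches its metadata."""
    hits = []
    for chat in chats:
        matches = [p["name"] for p in profiles
                   if p["city"] == chat["city"]
                   and p["employer"] == chat["employer"]]
        if len(matches) == 1:  # a unique match is a re-identification
            hits.append((matches[0], chat["topic"]))
    return hits

print(reidentify(anonymized_chats, aux_profiles))
# → [('Alice Smith', 'divorce lawyer'), ('Bob Jones', 'tax question')]
```

The more distinctive the metadata – and the more personal the conversation – the more often that unique match exists, which is exactly why "anonymized" retention offers so little protection.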
Judge Wang wondered Hamlet-like whether the data could be anonymized in order to protect users’ privacy. As we’ve written before, and as is now commonly understood, governments and hackers have the power to deanonymize anonymized data. As MSN points out, the more personal a conversation is, the easier it becomes to identify the user behind it. In declaring that her order is merely about preservation rather than disclosure, Judge Wang is naively declaring “privacy in our time.” As in 1938, we stand at the edge of a gathering storm – this time, not a storm of steel, but of data. What can you do? At the least, you can start minding your Ps and Qs – your prompts and questions. And take to heart that “delete” doesn’t mean what it used to, either. Here’s a chronology of Ashley Belanger’s detailed reporting on this story for Ars Technica: June 4, June 6, June 23, and June 25.

J. Edgar Hoover’s FBI was famously obsessed with Dr. Martin Luther King, Jr., convinced King was a communist pawn (largely due to his association with left-wing civil rights activist Stanley Levison). The irony, of course, is that if Moscow actually was pinning its hopes of dividing America on support for Dr. King, then any backdoor support for King’s cause from communists ultimately amounted to one of the greatest own goals in history. Dr. King’s approach, which married Christian love to hardnosed political tactics, might well have prevented a race war. That approach certainly helped to transform the heart of America. Then, as now, law enforcement overreach was premised on “national security.” But the motivation behind the FBI’s surveillance of King soon revealed itself to be a character assassination campaign centered on his sex life – salacious, personal, harassing, and utterly invasive. Hoover made sure that King’s personal foibles were sent to Lyndon Johnson’s White House, Members of Congress, the AP, UPI, and Coretta Scott King herself.
Hoover even held a press conference denouncing King as “the most notorious liar in the country.” And yet few deemed the information newsworthy. Americans instinctively realized that one’s private life is exactly that, undeserving of the indignity of unauthorized surveillance and the terror of state-sanctioned moral harassment. Now those files, sequestered for nearly half a century, are under review to be released as part of Trump’s Executive Order 14176 on the 1960s trinity of assassinations, those of the Kennedys and King. So here’s the truth about releasing the contents of King’s classified files: Nobody wins. What is of real value is not what the FBI learned, but exactly how and why the FBI invaded King’s privacy. Consider what we already know: Hoover’s appalling and thoroughly discredited COINTELPRO program included policing morality. To quote the Church Committee’s 1976 report, the program aimed not only to protect national security, but to maintain the “existing social and political order.” Nowhere in the U.S. Code will you find Congress tasking the FBI with upholding its idea of what society should look like. “No holds were barred,” lamented COINTELPRO chief William Sullivan in his posthumous memoir. He recalled that Hoover’s team saw King as a demagogue and “the most dangerous Negro of the future in this nation” after his history-making speech on the National Mall.
We don’t need to know what Dr. King did in his private life. We need to know what the FBI did, under what legal guidance it acted (assuming there was any), why it happened, and what could have prevented it. The FBI must never again engage in this kind of politically motivated violation of an American’s privacy. In an era in which surveilling Americans is easier than ever, the FBI’s misbehavior is the one part of the King story that still must be told.

There you are in an overstuffed chair at your favorite coffee shop, sipping a vanilla sweet cream cold brew and working on that top secret professional project. But you know your laptop is vulnerable to snoopers through local Wi-Fi, so you “airgap” it – cut it off from networks. This everyday form of airgapping means keeping your laptop unplugged from any physical internet (ethernet) line. You would also disable all but the most basic programs, and turn off your Wi-Fi and Bluetooth. You might also want to arrive with plenty of juice to keep your laptop charged, given that some public USB ports used for charging have been known to be converted into data extractors – “juice jacking.” (The TSA and the FBI warn that this is common at airports.) Are you safe? Probably. But now we know that a person with a smartwatch seated several tables away might still be able to extract some of your data – by pulling it out of the air. All because you forgot to disable your laptop’s audio systems. This is the finding of Ben-Gurion University researcher Mordechai Guri, who has made a career of finding exploitable weaknesses in computer networks of all kinds. He excels in identifying ways to break into standalone systems, long considered the gold standard in cyber security because they’re not connected to the outside world. Where the rest of us see only air, Dr.
Guri observes an invisible world of electromagnetism, optics, vibration, sound, and temperature – all of them potential channels for covertly stealing and transmitting our data. Now he’s suggesting that the humble smartwatch can take advantage of sound waves to defeat airgapped systems. But just as no man is an island, no computer is completely, truly airgapped. Dr. Guri writes: “While smartphones have been extensively studied in the context of ultrasonic covert communication, smartwatches remain largely unexplored. Given their widespread adoption and constant proximity to users, smartwatches present a unique opportunity for covert data exfiltration.” It isn’t easily done, to be sure, but it’s doable. Here’s what Dr. Guri describes:
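The gist of the attack is ultrasonic frequency-shift keying: malware on the airgapped machine plays one near-inaudible tone for a 0 bit and a slightly higher one for a 1, and the watch’s microphone demodulates the stream. Here is a toy sketch of that modulation idea in pure Python – our illustration, with made-up parameters, not Dr. Guri’s code:

```python
import math

SAMPLE_RATE = 48_000             # samples per second (typical audio hardware)
BIT_SECONDS = 0.05               # duration of one bit
FREQ_0, FREQ_1 = 18_500, 19_500  # near-ultrasonic tones for 0 and 1 (illustrative)

def encode(bits):
    """Render a bit sequence as audio samples: one pure tone per bit."""
    samples = []
    n = int(SAMPLE_RATE * BIT_SECONDS)
    for bit in bits:
        freq = FREQ_1 if bit else FREQ_0
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

def goertzel_power(window, freq):
    """Signal power at one target frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s1 = s2 = 0.0
    for x in window:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode(samples):
    """Recover bits by comparing power at the two tone frequencies."""
    n = int(SAMPLE_RATE * BIT_SECONDS)
    bits = []
    for start in range(0, len(samples), n):
        window = samples[start:start + n]
        bits.append(1 if goertzel_power(window, FREQ_1)
                         > goertzel_power(window, FREQ_0) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(message)) == message  # clean-channel round trip
```

A real attack must also contend with speaker and microphone frequency response, ambient noise, and synchronization – exactly the engineering Dr. Guri’s research addresses – but the sketch shows why “air” is itself a data channel.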
What makes the overlooked smartwatch so effective in this scenario? Pretty much everything about it, says Dr. Guri: “Smartwatches possess several technological features that enable them to receive ultrasonic signals effectively.” These include high-sensitivity microphones, advanced signal processing software, and powerful chips. (Dr. Guri’s personal site is appropriately named covertchannels.com and offers a deep dive into his extensive research history.) A smartwatch attack is a low-probability event for most people, at least for the moment. But the takeaway is that airgapping is at best one layer of protection, not a guarantee of perfect security.

Megan K. Slack of The New York Times writes about her surprise that her tour guide in China told her not to worry about leaving her luggage unattended. “There’s no crime,” he said. Slack writes that this seems to be largely true because there is no privacy in China, either. China’s immense system of cameras integrated with facial recognition and artificial intelligence, along with “everything from banking to municipal services to social media to shopping,” is linked through the Chinese platform WeChat. Does the example of China serve as a warning for the ongoing consolidation of Americans’ data from disparate federal databases by Palantir Technologies? Slack quotes Maya Wang, associate China director at Human Rights Watch: “The really powerful thing is when personal data gets integrated. Not only am I me, but I like these things, and I am related to so-and-so, and my friends are like this, and I like to go to these events regularly on Wednesdays at 6:30. It’s knowing relationships, movements, and also any irregularities.” Slack writes: “Ms.
Wang mentioned Police Cloud, an ambitious Chinese public safety project that uses all manner of collected data to find hidden relationships between events and people; to spy on those considered dangerous (petitioners, dissidents, Uyghurs, people with ‘extreme thoughts,’ according to a document reviewed by Human Rights Watch); and to combine real-time monitoring with predictions for what may be about to happen. Predictive software has been adopted by local authorities around China: A Tianjin data project designed to head off protests analyzes who is most likely to file complaints; software in the city of Nanning can warn authorities if ‘more than three key people’ checked into a hotel. “It’s not that our government is using the surveillance infrastructure in the same manner as China. It’s that, as far as the technology goes, it could.”

The news broke last week that Meta will soon post ads on a dedicated segment of WhatsApp. This is a big change for a popular messaging app that has long shunned advertising. Ads will not appear on WhatsApp’s chat feature with friends, instead appearing in a special “Updates” section. But in order for ads to be effective, Meta will still need to collect users’ location and language data to target ads to individual users’ accounts. Meta insists no information will be gleaned from messages or calls. “The fact that Meta has promised that it’s adding ads to WhatsApp with privacy in mind does not make me trust this new feature,” Lena Cohen of the Electronic Frontier Foundation told Fast Company. “Ads that are targeted based on your personal data are a privacy nightmare, no matter what app they’re on.” This story comes on the heels of another recent big story about Meta, one that should inform any evaluation of the company’s promises about WhatsApp. Meta has been making aggressive use of users’ data on its other two main platforms.
Here’s what we know about that: 1) The Washington Post reports that Meta, desperate to build a “digital” version of real customers for advertising purposes, secretly positioned Facebook and Instagram to silently track Android users’ browser activity, then forwarded that information to its servers. If you think about all the private searches you might have performed on your smartphone browser, that is a sobering realization. 2) Meta’s apparent tactics touch on multiple areas of ethical and legal concern:
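As security researchers described the reported mechanism, the logged-in native app kept a socket open on a localhost port, and tracking code running in the browser sent identifiers to that port – letting the app tie supposedly anonymous web browsing to the real account. A toy sketch of that general technique (the port assignment, account name, and cookie value are all invented for illustration):

```python
import socket
import threading

def app_listener(results, ready, port_box):
    """Simulates a logged-in native app listening on a localhost port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # real apps reportedly used fixed, well-known ports
    port_box.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    web_cookie = conn.recv(1024).decode()
    # The app already knows who is logged in; the browser message supplies
    # the "anonymous" web identity, and the join happens right here.
    results.append({"account": "alice@example", "web_cookie": web_cookie})
    conn.close()
    srv.close()

def browser_script(port):
    """Simulates in-page tracking code reaching the local socket."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(b"_fbp=fb.1.1234567890.987654321")  # made-up pixel cookie

results, ready, port_box = [], threading.Event(), []
t = threading.Thread(target=app_listener, args=(results, ready, port_box))
t.start()
ready.wait()            # wait until the "app" is listening
browser_script(port_box[0])
t.join()
print(results[0])
# → {'account': 'alice@example', 'web_cookie': '_fbp=fb.1.1234567890.987654321'}
```

Because the traffic never leaves the device, it sidesteps the usual network-level privacy controls – which is what made the reported behavior so hard to spot.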
For its part, Meta called the whole affair a “potential miscommunication,” but agreed to pause the “feature.” Meta wasn’t the only offender. A Russian tech company called Yandex has apparently been doing the same since 2017, but flatly denies any wrongdoing. Anyone with Yandex apps on their phones (Android or otherwise) should immediately click “Uninstall.” And in terms of using a relatively more secure Android browser, consider Brave. Some reporting suggests that browser successfully protected its users from Meta and Yandex’s incursions. We understand that consumers give away a bit of privacy in exchange for a free service that selects ads for them on an anonymized basis. As Meta expands its ad presence to WhatsApp, however, Congress and the public need a better understanding of what the company has already done with apps on Facebook and Instagram. PPSA will watch developments in this story closely.

Imagine a scenario in which nine of your friends have been saving copies of their digital interactions with you – text messages, emails, etc. Collectively, they created a company that stores that data and then sells it commercially – to government agencies, consumer research outfits, advertisers, and various businesses. Now assume those “friends” are named American Airlines, Delta, Southwest, United, Alaska Airlines, JetBlue, Lufthansa, Air France, and Air Canada, and suddenly you’re no longer in the realm of imagination. Wired, 404 Media, and The Lever report that these nine members of the air carrier industry co-own a data broker firm and have been selling customer information – passenger names, flight itineraries, and financial details – to various federal agencies. It’s a lot of data – representing about 12 billion passenger flights a year, mostly those purchased through third-party sites like Booking and Expedia. The name of the co-owned data broker is ARC, which simply stands for Airlines Reporting Corporation.
ARC has been around since 1985 and was originally conceived as a clearinghouse to settle transactions between airlines and travel agencies. But like so many legacy institutions from the ’70s and ’80s, it has long since morphed into a full-fledged data broker in the digital era (and post-9/11 in particular) – and it prefers to conduct business far from the light of day. ARC’s sales contracts with federal customers forbid revealing the source of their data. It’s almost as if they don’t want to get caught doing something technically legal but offensive to their customers. Meanwhile, the clandestine nature of these transactions seems just fine with ARC’s federal data purchasers, which include Defense and Treasury, in addition to ICE and Customs and Border Protection. The Center for Democracy & Technology summed it up this way: “As with many other types of sensitive and revealing data, the government seems intent on using data brokers to buy their way around important guardrails and limits.” In the words of Sen. Ron Wyden (D-OR), the whole arrangement is “shady.” It is understandable that federal and state law enforcement agencies need to gather data from a variety of sources about fliers in regard to specific criminal investigations. The Fourth Amendment Is Not For Sale Act, which passed the U.S. House last year, would make it clear that to track fliers, the government must obtain a warrant based on probable cause before sorting through our personal data. And what could be more personal than when and where we go? In the meantime, if you’re worried about what ARC is doing with your data, its long, legalistic privacy policy suggests submitting a “Subject Access Request” at [email protected] to demand its erasure/deletion. (But you can’t escape the data trawl entirely – you must include the last four digits of any credit cards you’ve used to purchase air travel.) If you do, we hope you have better luck than the reporters who broke the story.
When contacted directly, eight of the airlines failed to reply and one said, in effect, “no comment.”

Hearing Evokes Unprompted, Strong Endorsement of a Warrant Requirement for Section 702

The CLOUD Act of 2018 is a framework for working with U.S. tech companies to share digital data with other governments. This law and basis for international agreements was a reasonable concession to allow these companies to do business around the world. But the agreement has gone off the rails because of the United Kingdom’s astonishing attempt to force Apple to break end-to-end encryption so it can access the data of all Apple users stored in the cloud. Rather than violate the privacy of its users, Apple has stood by its customers and withdrawn encrypted iCloud storage from the UK altogether. The House Judiciary’s Subcommittee on Crime and Federal Government Surveillance was already skeptical about that agreement, but appalled when the British government used it to secretly order Apple to provide that unfettered, backdoor access to all the cloud content uploaded by every Apple user on the planet. It was an unprecedented request, and an unexpected one from a fellow democracy.
In April, members of the House Judiciary Committee asked Attorney General Pam Bondi to terminate the U.K. agreement. As extreme as that sounds, PPSA supports that proposal as the best way to persuade Britain to back off an unreasonable position. In the worst-case scenario, no agreement would be better than comprehensive violation of Americans’ privacy. Undeterred, the subcommittee convened a recent hearing entitled “Foreign Influence On Americans’ Data Through The CLOUD Act.” Greg Nojeim from the Center for Democracy & Technology was an invited witness. If one had to name a single theme of his powerful testimony, it would come down to one word: “dangerous.” Subcommittee Chairman Andy Biggs used the same word, declaring that the secretive British demand of Apple “sets a dangerous precedent and if not stopped now could lead to future orders by other countries.” Ranking Judiciary Committee Member Jamie Raskin struck a similar chord: “Forcing companies to circumvent their own encrypted services in the name of security is the beginning of a dangerous, slippery slope.” In short, the hearing demonstrated that the CLOUD Act has been abused by a foreign government that does not respect privacy and civil liberties or anything remotely like the Fourth Amendment to our Constitution. It needs serious new guardrails, beginning with new rules to address its failure to protect encryption. Expert witness Susan Landau of Tufts University warned the subcommittee that the U.K. appeared to be undermining encryption as a concept. A U.S.-led coalition of international intelligence agencies, she observed, recently called for maximizing the use of encryption to the point of making it a foundational feature of cybersecurity. Yet Britain conspicuously demurred.
That debate will likely become intense between now and next spring, when Congress takes up the reauthorization of Section 702 of FISA, the Foreign Intelligence Surveillance Act. Judiciary Chairman Jim Jordan indicated as much when he used his opening remarks to tout the “good work” the Committee has ahead of it in preparing to evaluate and reform Section 702. Later in the hearing, Chairman Jordan returned to the looming importance of the Section 702 debate, asking each of the witnesses in turn a version of the question, “Should the United States government have to get a warrant before they search the 702 database on an American?” All agreed without hesitation. “Wow!” declared Rep. Jordan in response. “This is amazing! We all think we should follow the Constitution and require a warrant if you're going to go search Americans’ data.” Rep. Raskin nodded along. And that’s as bipartisan as it gets. New FBI Warning Highlights Latest Ways Cyber Thieves Steal Your Identity and Money – and How You Can Stop Them The FBI is issuing a new warning that cybercriminals are now focusing on impersonating employee self-service websites – such as payroll services, unemployment programs, and health savings accounts – with the goal of stealing your money through fraudulent wire payments or redirected payments. You might see what looks like your service’s website in an ad, an email, or a link, without noticing the slight difference in the URL that marks it as a digital clone. Such a scam site will ask you for your credentials to gain access. A self-described representative from a bank or some other service may call you to “confirm” your one-time passcode. Don’t fall for it. The FBI recommends that you take the following precautions:
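One precaution worth automating is scrutiny of the link itself: a clone site’s address is almost, but not quite, the real one. Below is a minimal sketch in Python of a lookalike-domain check; the trusted domain names and similarity threshold are hypothetical illustrations, not an FBI tool.

```python
from urllib.parse import urlparse
import difflib

# Hypothetical list of sites you actually use (e.g., your payroll provider).
TRUSTED_DOMAINS = {"payroll.example.com", "benefits.example.gov"}

def is_suspicious(url: str, threshold: float = 0.75) -> bool:
    """Flag a URL whose domain closely resembles, but does not match, a trusted domain."""
    domain = urlparse(url).netloc.lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the real site
    for trusted in TRUSTED_DOMAINS:
        # Similarity ratio in [0, 1]; near-matches are the hallmark of clone sites.
        if difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True
    return False
```

Here `difflib.SequenceMatcher` flags a domain like “payro1l.example.com” (a digit 1 in place of the letter l) as a near-match to the real payroll domain, while unrelated domains pass untouched.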
Skip Sanzeri, a strategic advisor at iValt, surveys in Forbes all the reasons that you are probably insufficiently paranoid about being cleaned out by a cyber thief. “Thanks to ever-increasing online access and connectivity, AI, and quantum computing, it is increasingly difficult for legitimate businesses and sites to know the true identity of users accessing their systems. Think in terms of deepfakes, where video and audio can be created to mimic the real user. And since our daily activities, thoughts and preferences are tracked and stored, data is available everywhere on all of us. Any person or system from anywhere in the world can access nearly any information on government or corporate systems due to our pervasive use of the Internet, leading to predictions from groups like Forrester that cybercrime could cost up to $12 trillion this year alone.” Sanzeri concludes that the current system, which relies on passwords, logins, two-factor authentication and even tokens, is not enough. He suggests a deeper reliance on biometrics, machine ID (mobile phones and other devices for authentication), geofencing your location, and “time-bounding,” in which you limit your access to, say, a payroll or brokerage account to a specific time, every time. All of these practices add one more data point for cybercriminals to have to know in order to be a convincing impersonator. Of course, biometrics and geofencing come at a cost to your privacy. And with advances in computing, it won’t be long before cybercriminals learn to use those as well. The dispiriting reality is that there is no way to seal off all possibility of fraud. This is a never-ending footrace between consumers and cybercriminals. But if you take every precaution, the odds are you will not be the next mark. Israel’s spycraft is first-rate. 
From the “pager” attacks that decapitated Hezbollah, to the surgical strikes over the last few days that have eliminated Iran’s top generals and scientists, it is clear that Israel’s strategic success owes much to world-leading intelligence capabilities in the digital realm. “In Israel, a land lacking in natural resources, we learned to appreciate our greatest national advantage – our minds,” said the late Israeli Prime Minister Shimon Peres. Under constant threat, Israel has applied its great minds to information technology in the service of national defense. What works well in the national security space for Israel, however, is a problem for the rest of the world when cutting-edge surveillance technologies are exported. PPSA has extensively covered the Israeli-based NSO Group, which released malware called Pegasus into the international market. Pegasus is a “zero-click” attack that can infiltrate a smartphone, extract all its texts, emails, images and web searches, break the encryption of messaging apps like WhatsApp and Signal, and transform that phone’s camera and microphone into a 24/7 surveillance device. It is ingenious, really. Zero-click means the victim doesn’t have to accidentally fall for a phishing scam. The malware is just installed into a phone remotely. Victims can then be counted on to do what we all do – compulsively carry their smartphones with them wherever they go, allowing total surveillance of all they and their friends say and do.
Another Israeli technology company, Paragon, differentiates itself from the NSO Group with a more careful approach. Its U.S. subsidiary promises that it is about “Empowering Ethical Cyber Defense.”
Much of the world media reports that an indignant Italian government severed ties with Paragon. But Israeli media reports that after the Italian government rejected an offer by the company to investigate one of these cases, it was Paragon that unilaterally terminated its contract with the Italian government. The takeaway from all this is that even with a responsible vendor who sets guardrails and ethical policies, a zero-click hack is too tempting a capability for intelligence services, even those in democracies. Whether Pegasus or Graphite, a zero-click, total surveillance capability is like a dandelion in the wind. It will want to go everywhere – and eventually, it will. The Ninth Circuit ruled that American tech companies share a degree of liability if their tools facilitate human rights abuses in other countries. The court’s 2023 decision meant that thirteen members of the Falun Gong spiritual practice group could continue to press their years-long case against Cisco Systems for its role in supporting China’s “Golden Shield.” Golden Shield is the Chinese Communist Party’s domestic internet surveillance system. Members of the Falun Gong creed claim that the Chinese government used the Cisco-powered system to aggressively persecute them in a long-running and coordinated campaign. Because a significant portion of Cisco’s work on Golden Shield was done in the United States, ruled the Ninth Circuit, the plaintiffs had sufficient standing to sue here. Importantly, the court noted that, “Cisco in California acted with knowledge of the likelihood of the alleged violations of international law and with the purpose of facilitating them.” The company’s role was essential, direct, and substantial to the point of being liable for “aiding and abetting.” As the Electronic Frontier Foundation points out, this ruling wouldn’t apply to American companies that merely market a tool that anyone could buy and then potentially misuse. What happened in this case was different. 
Cisco is alleged to have designed, built, maintained – and even upgraded – a “customized surveillance product that the company knew would have a substantial effect on the ability of the Chinese government to engage in violations of human rights.” In so many words, said the Court in assessing Cisco’s role, the Chinese couldn’t have done it without them. To wit, Cisco empowered the following aspects of the Golden Shield surveillance system:
Cisco is accused of doing this while simultaneously helping the Chinese build a nationwide video surveillance system. The result was a state-of-the-art integrated system capable of creating “lifetime” information profiles on Falun Gong members, so full-featured that it could even be updated with data from members’ latest “interrogation” and “treatment” sessions at the hands of Chinese security personnel. Cisco is alleged to have done it all in an environment in which it was common knowledge that torture, and other violations of international law, were likely to take place. This is not conjecture: the record includes news coverage, shareholder resolutions, State Department communiqués, and more. Cisco rejects the Ninth Circuit’s decision and recently asked the U.S. Supreme Court to grant cert and rule in its favor. As of now, the High Court has yet to decide whether it will do so, but on May 27 it asked the Solicitor General to weigh in with the government’s opinion. This case has always been about testing whether foreign victims can sue U.S. companies for deliberately helping foreign governments commit human rights abuses – an inevitable outcome of advanced surveillance systems in particular. Let’s hope the Supreme Court will deny Cisco’s request. If it does, that will only mean that the case will move forward in California, and Cisco and its accusers will still get a full and proper hearing. This is too important a question with too many far-reaching implications to skip a step. Last year brought surveillance reform achingly close to passage. The Fourth Amendment Is Not for Sale Act – which would have forced the government to obtain a warrant before purchasing Americans’ personal data from data brokers – passed the U.S. House but died in the U.S. Senate. A warrant requirement for the review of Americans’ personal data fell short in the House in a tie vote. 
Now, we know that these were uphill votes not just because of the intense opposition of federal intelligence agencies, but because the Biden White House had overseen an intense lobbying effort to give the illusion of grassroots opposition from state law enforcement. To create this illusion, the administration reached out to local and federal law enforcement alike with pre-approved talking points from a Washington lobbying firm, letters to sign, and a list of lawmakers to target. The efforts involved the misuse of High Intensity Drug Trafficking Areas (HIDTAs). These are hybrid federal-state entities intended to provide coordination and ensure the efficient use of federal funds in fighting organized drug crime. The federal side of this partnership is directly overseen by the White House Office of National Drug Control Policy. A response to a PPSA Freedom of Information Act (FOIA) request reveals that during the prior 118th Congress, these organizations were repurposed for lobbying Congress. Emails from the Chicago HIDTA piggybacked off efforts from a Capitol Hill lobbying firm and orchestrated all the elements of what would appear to a Member of Congress to be a spontaneous grassroots movement by state law enforcement groups and associations in opposition to popular surveillance reform amendments. This network of federal agencies working behind the scenes to coordinate this messaging, under the purview of the White House, distorted the debate and abused Congressional trust in sincere-sounding letters to Congressional leaders like Rep. Jim Jordan, Chairman of the House Judiciary Committee, and Rep. Jerry Nadler, Ranking Member. Given that HIDTAs are distribution points for significant amounts of much-needed federal funding, it’s questionable how voluntary the sign-on from state law enforcement groups really was. Perhaps Chairman Jordan and Ranking Member Nadler might want to look into how much federal money might have been spent limiting their oversight. 
At the very least, the current administration should cut off federal funds for lobbying before the surveillance reform debate begins again next year. During last year’s congressional debate over surveillance, many defenders of the status quo, including then-FBI Director Christopher Wray, argued that a warrant requirement for the inspection of Americans’ personal information would be a security risk because it would be too time-consuming and burdensome. But a recent response to one of our Freedom of Information Act (FOIA) requests filed with the Criminal Division of the Department of Justice shows that filling out warrant applications is routine and close to boilerplate. In recent years, many of our FOIA requests have gone ignored. In one instance, we received a rude response from the Department of Justice in which 39 pages were redacted, and the 40th page only said: “Hope that’s helpful.” Perhaps there has been a recent change of heart at DOJ. When we sought documents about cell-site simulators (which mimic cell towers and trick cellphones into revealing personal information), we received a polite and partial response. Included in the release was a draft affidavit to guide special agents of the FBI in applying to a U.S. district court to obtain a search warrant to identify a particular cellular device. In it, an agent is prompted to:
The agent then submits this document as sworn testimony. PPSA hopes this response to our FOIA is a sign of a renewed commitment to meet our lawful requests for documents. And we urge surveillance hawks to consider that the routine filing of such applications demonstrates that it is far from excessively burdensome. There, that wasn’t so hard now, was it? Sen. Rand Paul (R-KY) celebrated the termination of the “Quiet Skies” surveillance program in which U.S. Marshals posed as airline passengers to shadow targets. This $200 million a year program did not, according to the Department of Homeland Security, stop a single terrorist attack. But, in the words of Sen. Paul in The American Conservative, it “was an unconstitutional dystopian nightmare.” Sen. Paul writes: “According to Department of Homeland Security documents I obtained, former Congresswoman and now Director of National Intelligence Tulsi Gabbard was surveilled under the program while flying domestically in 2024. Federal Air Marshals were assigned to monitor Gabbard and report back on their observations including her appearance, whether she used electronics, and whether she seemed ‘abnormally aware’ of her surroundings. She wasn’t suspected of terrorism. She wasn’t flagged by law enforcement. Her only crime was being a vocal critic of the administration. What an insanely invasive program – the gall of Big Brother actually spying on a former congresswoman. It’s an outrageous abuse of power … “And perhaps the most absurd of all, the wife of a Federal Air Marshal was labeled a ‘domestic terrorist’ after attending a political rally. She had a documented disability and no criminal record. Still, she was placed under Special Mission Coverage and tracked on commercial flights – even when accompanied by her husband, who is himself a trained federal law enforcement officer. She remained on the watchlist for more than three years. 
To make matters worse, this case resulted in the diversion of an Air Marshal from a high-risk international mission ... “Liberty and security are not mutually exclusive. When government hides behind secrecy to justify surveillance of its own people, it has gone too far." In the intelligence business, “tradecraft” is the professional use of techniques, methods, and technologies to evaluate a purported threat. When an official finding is made that a threat assessment memo lacks tradecraft standards, that is a hard knock on the substance of the memo and the agent who wrote it. Thanks to the efforts of Sen. Chuck Grassley (R-IA) and the forthcoming response from FBI Director Kash Patel, we now know that the infamous memo from the Richmond, Virginia, field office targeting “radical traditional Catholics” was riddled with conceptual errors and sloppy assumptions. In the FBI’s own judgment, it showed poor tradecraft. Worse, this assessment of traditional Catholics was rooted in smears from the Southern Poverty Law Center (SPLC), which Sen. Grassley correctly calls “thoroughly discredited and biased.” Contrary to dismissive statements from former FBI Director Christopher Wray, this memo wasn’t the product of one field office. In its preparation, the Richmond, Virginia, field office consulted with Bureau offices in Louisville, Portland, and Milwaukee to paint Catholics who adhere to “conservative family values/roles” as being as dangerous as Islamist jihadists. Sen. Grassley’s document release also shows that there were similar efforts in recent years in Los Angeles and Indianapolis. This memo was not a mere thought experiment. It was a predicate for surveillance. Among the activities we know about that resulted from this memo were attempts to develop a priest and a choir director into FBI informants on parishioners. Sen. 
Grassley also produced a memo from Tonya Ugoretz, FBI Assistant Director, Directorate of Intelligence, acknowledging that the Southern Poverty Law Center’s (SPLC) list of hate groups – and its lack of explanation for the threshold it uses in slapping such a label on organizations and people – went unexamined by the Richmond memo. Yet that original memo from the Richmond field office treated the SPLC as a trustworthy source in asserting that there would be a “likely increase” in threats from “radical traditional Catholics” in combination with “racially and ethnically-motivated violent extremism.” Another memo produced by Sen. Grassley reveals the conclusion of the FBI’s Directorate of Intelligence: “The SPLC has a history of having to issue apologies and retract groups and individuals they have identified as being extremist or hate groups.” Now Sen. Grassley and Sen. James Lankford (R-OK) are appealing to the FBI to direct field offices not to rely on the characterizations of the SPLC. This whole episode should serve as a reminder that merely opening an investigation of a religious group for its First Amendment-protected speech is a punishment in itself, at best violating practitioners’ privacy; at worst, incurring huge legal costs and anxiety. Sen. Grassley deserves the gratitude of the surveillance-reform community for bringing to light the extent to which the FBI allowed America’s culture wars to become a predicate for suspicion of law-abiding Americans. HBO’s hit series Westworld wasn’t actually about replicating the old West, but a cautionary tale about the new frontier of artificial intelligence. It didn’t end well. For the humans, that is. The third season’s big reveal was a sinister-looking AI sphere the size of a building, called Rehoboam. It was shaped like a globe for a very good reason – it determined the destinies of every person in the world. 
It predicted and manipulated human behavior and life paths by analyzing massive amounts of personal data – effectively controlling society by assigning roles, careers, and even relationships to people, all in the name of preserving order. The American government – yes, you read that correctly – America, not China, is plotting to build its own version of Rehoboam. Its brain trust will be Palantir, the AI power player recently called out in the Daily Beast with the headline, “The Most Terrifying Company in America Is Probably One You’ve Never Heard Of.” In March of this year, President Trump issued Executive Order 14243: “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos.” The outcome will be a single database containing complete electronic profiles of every soul in the United States. And all of it is likely to be powered by Palantir’s impenetrable, proprietary AI algorithms. Reason got to the heart of what’s at stake: an AI database on such a massive scale is only nominally about current issues such as tracking illegal immigrants. It’s really about the government’s ability to profile anyone, anytime, for any purpose. With a billion dollars in federal contracts across multiple agencies, Palantir is currently in talks with Social Security and the IRS. Add that to existing agreements with the Departments of Defense, Health and Human Services, Homeland Security, and others. Add to that the Biden administration’s previous contract with Palantir to assist the CDC with vaccine distribution during the pandemic. While the primary arguments in favor of such an Orwellian construct are commendable-sounding goals like a one-stop shop for efficiency, PPSA and our pro-privacy allies find such thinking – at best – appallingly naïve. And at worst? 
There’s an applicable aphorism here: “This is a bad idea because it’s obviously a bad idea.” Let’s not kid ourselves – this is the desire for control laid bare, and its results will not be efficiency, but surveillance and manipulation. It makes sense for Treasury to know your tax status or State to know your citizenship status. But a governmentwide database, accessible without a warrant by innumerable government agents, is potentially the death knell for privacy and the antithesis of freedom. Think of all the government already knows about you, your family, and friends across multiple federal databases. All this data is about to be mobilized into one single, easily searchable database, containing everything from disability status and Social Security payments to personal bank account numbers and student debt records to health history and tax filings – plus innumerable other deeply personal datapoints ad infinitum. Simply put, this database will put together enough information to assemble personal dossiers on every American. It is bad enough to think any U.S. government employee in any agency will have access to all of your data in one central platform. But at least those individuals would theoretically be authorized for such access. Not so the Russian and Chinese cyberhackers who’ve already demonstrated the ability to lift U.S. databases in toto. If that ever happens with this database, it will truly become a matter of one-stop shopping. How Police “Emergency” Entries into Homes Will Lead to “Emergency” Entry into Phones The U.S. Supreme Court this week granted a petition for review in what will be the first case that the Court has agreed to hear addressing the scope of the Fourth Amendment’s warrant requirement since 2021. The case seeks clarity on whether the so-called “emergency-aid” exception to the Fourth Amendment requires police to have probable cause that an emergency is ongoing. 
After police officers learned that William Case of Montana had threatened suicide, they entered his home without a warrant and seized evidence later used to convict him of a felony. Because the officers “were going in to assist him,” they felt unrestrained by the Fourth Amendment’s warrant requirement, even though they did not actually believe that he was in any immediate danger – the suicide he threatened was to come “at the hands of the police” themselves. This Court had not reaffirmed the sanctity of the home since Caniglia v. Strom (2021), which found that allowing warrantless entry into the home for community caretaking – duties beyond law enforcement or keeping the peace – would be completely at odds with the privacy expectations and demands of the Framers. PPSA, which filed the only amicus brief in William Case v. State of Montana, informed the Court that such warrantless intrusion, if now upheld, would inevitably lead to warrantless inspection of the very personal information on Americans’ smartphones and other digital devices. In our brief, PPSA warned the Court of the “diluting effect such a low bar for emergency aid searches” would cause in other contexts – especially regarding digital devices. PPSA told the Court: “Such devices hold vast amounts of personal information that, historically, would only have been found in the home. Lowering the burden of proof required to justify the warrantless search of the place the Constitution protects most robustly would lead law enforcement and the courts to dilute protections for other, less historically safeguarded areas, such as electronic devices, which would be devastating to the privacy of Americans … “If the government may enter the home without a warrant based only on a reasonable belief, far short of probable cause, that an emergency exists, the government may treat electronic sources of information the same way, posing an even greater threat to privacy and the ultimate integrity of the Fourth Amendment. 
The insidious branding almost writes itself: ‘Big Brother’ may be ‘watching you,’ but it’s for your own good!” PPSA’s brief also made clear the long history of elevated protection of the home in both American law and English common law. By the 17th century it was established law that the agents of the Crown were permitted to intrude on the home only in a narrow set of extreme circumstances, and only when supported by strong evidence of an emergency corresponding to at least probable cause. PPSA wrote that if the new emergency standard is allowed, “Seemingly benevolent searches would then become an engine for criminal prosecutions even though no warrant was ever obtained, and no probable cause ever existed. The emergency-aid exception would thus become a license for the government to discover criminal activity that – in all other circumstances – would only have been discoverable through a warrant supported by probable cause.” In Caniglia, the Court unanimously restricted the community caretaking exception to the Fourth Amendment. PPSA will report back when the Court holds oral arguments on the emergency-aid exception in Case v. Montana. India has a pro tip for would-be users of surveillance cameras, especially ones installed in your own government’s buildings: Don’t buy from China. Recognizing since at least 2021 that they might have a teensy-weensy security problem with the one million Chinese-made cameras installed in government institutions, India has finally decided that maybe they should, well, do something. In April, according to Reuters, Indian officials met with 17 surveillance gear makers and asked them if they were ready to play by the country’s new rules, which require closed-circuit television (CCTV) vendors to “submit hardware, software and source code for assessment in government labs.” And to absolutely no one’s surprise, they answered (more or less), “Um, no. 
We don’t like your rules, so, we’re not ready.” All of which is to say, the surveillance gear makers pitched a wall-eyed fit, predictably portending industry losses, marketplace tremors, timeline impacts, and disruption of various unspecified projects. Of all the CCTV players, China has the most to lose, given their million installed cameras and that 80 percent of all camera components in India are Chinese-made. For its part, China sees India’s new rules as a smear campaign. But it’s hard to be sympathetic when U.S. officials discovered:
The U.S. government has wisely banned certain brands of Chinese telecom equipment because they posed an unacceptable risk to U.S. national security. But India reminds us that we need to do more. We don’t think India’s stance is old-fashioned protectionism, as some of the new policy’s detractors would like to suggest. Given China’s track record, we consider it a prudent form of self-preservation and risk mitigation. In February, a Department of Homeland Security (DHS) bulletin connected the dots in no uncertain terms: Chinese cameras double as spy tools for the Chinese Communist Party and could even be used to disrupt critical U.S. infrastructure. The DHS bulletin’s advice is as clear as its warning: “Broader dissemination of tools designed to help recognize PRC cameras, particularly white-labeled cameras, could tighten enforcement of the 2022 Federal Communication Commission (FCC) ban on the import of these cameras and help mitigate the threat of PRC cyber actors exploiting them for malicious purposes.” Tens of thousands of such cameras are currently used across U.S. sectors that include critical ones like the energy and chemical industries. Yet the DHS bulletin notes that because of loopholes like the aforementioned “white-labeling” (where imported cameras ship under other companies’ brands), the ongoing proliferation of this Chinese spy tech continues. It’s time to end practices like white-labeling banned Chinese cameras. And while we’re at it, let’s open up the cases on samples of CCTV cameras sold here and have a look inside. And if doing so “voids the warranty,” we should just take our chances. As we’ve written many times before, the commercially available information (CAI) of American citizens should not be for sale. It’s one of the few things Republicans and Democrats agree on. 
Unfortunately, the Office of the Director of National Intelligence (ODNI) not only wants to ensure that our personal data remains for sale, but also to see to it that the government’s intelligence community gets “the best data at the best price.” Quite the reversal from the stark warning about the purchase of CAI presented to the ODNI just two years ago. And so it was that on a quiet Tuesday in April the DNI fast-tracked a request for proposals for what it calls the Intelligence Community Data Consortium, or ICDC – a centralized clearinghouse where the legions of unruly CAI data vendors would be forced to get their act together, making it even easier for the government to violate our Fourth Amendment rights. We suppose calling this initiative the “Ministry of Truth” would have seemed too baroque and “One Database to Watch Them All” too obvious. So ICDC it is. The RFP is looking for a vendor to help the DNI and the IC eliminate “problems” with our private information like:
Decentralized, fragmented, duplicative, siloed, overpriced, limited – literally everything you might hope your personal data would actually be. But no, the ODNI insists on being able to “access and interact with this commercial data in one place.” The intelligence community apparently complained and, lo, the ODNI heard its cries. And the voice of American citizens in all of this – the rightful owners of all that data? The main RFP mentions civil liberties and Americans’ privacy exactly one time, and then only in passing. Make no mistake: This change is a quantum leap in the wrong direction. The Intercept quotes the Brennan Center’s Emile Ayoub and EPIC’s Calli Schroeder to make the point that the DNI doesn’t even have a specific use in mind for this data – it just wants it, and it doesn’t want to answer to privacy statutes or constitutional protections. This dance has gone on for years, prolonged and encouraged by a lax regulatory environment and a commercial sector whose lack of scruples would make Jabba the Hutt repent and join the Jedi priesthood. Given that it now seems here to stay, we’ve decided it’s time to give the dance a name – the Constitutional Sidestep. What else should you know about the ICDC and the Constitutional Sidestep? According to The Intercept and other sources, plenty:
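To see why “fragmented” and “siloed” data is a civil-liberties feature rather than a bug, consider how easily two datasets that are individually anonymous can be joined on shared quasi-identifiers. A minimal sketch in Python; all records and field names are hypothetical illustrations.

```python
# Dataset A: a broker's "anonymous" records (no names attached).
locations = [
    {"zip": "20500", "birth_year": 1980, "sex": "F", "frequent_site": "clinic_x"},
    {"zip": "10001", "birth_year": 1975, "sex": "M", "frequent_site": "office_y"},
]

# Dataset B: a public roster with names and the same quasi-identifiers.
roster = [
    {"name": "Jane Doe", "zip": "20500", "birth_year": 1980, "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth_year": 1975, "sex": "M"},
]

KEYS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, named_rows, keys=KEYS):
    """Join the two datasets on quasi-identifiers, attaching names to 'anonymous' rows."""
    index = {tuple(r[k] for k in keys): r["name"] for r in named_rows}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anon_rows
        if tuple(row[k] for k in keys) in index
    ]

matches = reidentify(locations, roster)
```

Neither dataset names anyone on its own; joined on zip code, birth year, and sex, every “anonymous” row gets a name. Consolidation in one place is precisely what makes this trivial.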
Speaking of AI, get ready for one more dance – the Reidentification Rumba. When AI gets hold of these previously fragmented pieces of data, it becomes easy to re-identify personal information that was supposedly anonymized: location histories, identities, associations, ideologies, habits, medical history – shall we go on? And it must be noted that AI safeguards have been rescinded.

The remedy for all this is, of course, the Fourth Amendment Is Not for Sale Act, which would require a warrant before Americans’ personal information can be acquired and accessed. That bill passed the House in 2024. This news ought to provide fresh momentum for the measure to become law.

When you seal and mail a letter, the fact that you’re sending something via letter is not private – the addresses, the stamp, etc. Those are all visible and meant to be seen. You mailed a sealed letter. Everybody knows it. You can’t walk into FedEx or the Post Office screaming at strangers, “Don’t you dare look at me while I’m mailing this letter!” Ah, but the contents of your sealed letter? Now that’s private. No one is entitled to know what’s inside except you (and anyone you give permission to, like the recipient).

And so it is with electronic storage services like Dropbox. The fact that you have a Dropbox account is not private, but what you store there is. And that’s a big deal, because believe it or not, it hasn’t been entirely clear whether electronic communications (including files stored in the cloud) are protected by the Fourth Amendment from unlawful search and seizure by the government.

But now we know. The Fifth Circuit Court of Appeals wrote in an opinion issued just last week: “The Fourth Amendment protects the content of stored electronic communications.” If you didn’t intend for something to be public and made a reasonable effort to keep it private (such as password-protecting it in the cloud), you’re entitled to privacy.
The government doesn’t have the right to access it without a warrant and probable cause. In the case at hand, Texas officials used a disgruntled ex-employee of a contractor to spy on the contractor by searching its Dropbox files. To quote the Fifth Circuit, “This was not a good-faith act.”

File (pardon the pun) all of this under “reasonable expectation of privacy.” Brought to you by the Fourth Amendment to the United States Constitution. Proudly serving Americans since 1791.
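A footnote on the “Reidentification Rumba” mentioned above: re-identifying “anonymized” records doesn’t even require AI. A simple join on quasi-identifiers (ZIP code, birth date, sex) across two datasets that are each harmless on their own can do it – the classic linkage attack Latanya Sweeney demonstrated decades ago. A minimal sketch, using invented field names and records purely for illustration:

```python
# A minimal sketch of a "linkage attack," joining two hypothetical
# datasets that are each "anonymized" on their own. All names, fields,
# and records here are invented for illustration.

# An "anonymized" medical dataset: names stripped, quasi-identifiers remain.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (think: a voter roll) that includes names.
voters = [
    {"name": "Jane Roe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "33101", "dob": "1972-03-02", "sex": "M"},
]

def reidentify(medical, voters):
    """Join on quasi-identifiers; each unique match pins a 'anonymous'
    medical record to a named person."""
    hits = []
    for m in medical:
        matches = [v for v in voters
                   if (v["zip"], v["dob"], v["sex"]) == (m["zip"], m["dob"], m["sex"])]
        if len(matches) == 1:  # a unique match is a re-identification
            hits.append({"name": matches[0]["name"], "diagnosis": m["diagnosis"]})
    return hits

print(reidentify(medical, voters))
```

Here the toy voter roll re-identifies one of the two “anonymous” medical records by name. Now imagine the join running not over two ten-row lists but over every dataset the ICDC aggregates, with AI filling in the fuzzy matches.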