The Internet of Things (IoT), long promised, is already here. It is arriving incrementally – coffee makers, cars, refrigerators – devices that send voluminous quantities of our personal information to the cloud. As the IoT knits together, consumers need to know how their information is being collected.
Most people are unaware that refrigerators, washers, dryers, and dishwashers now often have audio and video recording components. By 2026, over 84 million households will have smart devices, each one a node within a seamless web of personal information. But how will this storehouse of personal data be regulated?
Looking ahead to the growing hazards of the near future, Sen. Maria Cantwell (D-WA) and Sen. Ted Cruz (R-TX) introduced the Informing Consumers about Smart Devices Act. This legislation would require the Federal Trade Commission to create reasonable disclosure guidelines for products that have audio or video recording capabilities.
“Most consumers expect their refrigerators to keep the milk cold, not record their most personal and private family discussions,” Sen. Cantwell said.
We would make the larger point that Americans shouldn’t have to think about what they say or do in the presence of their appliances. (Although it would be nice to have a smart refrigerator that slaps our hand after 9 p.m.) The greater issue is that all the data that apps, and perhaps now our smart appliances, extract from us can be accessed by government agencies without any need to obey the constitutional requirement to obtain a warrant. All an agency needs to do to obtain our personal information is to purchase it from a private data broker.
That’s all the more reason to pass the Fourth Amendment Is Not For Sale Act.
The Electronic Frontier Foundation, an indispensable pioneer of surveillance accountability, has just released a powerful new version of its Atlas of Surveillance that gives Americans insight into the myriad surveillance technologies that are being used by more than 5,500 law enforcement agencies, across all levels of government, to watch Americans in all 50 states.
EFF is a notable leader in watching the watchers. In September, PPSA examined EFF’s helpful highlighting of marketing slides about the potential for Fog Technology to track people to their homes.
This Atlas of Surveillance, begun with the help of journalism students at the University of Nevada, Reno, recently hit a threshold of 10,000 data points, making it a robust – though not yet complete – survey of which surveillance technologies are being used in which communities.
To give it a try, we looked up results for the District of Columbia.
John Stuart Mill, quoting the Roman satirist Juvenal, asked: Quis custodiet ipsos custodes? The Atlas of Surveillance gives us confidence that we can at least begin to watch the watchers.
University of Nevada, Reno, interns did a professional job of integrating public documents, crowdsourced information, and news articles to compile this atlas. Kudos to EFF and to its UNR student partners. Be sure to check the Atlas to see how you’re being watched in your community.
Republicans on the House Judiciary Committee recently released a 1,000-page report concerning the creeping politicization of the Federal Bureau of Investigation and the Department of Justice. The report describes the “FBI’s Washington hierarchy as ‘rotted at its core’ with a ‘systemic culture of unaccountability.’”
Though it was drafted by House Republicans, Democrats should be worried enough about the scale and scope of abuses to jointly investigate at least some of the report’s allegations.
Internet conspiracy theories notwithstanding, the report demonstrates all the valid reasons to be concerned about the integrity of the FBI. Michael Horowitz, the Inspector General of the U.S. Department of Justice, called out the rampant abuses, noncompliance, and mishandling that go on daily within the Bureau. That such criticism comes from a senior official, a Democrat, now serving in President Biden’s Administration, should demonstrate the bipartisan nature of these concerns.
Under the Foreign Intelligence Surveillance Act (FISA), the FBI is authorized to examine data likely to return foreign intelligence information. Sometimes, U.S. citizens or residents get incidentally caught up in calls, texts, or emails with a targeted foreigner. In these cases, oversight should ensure constitutional rights are protected. One would expect in such a system, then, that “incidental” collections of U.S. person information would be modest.
According to information from the Office of the Director of National Intelligence, however, the FBI conducted an estimated 3,394,053 U.S. person queries in 2021. This is a staggering increase – more than double – over the approximately 1,324,057 U.S. person queries conducted in the previous year.
The Foreign Intelligence Surveillance Court (FISC) disclosed numerous instances in which the FBI queried acquired information for criminal investigations and reviewed content results without first obtaining court permission. Judge James E. Boasberg, then-presiding judge of the FISC, concluded that “the Court is concerned about the apparent widespread violations …”
Most familiar is the FBI’s abuse of its FISA authority to illegally surveil former Trump campaign associate Carter Page. IG Horowitz reported “17 significant ‘errors or omissions’ and 51 wrong or unsupported factual assertions in the applications to surveil Page.” An FBI lawyer went so far as to manufacture evidence presented to a judge to support surveillance against Page. The Justice Department was later forced to admit that the whole basis for this secret surveillance of a presidential campaign aide was flawed. But by then, the damage to civil liberties was done.
The FBI may also be maintaining the technological capacity to unleash “zero-click” spyware programs, including NSO Group’s Pegasus. The U.S. Commerce Department has put Pegasus’ developer, NSO Group, on a list of foreign companies that restricts the ability of U.S. companies to work with it, but that didn’t stop the FBI from obtaining, testing, and retaining it for later use.
In March, members of the Judiciary Committee wrote to FBI Director Wray seeking documents and information relating to the FBI’s acquisition, testing, and uses of NSO Group’s spyware. The FBI has provided none of the requested documentation, while concerns about its intentions with such a dangerous piece of spyware only grow.
As has been reiterated by Republicans, Democrats, and President Biden’s own Inspector General, there is serious cause for concern about the agency’s hierarchy, culture, and use of its authorities.
We all have a stake in these investigations.
Last week PPSA appealed a federal district court decision denying our motion under the Freedom of Information Act (FOIA) to force the FBI to produce records concerning the agency’s “unmasking” of various Members of Congress. Although the legal issue in this case may seem technical and abstruse, the legal question PPSA presents is important to Americans’ ability to hold our government accountable for surveillance directed at all of us.
These are the kinds of overarching, important concerns behind our FOIA requests. But such larger issues are often subsumed along the way in legal wrangling. These cases often center on the government’s efforts to avoid responding to a FOIA request at all.
At first blush, the FOIA process seems straightforward. You might imagine that: PPSA files a FOIA request seeking records concerning surveillance practices, training, or procedures to a given government agency; the request is transmitted to the relevant agency component; and then the agency produces responsive records a few weeks later and we publicize them. After all, that is what FOIA requires.
But things are never so easy with FOIA. Government agencies routinely employ delaying tactics and denials to frustrate and exhaust even the most persistent requesters. In addition to simply ignoring requests, the FBI and other agencies rely on a judicially invented doctrine called the Glomar response to claim that they are not even required to confirm or deny the existence of records about a given subject. Elsewhere, agencies claim that they don’t need to comply with FOIA because it would be too burdensome, as if digital search engines had yet to reach government record-keeping.
Such responses were meant by Congress to be rare exceptions to the rule. In practice, they’ve become the rule.
In the face of such obstructionism from officialdom, PPSA always takes the long view. A FOIA request is just the opening play in a long set. A denial, often on Glomar grounds, is the customary result. Once we receive an official denial to our request (usually long past the statutory deadline), PPSA then files an administrative appeal. Barring a satisfactory result (which is rare), we take the agency to court.
So we were not surprised when a judge on the U.S. District Court for the District of Columbia upheld the government’s argument that it cannot respond to a FOIA request we filed in 2020.
PPSA had asked for documents concerning government identification, or “unmasking,” of 48 sitting and former members of congressional intelligence committees in their communications from 2008 to 2020. Predictably, the government pled “Glomar,” and the judge agreed. So we are appealing.
In another case, PPSA was surprised when a request for FBI records of opinions from the Foreign Intelligence Surveillance Court (FISC) was denied because – the FBI asserted – it cannot locate these court opinions on its revised computer system. As excuses go, this is a dog-ate-my-homework level of sophistication. This is where flabber goes to meet with gasted. If the FBI truly cannot locate FISC opinions directed at the Bureau, we are truly in trouble.
In this instance, PPSA is pursuing an administrative appeal to DOJ’s Office of Information Policy. The appeal is couched in the customary legalese, but the gist of it is: “C’mon guys, this last one doesn’t pass the laugh test.”
Following FOIA requests on their long journeys is a tough, gritty business. But, as they say, it may be a dirty job, but someone has to do it.
In Christopher Nolan’s magnificent movie The Dark Knight, Bruce Wayne presents his chief scientist, Lucius Fox, with a sonar technology that transforms millions of cellphones into microphones and cameras. Fox surveys a bank of screens showing the private actions of people around the city.
The character, played by Morgan Freeman, takes it all in and then declares the surveillance to be “beautiful, unethical, dangerous … This is wrong.”
What was fiction in 2008 became reality a few years later with Pegasus: zero-click spyware that allows hackers to infiltrate cellphones and turn them into comprehensive spying devices, no sonar needed. A victim need not succumb to phishing. Possessing a cellphone is enough for the victim to be tracked and recorded by sound and video, as well as to expose the victim’s location history, texts, emails, images, and other communications.
This spyware created by the Israeli NSO Group might have originally been developed, as most of these surveillance technologies are, to catch terrorists. It has since been used by various dictatorships and cartels to hunt down dissidents, activists, and journalists, sometimes marking them for death – as it did in the cases of Jamal Khashoggi and Mexican journalist Cecilio Pineda Birto.
PPSA reported earlier this year that the FBI had purchased a license for Pegasus but has been keeping it locked away in a secure office in New Jersey. FBI Director Christopher Wray has assured Congress that the FBI was keeping the technology for research purposes. Now, Mark Mazzetti and Ronen Bergman of The New York Times have updated their deep dive into FBI documents and court records about Pegasus produced by a Freedom of Information Act request.
PPSA waded through these now-declassified documents, half of each page blanked out by censors. What we could see was alarming.
One document, dated Dec. 4, 2018, pledged that the U.S. government would not sell, deliver, or transfer Pegasus without written approval from the Israeli government. The letter certified that “the sole purpose of end use is for the collection of data from mobile devices for the prevention and investigation of crimes and terrorism, in compliance with privacy and national security laws.”
Since many in the national security arena and their allies assert that Executive Order 12333 gives intelligence agencies unlimited authority, the restraining influence of privacy and national security laws is questionable. And true to form, the FBI documents show that the agency did, in fact, give serious consideration to using Pegasus for U.S. criminal cases.
Why the turnaround? It was around that time that a critical mass of Pegasus stories – with no lack of murders, imprisonments, and political scandals – emerged in the world press. That is surely why the FBI left this hot potato in the microwave. One wonders, however, what to make of the attempt of a U.S. military contractor, L3Harris, to purchase NSO earlier this year. If the FBI was out of the picture, was this aborted acquisition an effort by the CIA to lock down NSO and its spyware menagerie? And if the CIA has found some other route to possess this technology – and to be frank, they’d be guilty of malfeasance if they didn’t – is the agency staying within its no-domestic-spying guardrails in deploying this invasive technology? Recent revelations of bulk surveillance by the CIA do not inspire confidence.
Nor can we discount what the FBI might do in the future. Despite the FBI’s decision to avoid using the technology, Mazzetti and Bergman report that an FBI legal brief filed in October stated: “Just because the FBI ultimately decided not to deploy the tool in support of criminal investigations does not mean it would not test, evaluate and potentially deploy other similar tools for gaining access to encrypted communications used by criminals.”
No doubt, targeted use of such technologies would catch many fentanyl dealers, human traffickers, and spies. But as Lucius Fox asks, “at what cost?”
Thomas Germain on Gizmodo has an alarming piece on research from two app developers, Tommy Mysk and Talal Haj Bakry, who claim that despite Apple’s explicit promise to allow you to turn off all tracking, Apple still tracks you.
Apple advertises the ability to turn off iPhone tracking in its privacy settings. But according to Mysk and Bakry, after tracking is turned off, Apple continues to collect data from many iPhone apps, including the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no discernible effect on Apple’s data collection.
“Opting-out or switching the personalization options off did not reduce the amount of detailed analytics that the app was sending,” Mysk told Gizmodo. “I switched all the possible options off, namely personalized ads, personalized recommendations, and sharing usage data and analytics.” Apple still continued to track.
What could be at stake for consumers? Germain wrote:
“In the App Store, for example, the fact that you’re looking at apps related to mental health, addiction, sexual orientation, and religion can reveal things you might not want to be sent to corporate servers.”
Germain concedes that Apple may not be using this information, but it is impossible to know since Apple has not responded. Perhaps a hint of an answer was foreshadowed by Craig Federighi, Senior Vice President of software engineering, when he recently told The Wall Street Journal that “quality advertising and product privacy could coexist.”
That is far too vague to explain how Apple’s explicit privacy promises work in the real world. PPSA calls on Apple to provide a full explanation of how it treats digital privacy.
“The First Amendment bars the government from deciding for us what is true or false, online or anywhere,” the ACLU recently tweeted. “Our government can’t use private pressure to get around our constitutional rights.”
The ACLU responded to a report from Ken Klippenstein and Lee Fang of The Intercept news organization that the federal government works in secret to suggest content that social media organizations should suppress. The Intercept claims that years of internal DHS memos, emails, and documents, as well as a confidential source within the FBI, reveal the extent to which the government works secretly with social media executives in squashing content.
After a few days of cool appraisal of this story, we have to say we have more questions than answers. It is fair to note that The Intercept has had its share of journalistic controversies with questions raised regarding the validity of its reporting. It also appears that this report is significantly sourced on a lawsuit filed by the Missouri Attorney General, a Republican candidate for the U.S. Senate. We’ve also sounded out experts in this space who speculate that much of the content government is flagging is probably illegal content, such as Child Sexual Abuse Materials.
There is also reason for the government to track and report to websites state-sponsored propaganda, malicious disinformation, or use of a platform by individuals or groups that may be planning violent acts. If Russian hackers promote a fiction about Ukrainians committing atrocities with U.S. weapons – or if a geofenced alert is posted that due to the threat of inclement weather, an election has been postponed – there is good reason for officials to act.
The government is in possession of information derived from its domestic or foreign information-gathering that websites don't have, and the timely provision of that information to websites could be helpful in removing content that poses a threat to public safety, endangers children, or is otherwise inappropriate for social media sharing. It would certainly be interesting to know whether the social media companies find the government’s information-sharing efforts to be helpful or whether they feel pressured.
The undeniable problem here is the secret nature of this program. Why did we have to find out about it from an investigative report? The insidious potential of this program is that we will never know when information has been suppressed, much less if the reason for the government’s concern was valid.
The Intercept reports that the meeting minutes appended to Missouri Attorney General Eric Schmitt’s lawsuit include discussions that have “ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.”
In a meeting in March, one FBI official reportedly told senior executives from Twitter and JPMorgan Chase “we need a media infrastructure that is held accountable.” Does she mean a media secretly accountable to the government? Klippenstein and Fang report a formalized process for government officials to directly flag content on Facebook or Instagram and request that it be suppressed. The Intercept included the link to Facebook’s “content request system” that visitors with law enforcement or government email addresses can access.
The Intercept reports that the purpose of this program is to remove misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent). According to The Intercept, the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”
The Intercept also reports that “disinformation” is not clearly defined in these government documents. Such a secret government program may include information gathered from activities that violate the Fourth Amendment prohibition on accessing personal information without a warrant. It would also be, to amplify the spirited words of the ACLU, a Mack Truck-sized flattening of the First Amendment.
One cannot ignore the potential that the government is doing more than helpfully sharing information with websites along with a suggestion that it be taken down. Is the information-sharing accompanied by pressure exerted by the government on the website? From the information now available, we simply don't know.
Bottom line: if these allegations are true, the U.S. government in some cases may be secretly determining what is and what is not truth, and on that basis may be quietly working with large social media companies behind the scenes to effect the removal of content. So, the possible origin of COVID-19 in a Chinese laboratory was deemed suppressible, until U.S. intelligence agencies reversed course and determined that a man-made origin of the virus is, in fact, a possibility. And the U.S. withdrawal from Afghanistan? Is our government suppressing content that suggests that it was somehow a less-than-stellar example of American power in action?
If these allegations are true, Jonathan Turley, George Washington University professor of law, is correct in calling this “censorship by surrogate.”
This program, which Klippenstein and Fang report is becoming ever more central to the mission of DHS and other agencies, is not without its wins. “A 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent – either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen.” On the other hand, the Stanford research shows that in 65 percent of the cases websites exercised independent judgment to maintain the content unmoderated notwithstanding the government's suggestion.
After mulling this over for a few days, we propose the following:
There is no reason why the government cannot stand behind its finding that a given post is the product of, say, Russian or Chinese disinformation, or a call to violence, or some other explicit danger to public safety. But we need to know if the most powerful media in existence is subject to editorial influence from the secret preferences of bureaucrats and politicians. If so, this secret content moderation must end immediately or be radically overhauled.
Evan Greer and Anna Bonesteel of Fight for the Future have an impassioned piece on NBC News THINK on the effects of near-ubiquitous doorbell cameras like Amazon’s Ring, Google’s Nest, and Wyze. Reading their piece feels like being the proverbial frog that finally realizes it is already in boiling water.
Greer and Bonesteel write:
“Devices like Ring and the apps associated with them are made to keep us on constant alert. They ping us with notifications, demanding our attention, and offer ‘infinite scroll’ like Facebook and Instagram, but for neighborhood crime. These devices make watching one another constantly feel acceptable, expected and even addicting.”
As we’ve reported, Amazon encourages customers to share images with about 2,000 police and fire departments. Greer and Bonesteel write that Amazon is “effectively giving police an easy push-button portal to request video from Ring camera owners in exchange for officers’ help in marketing Amazon products.”
They add that “Ring’s lax security practices in the past have allowed stalkers and hackers to break into people’s cameras … This dystopian vision of a private police camera on every home would have been unthinkable a generation ago.” We would add to that observation the disturbing fact that general counsels of federal law enforcement and intelligence agencies assert a right to purchase Americans’ personal data from digital data brokers without a warrant.
In China, the erection of universal surveillance is the result of a deliberate campaign by the Chinese Communist Party to watch and listen in on everyone. In the United States, a similar Panopticon is being erected, piece by piece, out of a desire to gain market share for doorbell cameras, lawn furniture, and home fitness equipment sold online. But the destination is beginning to look the same.
Carolyn Iodice of Clause 40 Foundation has penned a brilliant analysis and history of the Foreign Intelligence Surveillance Act (FISA), a worldly examination of how that law operates in practice. Briefly put, FISA is a statute that is often treated by the government not as law that must be obeyed, but as a potpourri to mask the stench of illicit surveillance.
Iodice begins her paper with a report issued earlier this year by Sens. Ron Wyden and Martin Heinrich that the CIA has secretly gathered Americans’ records as part of a warrantless bulk data collection program. This program, the senators noted, works “entirely outside the statutory framework that Congress and the public believe govern this collection, and without any of the judicial, congressional, or even executive branch oversight that comes with FISA collection.”
To enter the world of FISA is to enter Alice’s Wonderland where agency general counsels talk backwards and agency chiefs assert six impossible things before breakfast. Iodice makes a bold statement in the beginning that the rest of her paper validates:
“In the context of FISA, the government has succeeded in violating the law by using implausible interpretations of statutory language and even by evading the statute entirely. Of course, it’s not uncommon for the executive branch to overstep its statutory authorities, but if FISA is understood to be legally binding on the government’s surveillance activities in the same way that, for instance, the EPA’s authority to set national air quality standards is granted and defined by the Clean Air Act, then the flagrancy and frequency of the government’s unlawful surveillance activities is puzzling. If FISA—a law duly passed by Congress and signed by the president—sets legal rules for surveillance programs, why does the government keep flouting them?”
Unlike with the Clean Air Act, she explains, with FISA there is no agreement on where the lines exist between legislative, judicial, and executive authority. Worse still, there is no agreement on how far executive authority can be extended when national security is invoked. The need for the Fourth Amendment’s requirement of a probable cause warrant in criminal cases is clear, even if that principle is now often observed in the breach. But the Supreme Court has not supplied much guidance on how the Fourth Amendment applies to operations within the United States that are conducted for intelligence purposes.
The rest of Iodice’s paper tracks the steady weakening of FISA in the post-9/11 world.
This paper is a timely primer for what promises to be a key surveillance debate: By the end of next year, FISA’s Section 702 must be reauthorized or expire. Section 702 grants the intelligence community the authority to surveil foreign intelligence targets. While Fourth Amendment protections prevent Americans from being targeted, the law allows the communications of Americans to get swept up in “incidental” collection. This loophole has been extended to whatever width or shape the government needs to do whatever it wants.
Iodice concludes that if Congress reasserted its authority, or the courts resolved the Fourth Amendment and separation-of-powers issues in FISA, then FISA would operate more like a statute should. In the meantime, civil liberties champions in Congress need to be deadly serious about holding up reauthorization of Section 702 if demands for serious FISA reforms are not met.
Last year, we reported on Apple’s plan to scan users’ devices for CSAM, or Child Sexual Abuse Material. We reported that such a content-flagging system was not just invasive of people’s privacy, but could open a backdoor for China to use the technology to persecute dissidents and spy on Americans.
Throughout the privacy discussion, the European Union has generally led the world in pushing for higher standards for digital privacy, often challenging the United States to follow its lead.
Now, in the necessary drive to detect and prosecute those who abuse children, the EU Commission is driving a proposal that could result in the scanning of every private message, photo, and video to detect CSAM. It is also proposing using software to seek out adults engaged in “grooming” children to be victimized.
Every decent person agrees that we need to be aggressive in rooting out and prosecuting adults who exploit children. What could go wrong with the EU proposal?
Joe Mullin of the Electronic Frontier Foundation reports that the Commission wants to open the intimate data of our digital lives to review by government-approved scanning software, with the results checked against databases that maintain images of child abuse. Private digital conversations, even for Americans, will no longer be truly private.
Problem: The detection software produces far more false positives than genuine matches.
Mullin writes: “Once the EU votes to start running the software on billions more messages, it will lead to millions of more false accusations. These false accusations get forwarded on to law enforcement agencies. At best, they’re wasteful; they also have potential to produce real-world suffering … That is why we shouldn’t waste efforts on actions that are ineffectual and even harmful.”
We would add that PPSA is concerned that technology developed for an admirable purpose is technology that will soon be used for any purpose.