A line often attributed to George Orwell holds that in a time of universal deceit, telling the truth is a revolutionary act.
Revolutionary acts of truth-telling are becoming progressively more dangerous around the world. This is especially true as autocratic countries and weak democracies purchase AI software from China to weave together surveillance technology to comprehensively track individuals, following them as they meet acquaintances and share information. A piece by Abi Olvera for the Bulletin of the Atomic Scientists describes this growing use of AI to surveil populations.

Olvera reports that by 2019, 56 out of 176 countries were already using artificial intelligence to weave together surveillance data streams. These systems are increasingly being used to analyze the behavior of crowds, track individuals across camera views, and pierce the masks or scramblers people use to disguise their faces. The only impediment to the effective use of this technology is the frequent incompetence of domestic intelligence agencies, reminiscent of the film Brazil.

Olvera writes: “Among other things, frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins, thereby bolstering their authority. These systems offer cash-strapped autocracies and weak democracies the deterrent power of a police or military patrol without needing to pay for, or manage, a patrol force …” Olvera also quotes AI surveillance expert Martin Beraja, who observes that AI can enable autocracies to “end up looking less violent because they have better technology for chilling unrest before it happens.”

Olivia Solon of Bloomberg reports on the use of biometric identifiers in Africa, which the United Nations and World Bank regard as a quick and easy way to establish identities where licenses, passports, and other ID cards are hard to come by. But in Uganda, Solon reports, President Yoweri Museveni – in power for nearly four decades – is using this system to track critics and political opponents of his rule. Deployed to catch criminals, biometrics is also being used to criminalize Ugandan dissidents and rival politicians for “misuse of social media” and sharing “malicious information.”

The United States needs to lead by example. As facial recognition and other such systems grow in ubiquity, Congress and the states need to demonstrate that we can impose limits on public surveillance and set legal guardrails for the use of the sensitive information these systems generate.

David Pierce has an insightful piece in The Verge demonstrating the latest example of why every improvement in online technology leads to yet another privacy disaster.
He writes about an experiment by OpenAI to make ChatGPT “feel a little more personal and a little smarter.” The company is now allowing some users to add memory to personalize this AI chatbot. Result? Pierce writes that “the idea of ChatGPT ‘knowing’ users is both cool and creepy.”

OpenAI says users will remain in control of ChatGPT’s memory and will be able to tell it to forget something it knows about them. It won’t remember sensitive topics like health issues. And it offers a temporary chat mode without memory. Credit goes to OpenAI for anticipating the privacy implications of a new technology, rather than blundering ahead, like so many other technologists, to see what breaks.

OpenAI’s personal memory experiment is just another sign of how intimate technology is becoming. The ultimate example of online AI intimacy is, of course, the so-called “AI girlfriend or boyfriend” – the artificial romantic partner. Jen Caltrider of Mozilla’s Privacy Not Included team told Wired that romantic chatbots, some owned by companies that can’t even be located, “push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” When Mozilla’s researchers tested one such app, they found it “sent out 24,354 ad trackers within one minute of use.” We would add that data from these ads could be sold to the FBI, the IRS, or perhaps a foreign government.

The first wave of people whose lives will be ruined by AI chatbots will be the lonely and the vulnerable. It is only a matter of time before sophisticated chatbots become ubiquitous sidekicks, as portrayed in so much near-term science fiction. It will soon become all too easy to trust a friendly and helpful voice without realizing the many eyes and ears behind it.

The first deepfake of this long-anticipated “AI election” came when a synthetic Joe Biden made robocalls to New Hampshire Democrats urging them not to vote in the state’s presidential primary. “It’s important that you save your vote for the November election,” fake Biden told Democrats. Whoever crafted this trick expected voters to believe that a primary vote would somehow deplete a storehouse of general-election votes.
Around the same time, someone posted AI-generated fake sexual images of pop icon Taylor Swift, prompting calls for laws to curb and punish the use of this technology for harassment. Other artists are calling for protection not of their visage but of their intellectual property, as paintings and photographs are expropriated as grist for AI’s mill.

Members of Congress and state legislators are racing to pass laws making such tricks and appropriations a crime. It certainly makes sense to criminalize the cheating of voters by making candidates appear to say and do things they would never say or do. But sweeping legislation also poses dangers to the First Amendment rights of Americans, including crackdowns on what is clearly satire – such as a joke image of a politician in the inset behind the “Weekend Update” anchors of Saturday Night Live. Such caution is needed as pressure for legislative action grows with the proliferation of deepfakes.

Even among non-celebrities, this technology is used to create sexually abusive material, commit fraud, and harass individuals. According to Control AI, a group concerned about the current trajectory of artificial intelligence, the technology is now widely available: all someone needs to create a compelling deepfake is a photo of you or a short recording of your voice, which most of us have already very helpfully posted online. Control AI claims that an overwhelming 96 percent of deepfake videos are sexually abusive. And they are becoming more common – 13 times as many deepfakes were created in 2023 as in 2022. Meanwhile, only 42 percent of Americans even know what a deepfake is.

The day is fast approaching when anyone can create a convincing fake sex tape of a political candidate, a presidential candidate announcing the suspension of his campaign on the eve of an election, or a fake video of a military general declaring martial law. A few weeks ago, a convincing fake video of the Louvre museum in Paris on fire went viral, alarming people around the world. With two billion people poised to vote in major elections around the globe this year, deepfake technology is positioned to brew distrust and wreak havoc.

While the Biden campaign has the resources to quickly refute the endless stream of fake photos and videos, the average American does not. A fake sex tape of a work colleague could burn through the internet before she has a chance to refute it. An AI-generated voice recording could be used to commit fraud, while even a fake photo could do immense damage. And if you think forcing AI to include a watermark in whatever it produces will solve the problem, think again. Control AI points out that it is simply impossible to create watermarks that AI itself cannot easily remove. Many strategies to stop deepfakes are about as effective as trying to keep kids off their parents’ computer.

Still, it is unrealistic to believe we can slow down the evolution of artificial intelligence, as Control AI proposes to do. America’s enemies can certainly be counted on to use AI to their advantage. Putting AI behind a government lock and key would stifle the massive innovation AI promises to bring, hand a technological edge to Russia and China, and give the federal government sole use of the technology – which itself poses serious problems for surveillance and oversight. Given the First and Fourth Amendment implications, Congress should not act in haste.
Congress should start the long and difficult conversation about how best to contain AI’s excesses while still benefiting from its promise for human health and wealth creation. It should continue to hold hearings and investigate solutions. Meanwhile, the best guard against AI fakery is a public that is already deeply skeptical of information encountered online. As more Americans learn what a deepfake is, these images will lose their impact.