FBI PSA: The Safe Bet Is to Assume It’s Fake

Remember when the only person you worried might fall prey to scammers was your favorite aunt, who had only her Welsh Corgi at home with her during the day? “Now, Trixie,” you’d say, “don’t agree to anything and always call me first.”

Those days are over. Forget your late aunt Trixie. Worry about yourself. Imagine receiving a phone call from a close friend, a family member, or even your spouse that is actually an utterly convincing AI-generated version of that person’s voice, urgently begging you to provide a credit card number to spring her out of a filthy jail in Veracruz or to pay an emergency room hospital bill.

The age of AI augurs many things, we are told. But while we’re waiting for flying taxis and the end of mundane tasks, get ready to question the veracity of every form of media you encounter, be it text, image, audio, or video. In what is sure to be the first of many such public service announcements, the FBI is warning that the era of AI-powered fraud hasn’t just dawned; it is fully upon us.

The theme of the FBI’s announcement is “believability.” It used to be that scams were easy to spot: the writing was laughably bad, or the video and audio were noticeably “off” or even a little creepy, a phenomenon known as the uncanny valley effect. The newfound power of generative AI to produce realistic versions of traditional media has put an end to such reliable tells. Anyone who thinks they’re immune to such trickery misunderstands the nature of generative AI. Consider:
Whenever a friend or family member sends a video that clearly shows them in need of help (stranded on vacation, perhaps, or having had a wallet stolen at a nightclub), don’t automatically assume it’s real, no matter how convincing it looks. And thanks to generative AI’s “vocal cloning” ability, a straight-up phone call is even easier to fake.

So, what can we do? The FBI advises: agree on a secret password, phrase, or story that only you and your family members know. Do the same with your friend groups. Then stick to your guns. No matter how close your heartstrings come to breaking, if the caller doesn’t know the secret answer, it’s a scam-in-waiting.

The FBI also recommends limiting “online content of your image or voice” and making social media accounts private. Fraudsters scrape the online world for these artifacts to produce their deepfake masterpieces. All generative AI needs to create a convincing representation of you is a few seconds of audio or video and a handful of images.

Rest in peace, Aunt Trixie. We miss her and the good old days, when all we had to do was warn her not to give her personal information to a caller who said he was from the Corgi Rescue Fund. Today, if an AI scamster wanted to, he could have Aunt Trixie call you from the grave, needing money, of course.

AI Inventor Muses About the Authoritarian Potential of General AI

Robert Oppenheimer was famously conflicted about his work on the atomic bomb, as was Alfred Nobel after inventing dynamite. One supposes any rational non-sociopath would be. But imagine if Alexander Graham Bell had cast similar aspersions on the widespread use of the telephone, or Edison on electrification. When Morse transmitted “What hath God wrought?” as the first official telegraph message, it was meant as an expression of wonder, even optimism. We expect weapons of destruction to come with warnings. By contrast, technological revolutions that improved human existence have rarely come with dire predictions, much less from their inventors.

So it’s a bit jarring when it happens. And with artificial intelligence, it’s happening. Geoffrey Hinton, the “godfather of AI,” quit Google after warning about its dangers and later told his Nobel Prize audience, “It will be comparable with the Industrial Revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”

Now enter Sam Altman, the man whose company, OpenAI, brought artificial intelligence into the mainstream. In a blog post published this week, Altman opened with his own paraphrase of “but this feels different.” Hinton and Altman are both referring to what many consider the inevitable turning point in the coming AI revolution: the advent of artificial general intelligence, or AGI. In short, this will be the moment when almost every computer-based system we encounter is as smart as us, or smarter.

“We *never* want to be reckless,” Altman writes in the blog (emphasis added). “We believe that trending more towards individual empowerment is important,” he continues; “the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.” To be fair, OpenAI was founded with the goal of preventing AGI from getting out of hand, so perhaps his somewhat conflicted good cop/bad cop perspective is to be expected.
Yet that hasn’t stopped Altman from taking what might someday be seen as the “self-fulfilling prophecy” step on our road to perpetual surveillance. Altman is partnering with Oracle and others in a joint venture with the U.S. government to build an AI infrastructure system, the Stargate Project. Two weeks after the venture was announced, his blog acknowledges the need for a “balance between safety and individual empowerment that will require trade-offs.”

What to make of all this? Sam Altman is a capitalist writ large. He believes in the American trinity of money, freedom, and individualism. So when he feels compelled to ponder the looming potential of a technocratic authoritarian superstate arising from his brainchild, he is to be believed.

Altman dances ever so deftly around the potential dangers of mass surveillance in the hands of an AGI-powered authoritarian state, but the worry is there. AI is the glue that makes a surveillance state work. This is already happening in the People’s Republic of China, where AI drinks in the torrent of data from a national facial recognition system and total social media surveillance, following netizens and flagging any wayward expressions of belief or questioning of orthodoxy. Altman is fundamentally worried that the technology he’s helping to unleash on the world could prove to be the unraveling of individual liberty, and of democracy itself.

One last thing worth noting: Sam Altman is an apocalypse prepper. “I try not to think about it too much,” he told The New Yorker in 2016. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” Just imagine what he isn’t telling us.

A line often attributed to George Orwell holds that in a time of deceit, telling the truth is a revolutionary act.
Revolutionary acts of truth-telling are becoming progressively more dangerous around the world. This is especially true as autocratic countries and weak democracies purchase AI software from China that weaves together surveillance technologies to comprehensively track individuals, following them as they meet acquaintances and share information.

A piece by Abi Olvera posted by the Bulletin of the Atomic Scientists describes this growing use of AI to surveil populations. Olvera reports that by 2019, 56 of 176 countries were already using artificial intelligence to weave together surveillance data streams. These systems are increasingly being used to analyze the actions of crowds, track individuals across camera views, and pierce the masks or scramblers intended to disguise faces. The only impediment to effective use of this technology is the frequent, Brazil-like incompetence of domestic intelligence agencies.

Olvera writes: “Among other things, frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins, thereby bolstering their authority. These systems offer cash-strapped autocracies and weak democracies the deterrent power of a police or military patrol without needing to pay for, or manage, a patrol force …”

Olvera quotes AI surveillance expert Martin Beraja: AI can enable autocracies to “end up looking less violent because they have better technology for chilling unrest before it happens.”

Olivia Solon of Bloomberg reports on the use of biometric identifiers in Africa, which the United Nations and the World Bank regard as a quick and easy way to establish identities where licenses, passports, and other ID cards are hard to come by. But in Uganda, Solon reports, President Yoweri Museveni, in power for 40 years, is using this system to track his critics and political opponents. Adopted to catch criminals, biometrics is also being used to criminalize Ugandan dissidents and rival politicians for “misuse of social media” and sharing “malicious information.”

The United States needs to lead by example. As facial recognition and other such systems grow in ubiquity, Congress and the states need to demonstrate our ability to impose limits on public surveillance and to set legal guardrails for the use of the sensitive information these systems generate.

David Pierce has an insightful piece in The Verge demonstrating the latest example of why every improvement in online technology leads to yet another privacy disaster.
He writes about an experiment by OpenAI to make ChatGPT “feel a little more personal and a little smarter.” The company is now allowing some users to add memory to personalize the AI chatbot. The result? Pierce writes that “the idea of ChatGPT ‘knowing’ users is both cool and creepy.”

OpenAI says users will remain in control of ChatGPT’s memory and can tell it to forget something it knows about them. It won’t remember sensitive topics like your health issues. And it has a temporary chat mode without memory. Credit goes to OpenAI for anticipating the privacy implications of a new technology rather than blundering ahead, like so many other technologists, to see what breaks.

OpenAI’s personal memory experiment is just another sign of how intimate technology is becoming. The ultimate example of online AI intimacy is, of course, the so-called “AI girlfriend or boyfriend,” the artificial romantic partner. Jen Caltrider of Mozilla’s Privacy Not Included team told Wired that romantic chatbots, some owned by companies that can’t even be located, “push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” When the researchers tested one such app, they found it “sent out 24,354 ad trackers within one minute of use.” We would add that data from those trackers could be sold to the FBI, the IRS, or perhaps a foreign government.

The first wave of people whose lives will be ruined by AI chatbots will be the lonely and the vulnerable. It is only a matter of time before sophisticated chatbots become ubiquitous sidekicks, as portrayed in so much near-term science fiction. It will soon become all too easy to trust a friendly and helpful voice without realizing the many eyes and ears behind it.

The first deepfake of this long-anticipated “AI election” came when a synthetic Joe Biden made robocalls to New Hampshire Democrats urging them not to vote in that state’s presidential primary. “It’s important that you save your vote for the November election,” fake Biden told Democrats. Whoever crafted this trick expected voters to believe that a primary vote would somehow deplete a storehouse of general-election votes.
Around the same time, someone posted AI-generated fake sexual images of pop icon Taylor Swift, prompting calls for laws to curb and punish the use of this technology for harassment. Other artists are calling for protection not of their visage but of their intellectual property, as paintings and photographs are expropriated as grist for AI’s mill.

Members of Congress and state legislators are racing to pass laws making such tricks and appropriations a crime. It certainly makes sense to criminalize the cheating of voters by making candidates appear to say and do things they would never say or do. But sweeping legislation also poses dangers to the First Amendment rights of Americans, including crackdowns on what is clearly satire, such as an obvious joke image of a politician in the inset behind the “Weekend Update” anchors on Saturday Night Live.

Such caution is needed as pressure for legislative action grows with the proliferation of deepfakes. Even among non-celebrities, this technology is used to create sexually abusive material, commit fraud, and harass individuals. According to Control AI, a group concerned about the current trajectory of artificial intelligence, the technology is now widely available. All someone needs to create a compelling deepfake is a photo of you or a short recording of your voice, which most of us have already very helpfully posted online.

Control AI claims that an overwhelming 96 percent of deepfake videos are sexually abusive. And they are becoming more common: 13 times as many deepfakes were created in 2023 as in 2022. Meanwhile, only 42 percent of Americans even know what a deepfake is.

The day is fast approaching when anyone can create a convincing fake sex tape of a political candidate, or a presidential candidate announcing the suspension of his campaign on the eve of an election, or a fake video of a military general declaring martial law. A few weeks ago, a convincing fake video of the Louvre museum in Paris on fire went viral, alarming people around the world. With two billion people poised to vote in major elections around the globe this year, deepfake technology is positioned to brew distrust and wreak havoc.

While the Biden campaign has the resources to quickly refute the endless stream of fake photos and videos, the average American does not. A fake sex tape of a work colleague could burn through the internet before she has a chance to refute it. An AI-generated voice recording could be used to commit fraud, and even a fake photo could do immense damage.

And if you thought forcing AI to include a watermark in whatever it produces would solve the problem, think again. Control AI points out that it is simply impossible to create watermarks that cannot be easily removed by AI. Many strategies to stop deepfakes are about as effective as trying to keep kids off their parents’ computer.

It is also unrealistic to believe we can slow down the evolution of artificial intelligence, as Control AI proposes to do. America’s enemies can certainly be counted on to use AI to their advantage. Putting AI behind a government lock and key would stifle the massive innovation that AI promises to bring and hand a technological edge to Russia and China, while giving the federal government sole use of the technology. That, too, poses serious problems for surveillance and oversight. Given the First and Fourth Amendment implications, Congress should not act in haste.
Congress should start the long and difficult conversation about how best to contain AI’s excesses while benefiting from its promise for human health and wealth creation. Congress should continue to hold hearings and investigate solutions. Meanwhile, the best guard against AI fakery is a public that is already deeply skeptical of the information it encounters online. The more Americans learn what a deepfake is, the less impact these images will have.