In the Terminator movies, the grand finale is often a robot-on-robot fight to the death. That is happening in real life as well – except it is not always the good robot that wins. Artificial intelligence is the most powerful digital tool ever created. Now a disturbing breakthrough in criminal enterprise has emerged: using one AI system to hack another. At stake is the security of nearly everything – personal identities, bank accounts, and perhaps soon every commercial and government activity secured by blockchain, not to mention trillions of dollars of value stored in cryptocurrency.

Nilesh Christopher of The Los Angeles Times reports that Gambit, an Israeli cybersecurity firm, revealed last month that hackers used Anthropic’s Claude AI system to steal 150 gigabytes of data from Mexican government computers. The heist exposed the personal information associated with roughly 195 million identities (some of them duplicates) drawn from nine Mexican agencies – including tax records, vehicle registrations, birth certificates, and property ownership data.

Claude is designed to resist exactly this kind of abuse. Anthropic, like other AI companies, maintains teams dedicated to stress-testing its chatbots and probing them for weaknesses. But AI can do almost anything faster and better – including hacking. Gambit found that the attackers were able to “jailbreak” Claude with the help of another AI: OpenAI’s ChatGPT. The second system reportedly analyzed Claude and helped reveal the credentials needed to weaponize it.

This development threatens the foundations of emerging AI-driven and blockchain-based systems. Curtis Simpson told Christopher that because AI “doesn’t sleep … it collapses the cost of sophistication to near zero.” In other words, cybercrime no longer requires a digital army of hackers hunched over laptops in Shanghai or Tirana, fueled by endless supplies of Club-Mate and Cheetos. With the right prompts, AI can attack a problem relentlessly – probing, testing, and refining its methods until it succeeds.

And the target surface is growing. With the consolidation of Americans’ personal data from dozens of federal agencies under the Trump administration, AI-enabled hackers may soon be able to dip into one enormous resource instead of many smaller ones. As blockchain systems spread across finance and government, expect AI tools to become not just powerful allies but dangerous adversaries to one another.

All of this suggests a growing need for startups with deeper expertise in the cyberdefense of AI. It also suggests that, for all the contributions of the Ph.D. philosopher Anthropic hired to instill a sense of ethics in Claude, gaps remain. Companies might want to look to the world of science fiction and devise commandments as strict as Isaac Asimov’s “Three Laws of Robotics,” designed to prevent robots from harming humans. Only in this case, such rules would prevent AI from harming other AI systems – and the rest of us in the process.

The media reported on the drama of the Pentagon’s AI contracts as a horse race: Anthropic tried to limit what the War Department could do with the company’s Claude AI product. The administration subsequently rescinded all government contracts with the company. OpenAI offered its products as the alternative and won the day.
But beneath this drama lies a deeper and more dangerous reality: In the absence of meaningful guardrails, the AI tech of any company can be used for surveillance and – if combined with data collected under Section 702 of the Foreign Intelligence Surveillance Act (FISA) – could allow government employees across the federal bureaucracy to run searches on Americans’ private communications. Such AI-powered surveillance could extend far beyond the Department of War’s use cases and even the Justice Department’s FBI investigations. Government AI-enabled mass surveillance of the domestic population would put the privacy and civil liberties of every American at risk.
The danger of AI surveillance in a government that shares data between agencies should prompt Congress to strengthen Fourth Amendment privacy protections. With such a vast datascape available to the world’s most powerful government – where many existing restrictions have already been weakened – we otherwise risk the irrevocable loss of personal privacy and the rise of a permanent surveillance state.

We need to come to terms with the fact that AI makes rummaging through our private lives and personal histories easier and faster than anyone could have imagined even a few years ago. Americans’ communications could become permanently accessible to the prying eyes of government agents in almost any agency with a whim (or a political directive) to pursue.

It wasn’t supposed to be this way. AI was supposed to have guardrails. So was Section 702, which Congress enacted to enable the surveillance of foreign threats on foreign soil but which has instead been used to search the private communications of Americans without a warrant.

The Reforming Intelligence and Securing America Act (RISAA) was a noble attempt to rein in the misuse of Section 702 as a domestic spy tool. Its reforms included oversight of and restrictions on FBI searches involving people inside the United States. It implemented rules for queries involving high-profile groups or individuals. And it established training and accountability measures while enhancing oversight of the two secret courts FISA created. These were important reforms, but they were weakened by last-minute changes to the bill.

When Section 702 comes up for renewal next month – this time in the context of an AI juggernaut – it may well be our last chance to protect our freedoms while protecting national security.

The confidentiality of attorney-client conversations may be a cornerstone of American law, but it has some cracks. One defendant, Bradley Heppner, on trial for securities fraud and other crimes related to his role as the former CEO of Beneficient, learned the hard way that the privilege does not extend to legal questions put to AI chatbots and virtual assistants. Federal Judge Jed Rakoff of the Southern District of New York ruled on Tuesday that 31 documents Heppner generated about his case with Anthropic’s Claude – and shared with his defense attorneys – are not protected by attorney-client privilege. In an analysis by Moish Peltz and Elizabeth E. Schlissel of the law firm Falcon Rappaport & Berkman, the reasons for Judge Rakoff’s decision include:
These are persuasive points about this particular case. Still, the ruling underscores a deeper concern: the ready access that the FBI and the judicial process have to all of our financial, legal, and highly personal data held by third parties.
The court order requiring OpenAI to preserve users’ ChatGPT logs in The New York Times’ copyright lawsuit even swept in queries that customers believed they had deleted. As we noted at the time, “virtually anything asked – no matter how personal – is a permanent legal record that lawyers in a nasty divorce or commercial dispute or a government investigation could pry open with the right legal tools.” Privacy attorney Jay Edelson wrote in The Hill that this is “a mass privacy violation,” asking: “Could Apple preserve every photo taken with an iPhone over one copyright lawsuit? Could Google save a log of every American’s searches over a single business dispute?”

In a similar way, does the Heppner precedent risk exposing the private reasoning of anyone who has ever asked a chatbot a legal question? These questions point to the urgent need for guardrails on access to third-party data. At a minimum, consumers deserve clearer warnings, tighter limits on data retention, and stronger legal standards before personal queries are swept into criminal trials or litigation.

A more futuristic concern is the likelihood that AI will one day sit at the counsel’s table. Of course, an attorney will be able to consult an AI under the privilege. But as AI agents specializing in the law earn a credible claim to being part of a legal team, will attorney-client privilege evolve to include client conversations with that AI? Or will consultations between the client and the team’s AI agent remain a discoverable record? In the meantime, AI and the cloud should come with their own Miranda warning: Anything you type can and will be used against you in a court of law.

Are you having a good day? I certainly am! When I got to work this morning I could barely contain my excitement at seeing such a full inbox of wonderful things to do! I swear, at times it seems almost criminal to accept pay for doing work I love so much! [Smile in the direction of the workplace surveillance camera.] Anyway, I’d love to join you in the breakroom, but I really can’t wait to get back to my workstation! Toodles!

Artificial intelligence is getting better at reading human emotion. Commercial systems already perform “sentiment analysis,” reading the emotional tone of written communications – a valuable tool for HR departments, advertisers, and customer-engagement consultants. The next bold step is already at the threshold: AI that can read emotions in our voices, the fleeting micro-expressions on our faces, and our body language. This technology will certainly expand into policing, hiring, and education. Are you acting guilty? Did you hide something in your job interview? Are you bored by the teacher’s lecture?

As biometric corridors become commonplace in U.S. airports, AI is being tested to read facial expressions and body language that could identify potential terrorists – based on the tidy theory that people who plan to blow themselves up at 35,000 feet tend to be nervous. But so are people who are running late for a connection, who just argued with a spouse, who just got fired, or who are jet-lagged.

Emine Akar, in a blog for the Institute for the Future of Work, enumerated the potential pitfalls of emotional surveillance: “Emotions are not simply reflexes. They are complex, contextual, and culturally shaped experiences. A tear can mean grief, joy, manipulation, or even boredom.”

The other risk is that AI, which improves by the day, will read our emotions all too well. Pervasive emotional surveillance may force us to put on a happy face at work, at school, and in the airport. To frown may be to risk detention or delay.
We could even risk committing “facecrime,” to name just one of the clever neologisms of George Orwell’s 1984. That novel’s protagonist, Winston Smith, was well acquainted with facecrime. One had always to wear an expression of love when watching Big Brother on the telescreen, and an expression of rage during the mandatory Two Minutes Hate. Smith knew that the “smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide.”

When we allow machines to read our emotions, we risk giving them power over us. “The danger here is not just that machines fail to understand us,” Akar wrote. “It’s that they may begin to discipline us – nudging our expressions, altering our behavior, shaping our emotional lives in invisible ways.” This kind of emotional manipulation was well captured in the movie Her, in which a man falls in love with an AI (not hard to do when the voice belongs to Scarlett Johansson).

Pope Leo XIV was not being prescient – merely current – when he warned us over the weekend about getting involved with “overly affectionate” chatbots, lest they become “hidden architects of our emotional states.” We need to be even more concerned about the implications of emotionless minds that can read, exploit, and manipulate our emotions. The European Union’s AI Act – which restricts emotion-recognition systems in schools, workplaces, and other sensitive settings – offers one model. It is time for Congress, the states, and technology leaders to put proper guardrails on the emotional surveillance of Americans as well.

Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations. But not anymore. Now, they write: “Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.” Here's what else their snapshot revealed:
When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid.

If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say. How long such data is kept – and how it might be used once those children become adults – is anyone’s guess. Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots.) Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year.

Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:
And as that last bullet suggests, don’t even think about privacy: “These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” the researchers warn. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable.

Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko).

Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more. Of course, it’s now a given that every AI toy tested framed itself as the child’s buddy or even best friend.

The stakes could hardly be higher: For the youngest children, AI-based toys introduce a massive unknown into a critical window for development. For now, at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality.

Being a parent or caregiver was already hard enough. Now, thanks to generative AI and the mad rush to reinvigorate a long-stagnant market (children’s toys), gift-giving is turning out to be almost as fraught as parenting itself.