Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations. But not anymore. Now, they write: “Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.” Here's what else their snapshot revealed:
When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid. Security consulting firm Koi recently published an exposé about a new online privacy threat, one with the unforgettable name of “ShadyPanda.” The scheme allowed browser extensions to infect 4.3 million Chrome and Edge users. In this case, “infect” means sit there quietly, take control whenever it wants, then pretty much do whatever it pleases, including:
ShadyPanda’s extensions often worked legitimately for years before being activated and turned into full-blown spyware – making it an especially effective tool for keeping tabs on businesses. Some of the extensions were simple wallpaper galleries or productivity tools, and many had been marked as “trusted” or “verified” by the marketplaces that hosted them. One of the key vulnerabilities this research exposed was the whole “trust and verify” approach. Once approved by various marketplaces, extensions were never re-verified. And because most users opt for “auto-updating,” the extensions could continue to build up a large user base and then be activated as spy tools when needed. Koi reports: “Chrome and Edge's trusted update pipeline silently delivered malware to users. No phishing. No social engineering. Just trusted extensions with quiet version bumps that turned productivity tools into surveillance platforms.” And where is all that collected data going? To surveillance-obsessed China, of course.

Worried that you might be infected? Check out The Hacker News’ partial list of the culprits. Infosecurity Magazine recommends you also check your browser extensions and remove anything you don’t recognize or no longer use. And turn off auto-updating while you’re at it.

It is a dispiriting truth of modern life that we are – and likely always will be – in a footrace against hackers and thieves, whose tools will grow even more dangerous as AI evolves. But we don’t have to be helpless. We can take satisfaction in knowing that by embracing best practices, we can stay at least a step ahead and leave the ShadyPandas of the world empty-handed.

If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say.
How long such data is kept – and how it might be used when children become adults – is anyone’s guess. Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots.) Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year.

Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:
And as that last bullet suggests, don’t even think about privacy: “These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” warn the researchers. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable.

Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko).

Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more. Of course, it’s now a given that every AI toy tested framed itself as one’s buddy or even best friend. The stakes could hardly be higher: For the youngest children, the presence of AI-based toys introduces a massive unknown into a critical window for development.

For now at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality. Being a parent or caregiver was already hard enough. Now thanks to generative AI and the mad rush to reinvigorate a market (children’s toys) that had long been stagnant, gift-giving is turning out to be almost as fraught as parenting itself.

Sometimes the best defense against privacy violations is as simple as choosing a good password.
Such was the case in South Korea, where officials recently arrested multiple suspects accused of hacking into private surveillance cameras and capturing footage as pornography for voyeurs. The 120,000 cameras were inherently hackable because they are, after all, internet devices. But users made it all the easier by choosing exceptionally weak passwords. It's uncertain just how explicit the footage was (sourced from homes, Pilates studios, and even a women’s health clinic). Some of it was sold on overseas platforms that appear to cater to sexually exploitative content.

Pro tip: “11111” and “12345” are terrible passwords, as are any other repeating or sequential numbers. And this maxim is especially relevant when dealing with devices that are internet-connected. Yet from Zoomers to octogenarians, the password problem remains, as The Register’s Connor Jones reports, as “prevalent and dangerous as ever.” Case in point: the recent news that the password for the ransacked Louvre’s CCTV system was “Louvre.”

So clearly the vulnerability of camera systems is a problem that goes beyond South Korea and this particular (ab)use case. In June, security researchers found that they could access tens of thousands of internet-connected cameras worldwide (35 percent of which were in the United States). Vulnerable systems were everywhere in addition to homes: retail sites, construction zones, hotels – you name it. By studying the feeds, researchers noted, bad actors can find a treasure trove of useful information – from poorly lit spots to unguarded doors to times when no one’s around. Somewhere out there is a black market for anything a “security” camera might capture.

So think twice about even having Internet-connected cameras (CCTVs that record directly to local devices are a better alternative). If you must be connected, however, then at least up your password game.
Finally, if you’ve installed connected cameras, try not to forget where they are five years hence on some enchanted evening.

When your identity is confirmed by a string of numbers in a computer, are you still yourself if the algorithm determines you (the person) are not you (the digital ID)? One state, Utah, is leading the nation in answering this question with policies that safeguard humans, while Washington, D.C. is heading down the path of reducing humans to algorithms.

Consider the ACLU’s Jay Stanley, who praised Utah for its “State-Endorsed Digital Identity” (SEDI), the state’s new framework for digital ID systems. In an approach that should be the norm rather than the notable exception, the Beehive State puts privacy first. Utah begins with the conviction that identity “is not something bestowed by the state, but that inherently belongs to the individual; the state merely ‘endorses’ a person’s ID.” In other words, our identities belong to us. We are born with them. We own them. With that realization comes newfound respect for privacy and other forms of personal freedom.

This view of identity stands in sharp contrast to the definition Stanley found in the data-driven world of federal law enforcement. With the feds, identity is becoming something only the state can grant, defaulting to incomplete or faulty digital verification of citizenship. To be clear, both Utah’s SEDI platform and the federal approach utilize digital ID systems, but one is a case study in digital due diligence while the other illustrates the dangers of slapdash digital recklessness. The federal system is based on incomplete databases, poorly designed architecture, evolving (meaning, far from perfect) technology, and an utter disregard for the constitutional rights of individuals. Utah’s approach differs from the federal approach in very important ways:
Stanley goes on to quote the Ranking Member of the House Homeland Security Committee, who reports that an app (called Mobile Fortify) used by Immigration and Customs Enforcement (ICE) now constitutes “definitive” determination of a person’s status “and that an ICE officer may ignore evidence of American citizenship – including a birth certificate.” That’s bad enough on its own of course, but along the way, the government now sweeps up Americans’ biometric identifiers en masse. The databases Mobile Fortify accesses contain not only our photographs but enough records to constitute a permanent digital dossier. Congress did not get to review, much less approve, any of this. The American people never voted on it.

In fact, the whole thing leaves us wondering what happened to the Privacy Act, signed into law by President Ford in 1974. It has been described as “the American Bill of Rights on data.” By declaring that identity is solely digital, determined by stealthy algorithms and policies, and deniable to those whose data is non-existent, incomplete or inaccurate, the federal standard – in sharp contrast to Utah’s – subverts 250 years of traditional, constitutional practice.

Remember: Our founders built the world’s most vibrant democracy on pieces of parchment copied by hand. In any truly free society, identities are personal possessions (helping to secure individual rights and facilitate voluntary participation in society). Identities bestowed by the state ultimately serve only the state.

That we even need to ponder the nature of identity reveals the absurdity of these abuses of our personhood and privacy. Nevertheless, here we are. Without transparent conversations and healthy debate, we face a future in which we are whomever the state says we are, made of malleable 0s and 1s, with nothing grounded in the physical world. It's a discussion that, as of now, Utah alone seems committed to having.
The Double-Edged Sword Wrapped in Eric Swalwell’s Privacy Lawsuit Against Housing Chief Bill Pulte
12/1/2025
Those who live by surveillance cry by surveillance. We wonder how many times politicians on both sides of the aisle will have to get slammed by the very government spying practices they’ve supported before this lesson sinks in.

Case in point: Rep. Eric Swalwell (D-CA). Last week, he filed a lawsuit against Bill Pulte, President Trump’s director of the Federal Housing Finance Agency, for accessing and leaking private mortgage records in retaliation for political speech. Pulte has issued criminal referrals to the Department of Justice (DOJ) against Swalwell, New York Attorney General Letitia James, Sen. Adam Schiff (D-CA), and Federal Reserve Governor Lisa Cook on the basis of alleged mortgage fraud. A federal judge dismissed the charges against James, while President Trump used the allegation against Cook to fire her from the Federal Reserve Board (she remains in her job while the Supreme Court reviews the case).

Rep. Swalwell’s lawsuit makes an important point: “Pulte’s brazen practice of obtaining confidential mortgage records from Fannie Mae and/or Freddie Mac and then using them as a basis for referring individual homeowners to DOJ for prosecution is unprecedented and unlawful.” We cannot think of any prior use of private mortgage applications to harass political opponents (at least one of them, James, is arguably guilty of using lawfare herself to harass Donald Trump).

Pulte’s actions appear to be a flagrant violation of the Privacy Act of 1974, which governs how the government can and cannot handle Americans’ private information. The law, as Swalwell notes, “explicitly forbids federal agencies from disclosing – or even transmitting to other agencies – sensitive information about any individual for any purpose not explicitly authorized by law.” Congress passed the Privacy Act to prevent the creation of a federal database that would create comprehensive dossiers on every American, something we’ve warned is now being attempted.
The law specifically forbids agencies from freely sharing, for one purpose (say, an FBI investigation), Americans’ confidential data gathered for another (such as IRS tax collection). Agencies must issue a written request justifying any such information sharing. Pulte is anything but transparent. “I’m not going to explain our sources and methods, where we get tips from, who are whistleblowers,” Pulte told the media. This mindset is in keeping with the corrupting spread of the intelligence-surveillance state’s “best practices.” Today, it is the federal housing agency. We shouldn’t be surprised if tomorrow such “sources and methods” thinking trickles down to federal poultry inspections.

Meanwhile, we remain dry-eyed over Rep. Swalwell’s plight. As a member of the House Judiciary Committee, Swalwell argued against – and voted against – the Protect Liberty and End Warrantless Surveillance Act. This bill would have reformed Section 702 of the Foreign Intelligence Surveillance Act by requiring a warrant before the government could access U.S. citizens’ data collected through programs enacted to surveil foreign threats on foreign soil. The Protect Liberty Act would have ended the government practice of using a foreign database to conduct “backdoor searches” on Americans… not unlike, say, a regulatory agency pulling a political opponent’s private mortgage application. The principle of mutually assured payback is something to keep in mind when lawmakers again debate the provisions of Section 702 in April.

Imagine being targeted for surveillance because of your race – not with facial recognition or government inspection of your personal digital data, but through your electric meter. If you lived in parts of Sacramento, this is exactly what happened, as a decade-long scheme quietly bled Americans’ privacy one kilowatt hour at a time.
Sacramento’s Municipal Utility District (SMUD) and local police zeroed in on Asian-American customers, flagging those deemed to be using “too much” electricity. Many were assumed to be growing marijuana illegally – and police eagerly requested bulk data on entire ZIP codes to feed their suspicions. The Electronic Frontier Foundation in July joined the Asian American Liberation Network to ask the Sacramento County Superior Court to end the local utility district’s illegal dragnet surveillance program.

Last week, the court agreed, finding that routine, ZIP-code-wide data dumps had nothing to do with “an ongoing investigation.” The court wrote: “The process of making regular requests for all customer information in numerous city ZIP codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation.” The response from EFF was even sharper: “Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.”

The court recognized the obvious danger – dragnets turn vast numbers of innocent citizens and entire communities into suspects. Still, it wasn’t a clean sweep. The court stopped short of ruling that SMUD’s practice violated the “seizure and search” clause in California’s Constitution. But even a qualified victory is still a victory. We are reminded that privacy wins do happen – one dragged-into-the-sunlight surveillance program at a time. This win is something to be thankful for as we count our blessings this week.

Today, the House Judiciary Committee did something too rare in Washington – it unanimously passed a meaningful privacy reform.
By voice vote, Republicans and Democrats joined together to approve the Non-Disclosure Order (NDO) Fairness Act, a bill that reins in one of the most abused secrecy powers in federal law. Credit for this privacy victory goes to Rep. Scott Fitzgerald (R-WI) and Rep. Jerry Nadler (D-NY), as well as Chairman Jim Jordan (R-OH) and Ranking Member Jamie Raskin (D-MD). Their leadership moved this bill out of committee. It is now up to the full House to pass this measure and send it to the Senate.

The bill’s reform is sorely needed. Under current law, prosecutors can secretly dig through your phone records, emails, and other data – and then slap your telecom provider with a gag order forbidding it from ever telling you that your privacy has been violated. These nondisclosure orders can last indefinitely, leaving Americans in the dark that someone has sifted through their personal communications.

The NDO Fairness Act changes that. It puts reasonable limits on gag orders, and forces prosecutors to justify any extension. It also requires courts to explain in writing why continued secrecy is necessary – whether to protect an investigation, safeguard a vulnerable person, or address a real national security concern. The NDO Fairness Act makes sunlight the default, not the exception.

The House has, of course, passed the NDO Fairness Act before, only to watch it stall in the Senate. But the politics are shifting. Senators are furious after learning that Special Counsel Jack Smith secretly subpoenaed the communications of eight senators. They were justifiably upset, but their response was misguided. The Senate quietly added a provision to the recent short-term funding bill giving senators the exclusive right to sue the federal government for up to $500,000 for privacy violations.

Americans don’t need a special carveout for elected officials. They need a law that protects everyone. The NDO Fairness Act does exactly that.
It closes a major privacy loophole without hindering legitimate investigations, striking a balance between public safety and the Fourth Amendment rights of all Americans. The House and Senate now have a chance to fix this problem the right way – by advancing a bill that protects the people who sent them to Washington, not just themselves.

Once upon a time, in its 2004 IPO filing, Google aspired to “Don’t Be Evil,” imagining itself a company “that does good things for the world.” Dateline, November 2025: Various outlets have reported that Google’s app store now includes a version of its Mobile Identify app for Customs and Border Protection. This version is tailored to state and local law enforcement officers who are deputized to work with Immigration and Customs Enforcement (ICE) by using facial recognition algorithms to scan people. If a match is found on federal databases, officials at ICE are notified. And those databases (at least the ones we know of) contain records on more than 270 million people. Odds are you and your loved ones are in those databases.

The fact that the law enforcement officers who use Mobile Identify are deputized to work alongside ICE is beside the point, as is the fact that ICE has its own, presumably more powerful version of the same app, called Mobile Fortify. Of far greater concern is that any government agency possesses this ability. It’s easily shared across jurisdictions and Google seems to have no qualms about enabling a tool that could be deployed as a weapon to surveil American citizens at will. After all, Google’s leaders could’ve just said “no.” But they didn’t, and now an insidious new public-private partnership is afoot.

Today, it’s Google and ICE and the issue is immigration enforcement, but don’t expect it to stay that way for long. These kinds of surveillance technologies never stay contained, nor do limitations on who they target.
Soon it will be Google and the government – federal, state, county, and local – and the reasons for spying on us could be our religion, political party, ethnicity, affiliation, or – well, you name it. Mobile Identify is just one more reason why Congress must debate how federal agencies are accessing our private information without a warrant. This is something to keep in mind when FISA Section 702, a federal surveillance policy, comes up for reauthorization in April.

Watching the Watchers: If You Are Stopped by ICE, Your Biometric Data Will Be Held for a Generation
11/18/2025
Robert Frommer, a senior attorney with the Institute for Justice, tells the harrowing story of George Retes, a U.S. citizen and Army veteran of the Iraq War, who was stopped in his car during an immigration sweep. He was on his way to work when he encountered an Immigration and Customs Enforcement (ICE) roadblock. A melee broke out between protesters and ICE agents. Retes’s car was engulfed in tear gas. The Institute for Justice reports that agents smashed Retes’s car window, dragged him out, and forced him to the ground with knees on his neck and back – even though he was not resisting. Despite Retes presenting proof of his citizenship, ICE agents detained him for three days without charges, strip-searched him, and forced him to provide DNA samples. He was not allowed to call a lawyer or given a hearing before a judge. Because Retes was held incommunicado, his family was left to frantically search for him.

Writing in MSN, Frommer explores what happens to the biometric data ICE collected on Retes. “In addition to our DNA, the Department of Homeland Security (DHS) has recently and quietly authorized ICE officers to forcibly collect and retain intimate identifiers: our fingerprints and digital images of our faces. Combined with other technologies, the department is creating a general warrant for our persons, the kind of abuse that ignited the American Revolution.

“A DHS document, meant to ensure our privacy, lays out the facts. An app called Mobile Fortify allows ICE and Customs and Border Protection (CBP) officers to photograph and scan anyone they ‘encounter’ in the field, regardless of citizenship or immigration status. If there isn’t a photo match, officers can collect people’s fingerprints, which are then checked against DHS biometric records. Once DHS has that sensitive data, the app feeds it into CBP’s Automated Targeting System – an enormous watch list that merges border records, passport photos and prior ‘encounter’ images.
CBP retains every nonmatch photograph for 15 years, meaning that even if you’re an American citizen mistakenly stopped on the street, the government has your biometric records for (almost) a generation.”

Congress should investigate and debate this retention of Americans’ biometric records before reauthorizing a single surveillance authority. And PPSA is hopeful that ICE will be forced to explain its unconstitutional detention of George Retes when it faces his lawsuit under the Federal Tort Claims Act.

“If you want to keep a secret, you must also hide it from yourself.”

Imagine a dish called Surveillance Stew. It’s served anytime multiple privacy-threatening technologies come together, rather like a witch’s brew of bad ideas. It's best served cold. The latest Surveillance Stew recipe includes location data, social media, and facial recognition. Nicole Bennett, who studies such things, writes in The Conversation that this particular concoction represents a turning point: borders are no longer physical but digital.

The government has long held that the border is a special zone where the Fourth Amendment has little traction. Now the government is expanding border rules to the rest of America. Immigration and Customs Enforcement (ICE) has put out a call to purchase a comprehensive social media monitoring system. At first glance, Bennett notes, it seems merely an expansion of monitoring programs that already exist. But it’s the structure of what’s being proposed that she finds new, expansive, and deeply concerning. “ICE,” she writes, “is building a public-private surveillance loop that transforms everyday online activity into potential evidence.”

The base stock of Surveillance Stew came with Palantir’s development of a national database that could easily be repurposed into a federal surveillance system.
Add ICE’s social media monitoring function and the already-thoroughgoing Palantir system becomes “a growing web of license plate scans, utility records, property data and biometrics,” says Bennett, “creating what is effectively a searchable portrait of a person’s life.” Such a technology gumbo seems less a method for investigating individual criminal cases than a sweeping supposition that any person anywhere in the United States could, at any moment, be a “criminal.” It’s a dragnet, says Wired’s Andrew Couts, noting that 65 percent of ICE detainees had no criminal convictions. Dragnets are inimical to privacy and corrosive to the spirit of the Constitution.

Traditional, law-based approaches to enforcement are one thing – and enforcement, of course, is ICE’s necessary job. The problem now, warns Bennett, is that “enforcement increasingly happens through data correlations” rather than the gathering of hard evidence. We agree with Bennett's conclusion that these sorts of “guilt by digitization” approaches fly in the face of constitutional guardrails like due process and protection from warrantless searches. To quote Wired’s Couts again, “It might be ICE using it today, but you can imagine a situation where a police officer is standing on a corner and just pointing his phone at everybody, trying to catch a criminal.”

The existence of Palantir’s hub makes it inevitable that ICE’s expanded monitoring capability will migrate to other agencies – from the FBI to the IRS. And when that happens, what ICE does to illegal immigrants can just as easily be done to American citizens – by any government entity, for any reason. When our daily lives are converted into zeroes and ones, the authorities can draw “borders” wherever they want.

“Privacy is not just about hiding things or keeping secret, it’s about controlling who has access to your life.”