Axios contributors Christine Clarridge and Russell Contreras recently assessed the increasingly ominous role artificial intelligence is playing in cybercrime. Deepfakes, ransomware, identity hijacks, and infrastructure hacks are all newly elevated threats – widely varied acts that previously required specialized expertise and massive organizations. But not anymore. Now, they write: “Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.” Here's what else their snapshot revealed:
When it comes to cybercrime, these stats suggest that it pays to be more than a little paranoid. If you’re making a holiday shopping list for the kids, be grateful that Kumma “talking toy bears” will no longer be on store shelves. It is creepy enough that AI-enabled toys allow companies to track what your children (and any family members in the vicinity) say. How long such data is kept – and how it might be used when children become adults – is anyone’s guess. Worse, an advocacy group found that FoloToy’s Kumma bear had no problem recommending kinky sex as a way to spice up relationships. (It offered, among other things, tips on how to tie knots.) Completely unrelated and of no concern at all is the news that OpenAI announced a partnership with Mattel in June of this year.

Now back to the bear: Not only did Kumma discuss very adult sexual topics, but it also introduced new ideas the evaluators hadn’t even mentioned – “most of which are not fit to print.” They also found AI-powered children’s toys (including Kumma) that variously:
And as that last bullet suggests, don’t even think about privacy: “These toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans,” warn the researchers. It’s unclear what the (mostly Chinese) companies pushing these products will do with all the data they mine from these toys, but deleting it seems highly unlikely. To date, such AI systems remain eminently hackable.

Earlier talking toys like Hello Barbie relied on machine learning and could only follow predetermined scripts. But the rise of generative AI has introduced true conversationality into the mix – and with it, massive unpredictability (randomness, after all, is baked into generative AI models). The responses are often completely novel – and may be entirely inappropriate for younger audiences (or, as adults have discovered, just plain wacko).

Parents need to understand that children might be having detailed, potentially formative conversations on all kinds of important topics – without their knowledge or involvement. And many of the toys in question use gamification techniques and other strategies (as in the list above) to keep children engaged and continuously coming back for more. Of course, it’s now a given that every AI toy tested framed itself as one’s buddy or even best friend.

The stakes could hardly be higher: For the youngest children, the presence of AI-based toys introduces a massive unknown into a critical window for development. For now at least, Kumma the bear is off the market in the wake of the revelations about its kinky side and tell-all personality.

Being a parent or caregiver was already hard enough. Now, thanks to generative AI and the mad rush to reinvigorate a market (children’s toys) that had long been stagnant, gift-giving is turning out to be almost as fraught as parenting itself. “We shall describe devices which appear to move of their own accord.”