“The future of AI is not about replacing humans, it's about augmenting human capabilities.” – Sundar Pichai, Google

After you read this, you’ll wish that students using AI to cheat were the biggest problem with the technology. Turns out, a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights. But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we wait for the outcome of a major lawsuit by Columbia’s Knight Institute designed to compel a school district to disclose the nature of its surveillance tech.

Instead, we continue to read more headlines like this one from Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.”

You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was culturally insensitive, contained a reference to feigned violence, and was somewhat self-deprecating. It was, in other words, exactly the kind of crass, completely innocent sarcastic drivel you would expect from a teenager. The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours. Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school.
When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, as best we can tell, these are traditional AI systems – unthinking, rules-based programs with absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context. Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch? That, of course, is a rhetorical question – something we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea whether any Shakespeare character ever uttered anything like that, because we used AI to suggest it.)

We get that being proactive about student safety is critical. But monitoring what students type isn’t the right way to do it. Students type – and say – all kinds of tasteless things because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place. This isn’t working, and it’s time for parents and school systems – and yes, the students themselves, who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors that destroy privacy, creativity, and individual expression.
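To see why context-free matching fails, here is a minimal, purely illustrative sketch – our own invention, not any vendor’s actual code, with a made-up keyword list – of the kind of rules-based matcher described above:

```python
# Hypothetical example of a context-free keyword monitor (not Gaggle's real
# system). It flags any message containing a watchlist word, with no attempt
# to understand sarcasm, quotation, or literary context.
SELF_HARM_KEYWORDS = {"slain", "kill myself", "hurt myself"}  # assumed list

def flag_message(text: str) -> bool:
    """Return True if any watchlist keyword appears, regardless of context."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SELF_HARM_KEYWORDS)

# A student paraphrasing Shakespeare trips the same wire as a genuine threat:
print(flag_message("O, I am slain!"))          # True: flagged, context ignored
print(flag_message("My Hamlet essay is due."))  # False
```

The point of the sketch is that the matcher has no branch anywhere for “is this a joke, a quotation, or a cry for help?” – that judgment can only come from a human reviewer.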
We would also remind the school administrators who surely mean well when they deploy such systems not to forget the cardinal rule of any AI system: Always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal, perhaps joined by a school counselor – before anything gets reported to law enforcement.