Project for Privacy and Surveillance Accountability (PPSA)

 NEWS & UPDATES

AI And Schools: Cheating Isn’t The Problem

8/12/2025

“The future of AI is not about replacing humans, it's about augmenting human capabilities.”
- Sundar Pichai, Google
After you read this, you’ll wish that students using AI to cheat was the biggest problem with the technology. It turns out a bigger issue is just how inconsistent AI is at monitoring students for “safety risks.” It’s a privacy nightmare we’ve written about before, with laptops snapping pictures of students at home, and the chilling effect such surveillance has on creative expression and First Amendment rights.

But almost four years after we first reported on this increasingly popular trend in secondary education, it shows no signs of letting up – even as we await the outcome of a major lawsuit by the Knight First Amendment Institute at Columbia University designed to compel a school district to disclose the nature of its surveillance tech.

Instead, we continue to read more headlines like this one by Sharon Lurye of the Associated Press: “Students have been called to the office – and even arrested – for AI surveillance false alarms.” You can read the details of the story for yourself, but the gist is this: A student made a joke on a school-related chat account. The joke was culturally insensitive and contained a reference to feigned violence. It was also somewhat self-deprecating. It was, in other words, exactly the kind of crass, completely innocent sarcastic drivel you would expect from a teenager.

The only difference is that AI was watching (and, apparently, without the aid of humans possessed of common sense). So, of course, the student was arrested and separated from her parents for 24 hours. Then, somehow, a court made up of non-AI judges ordered eight weeks of house arrest, a full psych evaluation, and 20 days at an “alternative” school. When asked about the incident, the CEO of Gaggle, the company that made the software, opined, “Golly, I wish that was treated as a teachable moment, not a law enforcement moment.” (Okay, we added the “Golly.”)

In all such cases, best as we can tell, these are traditional AI systems – unthinking, rules-based programs with absolutely no sense of context. Traditional student surveillance products are close to 20 years old. The systems that schools pay companies like Gaggle six figures to operate are elaborate keyword-matching programs: they don’t “think,” and they certainly don’t understand context.

Just imagine a student paraphrasing one of Shakespeare’s characters crying, “O, I am slain!” Should that student be flagged for suicide watch? That, of course, is a rhetorical question – something that we’re genuinely worried students in these surveillance-based school systems might never learn. (Of course, we have no idea if any Shakespeare character ever uttered anything like that because we used AI to suggest it.)
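To make the failure mode concrete, here is a minimal sketch of how a context-blind keyword filter behaves. The keyword list and function are invented for illustration – they are not Gaggle’s actual rules – but the logic is representative of rules-based matching:

```python
# Hypothetical illustration of a context-blind keyword filter.
# The keyword list is invented for this sketch, not any vendor's real rules.
SELF_HARM_KEYWORDS = {"slain", "kill myself", "hurt myself", "end it all"}

def flag_message(text: str) -> bool:
    """Flag a message if any keyword appears, ignoring all context."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SELF_HARM_KEYWORDS)

# A student quoting Shakespeare trips the filter exactly as a
# genuine crisis message would -- the program cannot tell them apart.
print(flag_message("O, I am slain!"))         # True  -> flagged
print(flag_message("See you at practice"))    # False
```

No amount of tuning the keyword list fixes this, because the problem is structural: substring matching carries no notion of quotation, irony, or literary reference.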

We get that being proactive about student safety is critical. But monitoring what they type isn’t the right way to do it. Students type – and say – all kinds of tasteless statements because that’s what being in elementary, junior high, and high school is all about. Students should not get arrested (and traumatized) merely for writing sarcastic or ironic language – the kinds of expressive skills schools are supposed to teach them in the first place.

This isn’t working and it’s time for parents and school systems – and yes, the students themselves who have filed lawsuits – to stand in solidarity and demand at least an overlay of common sense. Without human discernment, AI-powered surveillance systems are unthinking, non-stop monitors designed to destroy privacy, creativity, and individual expression.
We would also remind the school administrators who surely mean well when they deploy such systems not to forget the cardinal rule of any AI system: Always keep a human in the loop. Every flagged item should be reviewed by at least one school system employee – preferably a principal, perhaps joined by a school counselor – before anything gets reported to law enforcement.

