Project for Privacy and Surveillance Accountability (PPSA)


How AI Can Leak Your Data to the World

4/21/2025

 
As we labor to protect our personal and business information from governments and private actors, it helps to think of our data as running through pipes the way water does. Just like water, data rushes from place to place but is prone to leak along the way. Now, as the AI revolution churns on, workplaces are getting complete overhauls of their data's plumbing. Some information leaks are thus almost inevitable. So, just as you would do under a sink with a wrench, you should be careful where you poke around.
 
A major new source of leakage is conversational AI tools, which are built on language in all its forms – words and sentences, but also financial information, transcripts, personal records, documents, reports, memos, manuals, books, articles, you name it. When an organization builds a conversational AI tool, many of these source items are proprietary, confidential, or sensitive in some way. Same with any new information you give the tool or ask it to analyze. It absorbs everything into its big, electronic, language-filled brain. (Technically, these are called “large language models,” or LLMs, but we still prefer “big, electronic, language-filled brains.”)
 
So be careful where you poke around.
 
As Help Net Security's Mirko Zorz reminds us, companies should give employees clear guidelines for safely using generative AI tools. Here is our topline advice for using AI at work.

  • If it’s a public tool like ChatGPT, absolutely no confidential personal or business information should be entered that isn’t already publicly available. Just ask Samsung about their misadventure.
 
  • Even internal, private company tools carry risk. Make sure you’re authorized to access the confidential information your system contains. And don’t add any additional sensitive information either (documents, computer code, legal contracts, etc.) unless you’re cleared to do so.
 
  • Like people, LLMs can be "tricked" into disclosing all manner of sensitive information, so don't share your credentials with anyone who doesn't have the same authorization you do. Those new employees from Sector 7G? Sure, they seem nice and perfectly harmless, but they could be corporate spies (or, more likely, just untrained). Don't trust them until they're vetted.
 
  • Any company that isn’t educating its employees on how to use AI tools acceptably is asking for trouble. If your company isn’t training you or at least providing basic guidelines, demand both. Vigilant employees are the last line of defense in any organization that doesn't bring its “A” game to AI. And “A” really is the operative letter here (we’re not just being cute). Authorization and Authentication are the bywords of any IT organization worth its salt in the AI space.
 
  • Just because an approved software program you’ve been using at work for years has suddenly added an AI feature does NOT mean it’s safe to use. Consult with your IT team before trying the new feature. And until they give you the all-clear, be sure to avoid inputting any sensitive or otherwise restricted information.
Finally, leave everything work-related at work (wherever work is). When elsewhere, don’t use your work email to sign into any of the tens of thousands of publicly available AI applications. And never upload or provide any personal or private information that you don’t want absorbed into all those big, electronic, language-filled brains out there.
 
Because leaks are nearly inevitable.


