As we labor to protect our personal and business information from governments and private actors, it helps to think of our data as running through pipes the way water does. Like water, data rushes from place to place, and it is prone to leak along the way. Now, as the AI revolution churns on, workplaces are getting complete overhauls of their data's plumbing, and some leaks are all but inevitable. So, just as you would under a sink with a wrench, be careful where you poke around.

A major new source of leakage is conversational AI tools, which are built on language in all its forms: words and sentences, but also financial information, transcripts, personal records, documents, reports, memos, manuals, books, articles, you name it. When an organization builds a conversational AI tool, many of these source items are proprietary, confidential, or otherwise sensitive. The same goes for any new information you give the tool or ask it to analyze. It absorbs everything into its big, electronic, language-filled brain. (Technically, these are called "large language models," or LLMs, but we still prefer "big, electronic, language-filled brains.") So be careful where you poke around.

As Help Net Security's Mirko Zorz reminds us, companies should give employees clear guidelines for safely using generative AI tools. Here is our topline advice for using AI at work.
Finally, leave everything work-related at work (wherever work is). When elsewhere, don't use your work email to sign into any of the tens of thousands of publicly available AI applications. And never upload or provide any personal or private information that you don't want absorbed into all those big, electronic, language-filled brains out there. Because leaks are nearly inevitable.