“The progress of science in furnishing the government with means of espionage is not likely to stop with wire-tapping.” Louis Brandeis, 1928

Protecting privacy in the Information Age was always going to be a tough proposition. Protecting privacy in the era of generative AI? Without proper safeguards on your part, it is nigh impossible. Every entry you make in ChatGPT could surface in public due to a subpoena or a warrant. So when ChatGPT asks you (cue the Viennese accent) “how do you feel about your mudder?” your response may well be read by an FBI agent or by a prosecutor in open court. Yet this technology is being used by some in exactly that way – as a therapist.

Mostly hoping that no one would notice, ChatGPT parent OpenAI recently published a mea culpa of sorts, trying to “sorry/not-sorry” its way through the bad PR it has received after users harmed themselves and others. Because “people using ChatGPT in the midst of acute crises” hasn’t gone well, OpenAI will now route to human reviewers any conversations in which users threaten harm to others (another privacy can of worms). OpenAI may ban such accounts, and it may also refer the matter to law enforcement.

Generative AI is not a therapist. It is not a counselor. It is not a parent, a minister, a rabbi, a teacher, or a school administrator. AI isn’t even anyone’s friend, much less a lover. It is a very bad substitute for all of these utterly human roles. We misuse it at our peril.

But generative AI is something else as well – a profitable branch of data science that corporations, educational institutions, governments, law enforcement agencies (and scammers!) are using to collect vastly more data about employees, customers, students, citizens, and future victims of criminal schemes. To the extent that we use it at all, we should be exceedingly wary of what we share. It is not, nor has it ever been, private.

Americans have never been more surveilled than we are at this moment.
Before generative AI, the surveillance apparatus was expanding more or less linearly, like a twin-engine prop plane on a steadily rising course. Thanks to generative AI, that prop plane is now a supersonic jet.

“Safety” is one of the many traps that the era of generative AI increasingly sets for privacy. When our fundamental right to be let alone (to quote Justice Brandeis) is traded away these days, it is most often done “in the name of” some noble-sounding cause – safety, national security, you name it.

Until the law catches up to reality, you would be well advised to be very careful with any private information you share with AI advisors like ChatGPT, especially if it is about your mother.