True story: A billionaire is enjoying dinner in a restaurant when his daughter unexpectedly walks in with a date he does not recognize. The billionaire surreptitiously snaps a picture of the young man and uses his phone to run the image through software from a leading facial recognition company in which he has invested. Within a short time, the father has the man’s name and can access vast amounts of information about him.
This is one detail in Kashmir Hill’s riveting New York Times investigative piece on Clearview AI, which demonstrates how the company skirts the terms-of-service rules of the big social media platforms and stretches the law to scrape data, thereby obtaining a powerful facial recognition capability.
Deploying facial recognition to identify strangers had generally been seen as taboo, a dangerous technological superpower that the world wasn’t ready for. It could help a creep ID you at a bar or let a stranger eavesdrop on a sensitive conversation and know the identities of those talking. It could galvanize countless name-and-shame campaigns, allow the police to identify protesters and generally eliminate the comfort that comes from being anonymous as you move through the world.
Hill also shows how the company’s technology has been a boon in catching pedophiles and human traffickers. Somewhere between 600 and 3,000 law enforcement agencies use it. Starting in 2018, Clearview’s database grew from 20 million faces to more than 1 billion.
So if you are on Facebook, LinkedIn or the like, it’s likely that your face is already in Clearview’s database.
Does Clearview’s data scraping violate the Computer Fraud and Abuse Act? A federal judge in the Ninth Circuit found that copying publicly available information does not violate this anti-hacking law. But the ACLU is using a tough Illinois statute to challenge Clearview.
PPSA will monitor and report regularly on this rapidly evolving issue.