If you’re worried about the end of privacy, don’t waste your outrage on Clearview AI
It’s easy to feel outrage at Clearview AI for building a facial recognition system trained on 3 billion images scraped without permission from sites like Google, Facebook, and LinkedIn, but the company should be only one of the targets of your ire. Pervasive surveillance capitalism is designed to make you feel helpless, but shaping AI regulation is part of citizenship in the 21st century, and you’ve got a lot of options.
On Tuesday, Senator Ed Markey (D-MA) sent a letter to Clearview AI demanding answers about a data breach involving billions of photos scraped from the web without permission and about the sale of facial recognition to governments with poor human rights records, like Saudi Arabia. That would be scandalous news for most companies, but not Clearview. For context, here’s what the past week looked like for Clearview:
News emerged Monday that Clearview AI is reportedly working on a security camera and augmented reality glasses equipped with facial recognition. The announcement comes amid a rush of revelations about the AI-enabled surveillance startup and its clients.
Following a data breach reported last Wednesday, we learned a day later that Clearview AI’s client list includes more than 2,900 governments and businesses from around the world. In all, it comprises businesses from 27 countries, including Walmart, Macy’s, and Best Buy, and hundreds of law enforcement agencies, from the FBI to ICE, Interpol, and the Department of Justice. Tech giants like Google and Facebook sent Clearview AI cease-and-desist letters last Tuesday.
Back in January, the New York Times’ Kashmir Hill, who first brought Clearview AI to public attention, reported the company was working with more than 600 law enforcement agencies and a handful of private companies. But reporting last week brought the Clearview AI client list into sharper focus, along with the number of searches made by each client. The story also revealed that a total of 500,000 searches had been made.
The same day, Gizmodo found an APK version of the Clearview app on a public AWS server; a breakdown of the app signals the potential addition of a voice search option in the future.
Clearview AI CEO Hoan Ton-That previously told multiple news outlets the company focuses on law enforcement clients in North America, but an internal document obtained by BuzzFeed News shows government, law enforcement, and business clients around the world.
Everything we’ve learned about Clearview in the past week gives credence to the New York Times’ claim in January that the company might end privacy, and VentureBeat news editor Emil Protalinski’s assessment that Clearview is on a “short slippery slope.”
If what Clearview AI did and continues to do makes you angry, then you’re probably in the majority of people who lack understanding of data privacy law and feel they have little to no control over how businesses and governments collect or use their personal data.
If you believe privacy is a right and deserves protection in an increasingly digital and AI-driven world, don’t aim your anger only at the Peter Thiel-backed company itself. The way it operates may be insensitive or even horrifying, but save your questions for the businesses and governments working with Clearview AI. People deserve answers to the kinds of questions Senator Markey asks about the extent of the data breach and Clearview’s business practices, but people should also question the policy that enables Clearview to exist.
Because Clearview AI doesn’t matter as much as the public’s response to how people in positions of power choose to use Clearview technology.
What AI regulation looks like
Clearview AI is not the only company inciting fear and outrage. In the past week or so, everyone from Elon Musk to Pope Francis has called for AI regulation.
In addition to the Clearview AI story, we also learned more recently about NEC, a company that started research into facial recognition in 1989. One of the largest private providers of facial recognition in the world, NEC has more than 1,000 clients in 70 countries, including Delta, Carnival Cruise Line, and public safety officials in 20 U.S. states.
The EU is considering a pan-European facial recognition network, while cities like London, which has the most CCTV cameras of any city outside China, are launching live facial recognition technology that makes it possible to track an individual across a web of closed-circuit cameras.
In a very different set of developments, last Thursday we learned more about how the U.S. Immigration and Customs Enforcement agency (ICE) uses facial recognition software. The Washington Post reported that ICE has been searching a database of immigrant driver’s licenses without obtaining a warrant. This policy may terrorize immigrants and their families, put more people in the state at risk by increasing the number of unlicensed drivers on the road, and deter immigrants from reporting crimes.
In the past month or so, the White House and European Union have attempted to define what AI regulation should look like. Meanwhile, lawmakers in about a dozen states are currently considering facial recognition regulation, Georgetown Law Center for Privacy and Tech said earlier this year.
But defining AI regulation isn’t something tech giants or machine learning practitioners should work out on their own. It’s up to ordinary people to recognize that, as Microsoft CTO Kevin Scott said, understanding AI is part of citizenship in the 21st century, and there are many ways to influence change.
Ways to respond
Clearview AI and tech giants with unprecedented power and resources — like Amazon and Microsoft — want to establish a market for the sale of facial recognition software to governments.
These companies are trading in a surveillance capitalism market with the potential to suppress fundamental rights and exacerbate over-policing and discrimination. This is all the more concerning after a December 2019 NIST study found that nearly 200 facial recognition algorithms exhibit demographic bias, with a high likelihood of misidentifying Asian-American and African-American people.
That’s a lot to take in, and outrage is understandable, but it’s important to not give in to despair. Experts like Shoshana Zuboff and Ruha Benjamin argue that making people feel helpless is the point of surveillance capitalism.
We’re living on the verge of a COVID-19 pandemic, we just saw the largest stock market drop since 2008, and climate change remains an existential threat. But we still have a lot of options when it comes to shaping AI regulation.
- Call your member of Congress
- Ask political candidates running for office about the issues
- Find out if facial recognition or privacy regulation is being considered in your state
- Read the Partnership on AI’s facial recognition paper to better understand how the tech works
- Formulate your own definition of acceptable or ethical use of the technology
- Learn why people support or oppose the idea of people owning their own biometric data
- Consider why a Trump administration official told VentureBeat that San Francisco’s ban of facial recognition is an example of overregulation
- Understand why a bipartisan group of lawmakers in Congress don’t want facial recognition being used at protests or political rallies
- Find out why experts in the U.S. worry about live facial recognition, which can track a person across a web of CCTV cameras in real time and is spreading to cities like Buenos Aires and Moscow
- Ask how businesses and governments put AI principles into practice
- Understand why making biometric data the property of individuals is a growing policy solution but why some data and privacy advocates say that’s dangerous
If you live in California, under the new California Consumer Privacy Act (CCPA), you can send an email to [email protected] to request a copy of the data a company is collecting about you and ask the company to stop. Vice reporter Anna Merlan and colleague Joseph Cox sent such a request to Clearview AI. After supplying the company with a photo for a search about a month ago, last week Merlan received a cache of about a dozen photos of herself that had been published online between 2004 and 2019. Clearview told her the images were scraped from websites, not social media, and agreed to ensure those images no longer appear in Clearview AI search results.
Is the New York Times right? Is Clearview AI going to make it impossible to walk down the street in anonymity? Is it the end of privacy? That’s up to you.