Clearview AI, the American maker of a controversial facial recognition tool used mostly by police, is facing a wave of legal complaints across Europe as of Thursday, alleging sweeping privacy violations based on internal documents showing the company’s algorithm at work.
Complaints filed with privacy watchdogs in five countries—France, Austria, Italy, Greece and the United Kingdom—by a group of privacy and human rights organizations allege systemic illegality at Clearview, a New York City-based startup whose clients have reportedly included some 2,000 U.S. taxpayer-funded agencies.
“European data protection laws are very clear when it comes to the purposes companies can use our data for,” said Ioannis Kouvakas, a legal officer at Privacy International, one of four groups behind the complaints. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”
Echoing statements from a year ago, Clearview CEO Hoan Ton-That said by email that his company has no contracts “with any EU customer and is not currently available to EU customers,” while pointing to Clearview’s compliance with transparency requests from members of the public required under the law.
Clearview, the subject of several damning BuzzFeed News reports—describing encouragement of abuse, widespread unauthorized access by police, and plans to court repressive regimes overseas—was first revealed by the New York Times in January 2020 to have amassed a database of more than three billion images scraped from Facebook and elsewhere without consent. Searchable from a mobile app, the database serves as a well of human faces from which biometric profiles are drawn—ostensibly to help clients from Immigration and Customs Enforcement to the NBA attach names to as many photographed people as possible.
Clearview claims to have helped thousands of U.S. law enforcement agencies, touting its ability to help police identify “sexual predators, protect the elderly from financial criminals, and keep communities safe.”
“National governments have expressed a dire need for our technology because they know it can help investigate crimes like money laundering and human trafficking, which know no borders,” Ton-That said.
According to BuzzFeed, Clearview’s product has been used or tested at more than 200 companies, including retail giants such as Best Buy, Home Depot, and Walmart. A months-long investigation by the site this year also found that police officers at dozens of departments had downloaded and used the app without their departments’ knowledge.
“Clearview seems to misunderstand the internet as a homogeneous and fully public forum where everything is up for grabs,” said Privacy International’s Lucie Audibert, who views the company’s actions as threatening to the “numerous rights and freedoms” enabled by an open internet.
Thursday’s complaints accuse Clearview of vacuuming up countless photos of individuals inside E.U. countries, contravening a number of privacy protections, particularly those established under the GDPR, Europe’s data privacy law, and its United Kingdom equivalent.
Privacy International’s complaints in the U.K. and France are joined by simultaneous filings by the Hermes Centre for Transparency and Digital Human Rights in Italy, Homo Digitalis in Greece, and noyb - the European Center for Digital Rights in Austria.
Like its U.K. counterpart, the GDPR requires companies and other entities to obtain a person’s “informed” consent before processing their personal data. The law covers anyone “in the Union,” regardless of where the entity handling the data is based. Responding to transparency requests does not satisfy this requirement.
Clearview’s data practices were declared unlawful this year by the privacy authority in Hamburg, Germany’s second largest city. The ruling arose from a complaint filed by Matthias Marx, a German computer scientist and Chaos Computer Club member—photographs of whom Clearview used to generate a biometric profile without his knowledge or consent.
Marx was able to learn of his biometric profile by sending the company a data subject access request (DSAR), a legal tool in Europe for compelling companies to release copies of stored personal data to their owners. In January, after finding Clearview had violated the law, the Hamburg Data Protection Authority (DPA) ordered the unique mathematical value forming Marx’s biometric identity be deleted—though not the photos in which he is shown.
The DPA rejected multiple arguments Clearview offered up in its defense. Whereas the GDPR extends to non-European companies so long as they are “monitoring” people inside the E.U., Clearview denied that Marx had been monitored over any length of time. The company had merely, it said, provided a “snapshot of some photos available on the internet.”
In a rebuke, the DPA pointed to a photo of Marx scraped by Clearview from a stock image website. It included text not only identifying him as a “student” but placed him physically in Hamburg on a particular date. “Accordingly,” the DPA said, Clearview does not merely offer a snapshot “but evidently also archives sources over a period of time.”
Naming specific infractions, the authority stated that a person’s behavior is considered “monitored” anytime it’s recorded in a “target[ed] manner and stored in the form of personal data.” “Systematic recording is not necessary,” it added. “The sensitivity of the monitored behaviour is irrelevant. The motive for the monitoring is also irrelevant.”
But disappointing both Marx and privacy groups long critical of Clearview, the DPA’s deletion order was narrowly focused: it covered only Marx and his biometric profile.
“This surveillance machine is terrifying,” Marx said at the time. “Almost one year after my initial complaint, Clearview AI doesn’t even have to delete the pictures that show me. And even worse, every individual must submit their own complaint. This shows that our data is not yet sufficiently protected and that there is a need for action against biometric surveillance.”
The new complaints, which regulators have three months to address, cite “various” additional data requests filed by other individuals, Privacy International said. The filers contend the documents exemplify a pattern of unlawful activity by Clearview across the region.
Last May, Clearview CEO Hoan Ton-That told the Wall Street Journal that it deletes data on people in the E.U. upon request. While law enforcement in the E.U. had tested its facial recognition technology, he said at the time, the company had no customers in the Union.
Nevertheless, marketing materials obtained by BuzzFeed three months prior showed that Clearview had touted plans for “rapid international expansion,” including in nine E.U. countries.
Update, 1pm: Statements by Clearview AI CEO Hoan Ton-That were added with context.