Privacy groups are demanding transparency following news that ID.me—the biometric identity verification system used by the IRS and over 27 states—has not been fully transparent about how its facial recognition technology works.
In a LinkedIn post published on Wednesday, ID.me founder and CEO Blake Hall said the company verifies newly enrolling users’ selfies against a database of faces in an effort to minimize identity theft. That runs counter to the more privacy-preserving way ID.me has pitched its biometric products in the past, and it has drawn scrutiny from advocates who argue that members of the public compelled to use ID.me for basic government tasks have been given unclear information about how the system works.
On the company’s website and in white papers shared with Gizmodo, ID.me suggests its services rely on 1:1 face match systems that compare a user’s biometrics to a single document. That’s opposed to so-called 1:many facial recognition systems (the kind deployed by the likes of now-notorious firms like Clearview AI) that compare users to a database of (many) faces.
Privacy experts generally agree that 1:many is more susceptible to error and bias (though groups like the Electronic Frontier Foundation have expressed concerns over 1:1 as well). However, while ID.me has pitched itself primarily on the back of 1:1 face match, new comments from the company’s founder show that, at least in some scenarios, the company compares some users’ faces to a database rather than a single document. That potentially implicates millions of Americans who are being told by federal and state governments to sign up for the site to view their taxes online or file for unemployment benefits.
Specifically, ID.me told Gizmodo it uses 1:many face recognition when users first enroll in its system to prevent identity theft, in addition to the 1:1 check it uses to verify someone’s identity. In other words, ID.me uses 1:1 to make sure you are you, and 1:many to make sure you’re not someone else.
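The distinction between the two checks can be sketched in code. What follows is an illustrative toy example only—not ID.me’s actual system—using made-up embedding vectors and an arbitrary distance threshold. A 1:1 check compares one probe face against a single enrolled template, while a 1:many search compares it against an entire database:

```python
import numpy as np

def one_to_one(probe, template, threshold=0.6):
    # 1:1 verification: compare a probe embedding to a single
    # enrolled template (e.g., the face on an ID document).
    # Returns True if the faces are close enough to "match."
    return np.linalg.norm(probe - template) < threshold

def one_to_many(probe, database, threshold=0.6):
    # 1:many identification: compare the probe against every
    # embedding in a database and return the indices of all
    # entries that fall within the match threshold.
    dists = np.linalg.norm(database - probe, axis=1)
    return np.flatnonzero(dists < threshold)

probe = np.array([1.0, 0.0])            # toy "face embedding"
template = np.array([1.0, 0.1])         # toy enrolled document face
database = np.array([[1.0, 0.1],        # toy database of enrolled faces
                     [5.0, 5.0],
                     [0.9, 0.0]])

print(one_to_one(probe, template))      # single comparison
print(one_to_many(probe, database))     # comparison against everyone
```

The privacy concern advocates raise maps directly onto the second function: a 1:many search necessarily scans every face in the database, so everyone enrolled is implicated in every search, not just the person being verified.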
The revelation of ID.me’s use of 1:many face recognition drew immediate criticism from a wide range of privacy groups. One of those, digital rights nonprofit Fight for the Future, released a statement accusing the company of “lying about the scope of its facial recognition surveillance.” In an emailed statement, Fight for the Future campaign director Caitlin Seeley George said the revelations should make government agencies reconsider their partnerships with ID.me.
“The IRS needs to immediately halt its plan to use facial recognition verification, and all government agencies should end their contracts with ID.me,” Seeley George wrote. “We also think that Congress should investigate how this company was able to win these government contracts and what other lies it might be promoting.”
They weren’t alone. In an interview with Gizmodo, ACLU Senior Policy Analyst Jay Stanley expressed deep concern over what he described as a lack of transparency from ID.me, particularly given its close relationship with government services.
“The fact that they [ID.me] weren’t transparent about this is just another sign we’re making up important policies for how Americans relate to their government by letting private companies make things up as they go along in secret,” Stanley said. “If this company was a government agency they would be subject to FOIA and the Privacy Act and other checks and balances that have been developed over many decades to forestall the kinds of problems that can emerge.”
Stanley also expressed concern over the database ID.me maintains to prevent fraud, and over whose faces may end up in it.
Meanwhile, in an email to Gizmodo, the Surveillance Technology Oversight Project (STOP), which has previously raised concerns over ID.me’s relationship with the IRS, echoed Stanley’s concerns over transparency and warned that ID.me’s use of 1:many facial recognition means the system could be more susceptible to bias than previously known.
“This dramatically expands the risk of racial and gender bias on the platform,” STOP Executive Director Albert Fox Cahn told Gizmodo. “More fundamentally, we have to ask why Americans should trust this company with our data if they are not honest about how our data is used. The IRS shouldn’t be giving any company this much power to decide how our biometric data is stored.”
In follow-up statements, ID.me reiterated that it checks newly enrolling users against its own database of selfies “to check for prolific attackers and members of organized crime who are stealing multiple identities.” The company says less than 0.1% of all users are flagged as potential identity thieves. Users flagged by the facial recognition system aren’t blocked outright but are instead redirected to a video chat verification with one of the company’s team members.
“Without this control in place to detect repeat attackers, criminals would victimize thousands of innocent people per day,” ID.me said. “Given the threat environment, the alternative is to either accept massive amounts of fraud or to simply take the programs offline altogether.”
News of ID.me’s facial recognition database comes one week after Gizmodo and other outlets wrote about the IRS’s decision to mandate ID.me’s verification process for anyone trying to access their IRS.gov account. Since then, numerous activist groups, including the ACLU and STOP, have spoken out publicly against the plan.
The issue also gained Democratic Sen. Ron Wyden’s attention. In a tweet, Wyden said he was “very disturbed” that some taxpayers may feel like they need to submit to a facial recognition scan. “While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.”
Though this particular controversy narrowly focuses on ID.me, Stanley, the ACLU policy analyst, said the transparency issues it highlights are evidence of an overall system in need of review from top to bottom.
“The infrastructure here of having a for-profit company doing what is probably an essential government function [verifying identities] is a broken way to build this kind of identity proofing system.”