A U.S. Customs and Border Protection officer helps a passenger navigate facial recognition kiosks at a United Airlines gate before boarding a flight to Tokyo, at George Bush Intercontinental Airport, in Houston.
Photo: David J. Phillip / Associated Press

U.S. Homeland Security officials faced a barrage of questions on Wednesday from House lawmakers frustrated—as several put it—over the alleged lack of transparency exhibited by Customs and Border Protection (CBP) when it comes to the legally murky use of face recognition tools at U.S. ports of entry.

The hearing before the House Homeland Security Committee was overshadowed by a series of news reports detailing the apparent mining of U.S. motorists’ photographs by Department of Homeland Security agents in multiple states without a judge’s approval, as well as a recent data breach that CBP tried, and largely failed, to downplay to the public.

Chairman Bennie Thompson, Democrat of Mississippi, framed the hearing as an opportunity to address the public’s unease over Homeland Security’s use of face recognition; in particular, whether CBP has acted in excess of its statutory authority by scanning the faces of U.S. citizens at the border—voluntarily or not.

Experts widely agree the technology is prone to error, especially when applied to women and people of color, as studies have repeatedly shown. It is not fair, Thompson said, that certain Americans are expected to “shoulder a disproportionate burden of the technology’s shortcomings.”

“Before the government deploys these technologies further, they must be scrutinized and the American public needs to be given a chance to weigh in,” said the chairman, echoing the unanimous opinion of a panel of academic and civil society experts who, while testifying before lawmakers this May, called on Congress to “push the pause button” on face recognition and its largely unregulated use by federal and state law enforcement agencies.

The central focus of Wednesday’s hearing was not the reports of sexual abuse and retaliation at an Arizona CBP facility revealed by NBC News late Tuesday, but the question of whether CBP has exceeded its legal authority to use face recognition at U.S. borders and airports. That authority does not extend to collecting biometric data on U.S. citizens, though the faces of many are being scanned nonetheless.

Under current law, CBP is permitted only to collect the biometric information of foreign nationals through what’s known as the Biometric Exit Program. However, CBP is nevertheless scanning the faces of Americans at ports of entry in a pilot program designed to determine whether travelers are foreign nationals.

The difference, John Wagner, the deputy executive assistant commissioner of the agency’s Office of Field Operations, testified Wednesday, is that photos of Americans are deleted by CBP after a period of 12 hours. (In the interim, the photos are held by the agency’s Office of Biometric Identity Management.) Americans may also opt-out of participating in the facial comparison process, in which case, alternative means of verifying their identities and documents are used.

“U.S. citizens are clearly outside the scope of the biometric entry-exit tracking. The technology we’re using for the entry-exit program we’re also using to validate the identity of the U.S. citizen,” Wagner said. “Someone has to do that.”

CBP argues this specific use of face recognition does not equate to an expansion of its existing biometric entry-exit authority. But many members of Congress disagree. In a letter last month, nearly two dozen stated that CBP’s current procedures do not allow travelers “requisite advanced notice to make an informed decision on their willingness to participate” in the face recognition program.

The ultimate question of whether CBP is operating within the scope of the Biometric Exit Program remained unresolved as of Wednesday. Instead, Thompson and other oversight committee members requested access to the written policies and legal arguments CBP relied on to subject U.S. citizens to biometric scans.

“Using data that travelers voluntarily provide, we are simply automating the manual identity verification process done today,” Wagner said.

Face comparison, he added, allows CBP to better identify travelers using falsified documents or attempting to evade screening in order to enter the U.S., “including those who present public safety or national security threats,” in addition to visitors who’ve “overstayed their authorized period of admission.”

A separate highlight of the hearing was Wagner’s insistence that CBP has not experienced the same demographic-based errors repeatedly uncovered by expert studies conducted in the United States. An MIT study in January, for instance, determined that face recognition software marketed by Amazon to law enforcement agencies misidentified women as men 19 percent of the time. That number jumped to 31 percent for darker-skinned women. Other studies have had similar results.

“So I get it twice,” said Congresswoman Sheila Jackson Lee, a Democrat of Texas, who is African American. She pointed to a study conducted by the American Civil Liberties Union (ACLU) last year in which Amazon’s face recognition tool, Rekognition, falsely matched 28 members of Congress with criminal mugshots. Lawmakers in the study who are people of color made up only 20 percent of the sample, yet accounted for 39 percent of the false matches.

“In a review of our data, we are not seeing any significant error rates that are attributable to a specific demographic,” Wagner testified, attributing CBP’s allegedly bias-free system to the high quality of its photos. (Notably, ACLU’s test used professional headshots of lawmakers and police mugshots.)

Wagner’s assertions about CBP’s bias-free capabilities were repeatedly undermined by the testimony of another witness: Charles Romine, director of the information technology laboratory at the National Institute of Standards and Technology.

“The [impact] of those demographic effects is diminishing,” Romine testified, thanks largely to the advent of convolutional neural networks. But presumably, face recognition tools will always exhibit some form of bias. “It is unlikely that we will ever achieve a point where every single demographic is identical in performance across the board, whether that’s age, race, or sex,” he said.

The goal is to know, he said, “exactly how much the difference is.”