Clearview AI Is Working on Augmented Reality Goggles for Air Force Security

The shady face recognition firm is now landing contracts with the military and keeping details quiet.

Photo: Chris Jung (Getty Images)

Clearview AI, the shady face recognition firm that claims to have signed contracts with federal, state, and local police across the country, has now landed a roughly $50,000 deal with the U.S. military for augmented reality glasses.

First flagged by Tech Inquiry’s Jack Poulson, Air Force procurement documents show that the branch awarded a $49,847 contract to Clearview AI for the purposes of “protecting airfields with augmented reality facial recognition; glasses.” The contract falls under the Small Business Innovation Research (SBIR) program, meaning Clearview’s job is to determine for the Air Force whether such an application is feasible.

Bryan Ripple, a media lead at the Air Force Research Laboratory Public Affairs, told Gizmodo via email that Clearview will conduct a three-month study under which “no glasses or units are being delivered under contract,” nor are any prototypes. Clearview, he wrote, stipulated “that security personnel are vulnerable while their hands are occupied with scanners and ID cards” and AR goggles would allow them to “remain hands-free and ready during this timeframe.”


“Clearview AI’s Augmented Reality (AR) Glasses perform facial recognition scanning to vet backgrounds and restrict unauthorized individuals from entering bases and flightlines,” Ripple wrote. “This 100% hands-free identity verification wearable device allows Defenders to keep their weapons at the ready, increase standoff and social distance, and confirm authorized base access using rapid and accurate facial biometrics while keeping threats distant. The results are improved safety at entry control points and for bases, faster identity verification without manual ID card checks, and cost savings by replacing the need for large permanent camera installations.”

In a promotional document shared by the Air Force, Clearview argued that in the time it takes to scan an ID card at the entry point to a military facility, “A criminal or terrorist can pull a gun, knife, or weapon during this brief but critical moment, kill the Defender, and access the base.” The company argued the AR glasses would increase “standoff distance,” save guards time while vetting high volumes of traffic, and allow them to keep their distance from anyone carrying a contagious disease.


Such a system wouldn’t be very far off from how Clearview’s technology already works; it would just be face-mounted. Users upload pictures to an app, which compares them against the company’s database of faces. Back in 2020, the New York Times reported that Clearview’s app contained code that would allow pairing with AR glasses, theoretically meaning users could walk around identifying anyone whose image had already been obtained by Clearview’s data-scraping operations.
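For readers curious about the mechanics, face recognition systems of this kind generally reduce each face image to a numeric “embedding” vector and then score a new face against stored vectors, returning the closest match above some threshold. The sketch below illustrates that matching step only; the function names, vectors, and threshold are hypothetical, and Clearview’s actual pipeline is proprietary.

```python
# Illustrative sketch of embedding-based face matching.
# All names, vectors, and the threshold are hypothetical examples;
# this is not Clearview's implementation.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """Return the enrolled identity whose stored embedding best matches
    the probe embedding, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of two enrolled identities (3-dimensional embeddings
# for readability; real systems use hundreds of dimensions).
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.21], db))  # vector close to alice's
```

The threshold is the key operational knob: set too low, the system produces false matches on strangers; set too high, it fails to recognize enrolled people — the tradeoff behind much of the reliability criticism discussed below.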


Clearview has been the subject of massive controversy pretty much everywhere it pops up, and for good reason. The Huffington Post reported that its founder, Hoan Ton-That, and other individuals who worked for the company have “deep, longstanding ties” to far-right extremists. Whether Clearview legally obtained the photos it uses to populate its databases and train its face recognition algorithms is also a matter of dispute. Ton-That has bragged that the company’s databases contain billions of photos scraped from the public web. While mass-downloading publicly accessible data is legal in the U.S., some states have biometrics privacy laws on the books—most notably Illinois, where Clearview is battling an ACLU-backed lawsuit claiming the company was legally required to obtain the consent of people entered into its database.

In other countries, Clearview has run into more stringent opposition. In May 2021, regulators in France, Austria, Italy, Greece, and the United Kingdom collectively accused it of violating European data privacy laws. Clearview exited Canada entirely in 2020 after two federal privacy investigations, and Canadian Privacy Commissioner Daniel Therrien said in February 2021 that Clearview’s technology broke laws requiring consent for the collection of biometrics and constituted illegal mass surveillance. Canadian authorities demanded that Clearview delete images of Canadians from its database, with Australian regulators issuing similar demands later that year.


Ton-That insisted in an email statement to Gizmodo that the technology being tested with the Air Force does not include access to its troves of scraped images.

“We value the United States Air Force, and their position in defending the nation’s security and interests,” Ton-That wrote. “We continually research and develop new technologies, processes, and platforms to meet current and future security challenges, and look forward to any opportunities that would bring us together with the Air Force in that realm.”


“This particular technology remains in R&D, with the end goal being to leverage emerging capabilities to improve overall security,” he added. “The implementation is designed around a specific and controlled dataset, rather than Clearview AI’s 10B image dataset. Once realized, we believe this technology will be an excellent fit for numerous security situations.”

Face recognition is already being used by cops and the feds. Clearview, for example, has signed contracts with the FBI and U.S. Immigration and Customs Enforcement. That’s despite current face recognition tech’s reputation for being unreliable, easily abused for racial profiling, and generally invasive. The idea that police could get their hands on goggles that would allow them to run everyone they see against a face recognition database, for example, is pretty dystopian.


The U.S. military has expressed interest in AR for obvious reasons—the many ways in which digital overlays could enhance the productivity, efficiency, and lethality of troops—but the technology is in its nascent stages. The Air Force is currently testing the use of AR goggles to assist in aircraft maintenance training and operations, and it has done proof of concept work related to weapons training and virtual command centers. Last year, the U.S. Army delayed a $22 billion program to equip soldiers with AR goggles, the Integrated Visual Augmentation System (IVAS), saying it wouldn’t be ready for deployment until at least fall 2022.

IVAS is based on Microsoft HoloLens 2 and has been tested since 2019. According to Task & Purpose, it can be used for training, live language translation, face recognition, navigation, providing situational awareness, and projecting locations or objectives. It also contains the kind of high-resolution thermal and night sensors that previously would have been separate gear. Bloomberg reported earlier this month, however, that internal Pentagon assessments have deemed it nowhere near ready for use in actual combat, and only 5,000 goggles have been ordered so far. Testing to determine whether soldiers can rely on IVAS in combat scenarios won’t be carried out until May.


Update: 2/3/2022 at 4:55 p.m. ET: This article has been updated with additional information provided by the Air Force.