Microsoft Asks Congress to Regulate Face Recognition

Microsoft’s president and chief legal officer, Brad Smith, called for federal regulation of face recognition in a new blog post on Friday. Half of all adults already have their face in a federal database, and vendors are supplying face recognition technology to schools, airports, and baseball stadiums. Federal regulation could help address numerous privacy concerns while also giving the public a voice in the tech’s advancement, he argues.

Smith calls for federal regulation because, in our currently unregulated state, leaving individual companies to make ethical decisions on face recognition “is an inadequate substitute for decision making by the public and its representatives in a democratic republic.”

“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Smith writes. “As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Smith urges Congress to convene a bipartisan commission of experts to create, in essence, a homegrown version of the GDPR: a regulatory framework that balances the potential of face recognition against the need to prevent its misuse. “This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology,” he writes.

The statement continues by prompting readers to consider whether we should press the government to adopt various regulatory measures advanced by privacy experts. It’s widely known that face recognition software can be buggy and inaccurate on darker-skinned people. Smith raises the possibility of a federal law defining a minimum performance level for accurate identifications, banning face recognition software with unacceptably high misidentification rates. Another possibility: requiring police agencies to post public notices anywhere face recognition is used on the public. This would apply to retailers as well, some of whom have quietly sought patents on identifying shoppers and matching them with details about their preferences.

“It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike,” Smith writes. Theoretically, setting a minimum benchmark for face recognition accuracy would push all suppliers to refine the tech, producing fewer false positives. If shoppers are told which stores use face recognition, they could avoid those locations, sending companies a clear message about whether they consent to the practice.
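
To make that benchmark idea concrete, here is a minimal Python sketch of what a statutory accuracy floor might look like as a check against a vendor’s evaluation results. The 0.1 percent cap, the function names, and the data format are hypothetical illustrations, not anything Smith’s post or any draft law specifies.

# Hypothetical compliance check for a face recognition accuracy floor.
# The cap below is an assumed illustration, not an actual legal standard.
MAX_FALSE_MATCH_RATE = 0.001  # i.e., at most 0.1% of non-matching pairs flagged

def false_match_rate(results):
    """results: list of (predicted_match, true_match) boolean pairs
    from a labeled evaluation set."""
    non_matching_pairs = [pred for pred, truth in results if not truth]
    if not non_matching_pairs:
        return 0.0
    # Fraction of genuinely non-matching pairs the system wrongly flagged.
    return sum(non_matching_pairs) / len(non_matching_pairs)

def vendor_is_compliant(results):
    return false_match_rate(results) <= MAX_FALSE_MATCH_RATE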

The statement is as robust a discussion as we’ve seen on the topic from Microsoft, which received lukewarm praise after announcing it had begun addressing the racial disparities in its own face recognition software. Which brings us to Microsoft’s recent moral crisis prompted by its contract with ICE. Smith’s post brings up the backlash, reiterates that the company doesn’t currently supply ICE with face recognition technology, then moves on.

But consider the following anecdote from today’s NYT write-up:

April Isenhower, a Microsoft spokeswoman, declined to answer questions about whether the company provided facial recognition services to other government agencies. She also declined to discuss the company’s position on consumer consent for facial recognition.

Microsoft remains its own best example of the limits of asking Silicon Valley to self-disclose anything incriminating. Still, face recognition is just one of a whole suite of technologies, including body cameras, drones, and so-called smart policing, in need of regulation, public input, and, what’s missing from Smith’s blog post, the ability for the American people to say no.

UPDATED 6PM EST: Updated with comment from Neema Singh Guliani, ACLU legislative counsel, who calls for a moratorium on face recognition:

“Congress should take immediate action to put the brakes on this technology with a moratorium on its use, given that it has not been fully debated and its use has never been explicitly authorized. And companies like Microsoft, Amazon, and others should be heeding the calls from the public, employees, and shareholders to stop selling face surveillance technology to governments.”

DISCUSSION

I was going to write a flippant piece of fiction... but this deserves a real opinion, not some bad joke designed to poke at conservatives.

As a computer scientist with an AI hobby... I agree with the ACLU. This technology is too inaccurate. When you are using a purpose-built system designed for local authentication... yes, it will work 90-95% of the time. That’s good enough to log into your computer, but is it good enough to send someone to jail for life, or even to a firing squad for execution? I hope we never ask that question in the first place and decide not to go down this path. The reason is twofold: first, we are not a fair and just society, so any algorithms we design will be biased in the same ways we are biased. Second... in a fair and just society, such algorithms would be unnecessary, because the system would be constructed very differently, with all judgments based on action-based evidence and the level of harm done by that action.
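
To put numbers on that gap, here’s a quick back-of-the-envelope sketch of the base-rate problem; every figure below is assumed purely for illustration:

# Base-rate arithmetic; all figures are hypothetical.
accuracy = 0.95                  # assume the system is right 95% of the time
false_positive_rate = 1 - accuracy
faces_scanned = 1_000_000        # a crowd checked against a watchlist
real_suspects_present = 10       # watchlisted people actually in the crowd

false_alarms = false_positive_rate * faces_scanned   # 50,000 wrong flags
true_hits = accuracy * real_suspects_present         # 9.5 correct flags

# Chance that any single "match" alert points at the right person:
precision = true_hits / (true_hits + false_alarms)
print(round(precision, 5))       # ~0.00019: roughly 1 real hit per 5,000 alerts

Even at 95% accuracy, virtually every alert in that scenario is a false accusation. Authentication-grade performance simply does not translate to surveillance scale.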

When we consider the potential for inaccuracy, we have to remember that even purpose-built equipment requires very accurate measurements and data. Using random surveillance... using systems that might be modified to produce inaccurate results at best, or that were never designed for this use... and, of course, organizations asking for the algorithm to be “tweaked” to provide a wider range of matches to ‘cover all their bases,’ as it were... you can see where this could go really bad really fast.
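
To illustrate that “widening the net” effect, here’s a toy simulation where random unit vectors stand in for face embeddings; the embedding size, gallery size, and thresholds are all made up for the example:

# Toy demo: loosening a match threshold inflates the candidate pool.
# Random vectors stand in for face embeddings; no real matches exist.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 128))            # 100k enrolled "faces"
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

scores = gallery @ probe                             # cosine similarities
for threshold in (0.40, 0.30, 0.20):                 # "widening the net"
    print(threshold, int((scores >= threshold).sum()))
# The match count climbs steeply as the threshold loosens, even though
# the probe matches no one in the gallery by construction: every name
# swept in is an innocent lookalike.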

There are some algorithms out there that, given proper equipment or enough of a sample, can strip away every layer of clothing a person is wearing, reveal what they have in their pockets and concealed in their pants... and even provide an image of what lies under masks and body armor, all based on motion and phase-detection. But I very much think the government would ignore anything that was “accurate,” because what the government wants to do is prosecute “targets,” not identify anomalies.

Such an algorithm, if it existed, would tag people based on having weapons or concealing items. But the number of white people it would identify would be far greater than all others: more white people carry concealed weapons... more white people have odds and ends and things they conceal. They carry more suspicious objects because they are not subject to as much scrutiny as minorities. Such an algorithm wouldn’t notice things like ethnicity and skin color... it would simply catch undercover/off-duty cops and people who conceal carry or carry items they’re not supposed to carry.

And I’d still argue against such an algorithm even if it were very accurate. People wear clothing not just to protect themselves from the elements but also to protect themselves from the eyes of other people. I believe in a fundamental right to privacy that should not be violated, even if that means someone intending to commit a crime goes free. It’s getting too close to thought-police for my liking. And everyone has thoughts that occasionally go to bad places for an instant before they shake it off and return to everyday life.

A person should be judged only and solely on what they actually do... not what they might do or think. And in this country, you are innocent until proven guilty which requires evidence of actions. Therefore our privacy deserves to be preserved at all costs, even if that cost is that we don’t catch a person before they attempt to take a life.

Here’s a generic example:

You might think that some of the ideas the KKK had were good... but that’s a very different thing from going out and burning a cross in a neighbor’s yard or hanging that neighbor from a tree, or even talking up the KKK to your friends and family.

Would I like you as a person if I knew you had those thoughts even if you kept them silent in your own mind? No.

Do I think you should be judged for thinking them without sharing them? No.

Do I think you should be judged if you turn those thoughts to action, participate in hate marches and perpetrate violence, and/or generally encourage others to indulge in such thoughts? Hell yes. You deserve it the moment those thoughts stop being just “thoughts” and start becoming actions.

Telling people about your hatreds and sharing them with others is an action. Even if the government can’t arrest and prosecute you for it, you can and should lose your access to local society because of it. And you deserve what you get from it. There are laws against those other actions, and you should be prosecuted under them. And there will be evidence. Actions leave evidence. Thoughts do not.

In a fair and just society, we would apply those laws equally. And in a fair and just society, evidence would be gathered, and there would be rehabilitation and repercussions for acts of harm proportional to the severity of that harm. You would not need algorithms; you’d just need fair people who gave a damn about other people and about protecting and nurturing that society to be the best place it can possibly be.

Our society is not fair and just. And these automated algorithms in such a society are just a lazy way of finding excuses to justify prejudices or enhance inequality because of the way the results are interpreted by the people in charge. And we are all lessened when we allow this state of affairs to continue without challenge.