Less than a day after Lemonade tweeted about its facial recognition-powered fraud-detection systems, the insurance startup has fully backtracked. On Wednesday, the company put out a formal apology for what it says were “poorly worded” claims, and took down what it’s now calling an “awful thread.”
For those who have never filed a claim using Lemonade before, the entire process is seemingly tailor-made for people who hate filling out forms. Instead, the company largely relies on an AI chatbot (named “Jim”) that walks you through a basic questionnaire before asking you to flip on your camera so that same chatbot can analyze your face for signs of potential fraud—and potentially decline your claim as a result.
In the now-deleted tweet thread, Lemonade bragged that the company picks up on more than 1,000 “non-verbal” cues that “traditional insurers” might miss, steamrolling over the obvious dystopian implications that come with this sort of tech. Thankfully, experts in the tech ethics community were quick to call the company out.
“Imagine that you’re an autistic person whose home just burned down. You’ve just gone through the most stressful event of your life—and now you have to worry about policing your own speech patterns so your insurance company doesn’t flag your claim as fraud,” said one follower. Others pointed out that this system was almost tailor-made to flag people who might be in a state of shock following, say, a house fire, or after having their car stolen.
There’s also the fact that facial recognition systems like the one touted by Lemonade have—by and large—failed people of color. There’s a crushing amount of evidence detailing how the facial recognition tech used in housing applications, airport security, and the entire criminal justice system misclassifies non-white faces. Naturally, Lemonade’s followers were sure to mention that as well.
The full apology that Lemonade posted on its site doesn’t address any of those issues, beyond saying that the company doesn’t use facial-recognition tech based on outdated concepts like “phrenology or physiognomy.”
The company went on to add that it “never, and will never” let its AI systems auto-reject claims. This statement is a pretty significant step back from what Lemonade was telling the SEC when it went public less than a year ago. If you comb through those documents, Lemonade lays out that in “approximately a third” of the claim cases its platform handles, its Jim chatbot carries out the entire process “without any human involvement.”
When contacted about whether Jim had ever inadvertently auto-declined an applicant, a Lemonade spokesperson assured Gizmodo that wasn’t the case.
“We get the apparent contradiction [between what was said in the blog and what’s written in the SEC disclosures],” the spokesperson said. “In these sorts of documents and our marketing materials, we’ll use terms like ‘AI’ or ‘automation’ to cover more broad concepts.”
The spokesperson went on to explain that in cases where there are “true claims”—i.e., a claim that fully falls within someone’s actual policy—there’s always a human on the other end of the line, reviewing that application.
But when it came to claims of ableism, racial bias, or the other ills caused by facial recognition software, the company had nothing to add.