Controversial AI Service for Flagging 'Risky' Babysitters Hits Pause After Backlash

Screenshot: Predictim

Predictim, a service that sells parents the opportunity to screen potential babysitters by crawling social media, the web, and criminal databases, has announced it’ll be taking a step back and rethinking its approach after a wave of bad press and public criticism.


We’ve reached out to Predictim to ask what prompted it to temporarily shut its service down and what changes it plans to make but did not receive an immediate reply. On its website, there’s a statement that reads:

We have been overwhelmed by the interest, press coverage, and input regarding our project. To be honest, this attention came earlier than we expected, and certainly before we had fully launched our contemplated services. We received some very helpful feedback on ways we could make Predictim even better. Clearly, people are hungry for better ways to make decisions in marketplaces where character, reputation, and trustworthiness are important. As a result, we have decided to pause our full launch and put our heads down to focus on evaluating how we offer our service and making changes to address some of the suggestions we received. While we are not offering any services at this time, please stay tuned and check back often for updates: we will be back!

Predictim allowed parents to submit a potential babysitter candidate to the company, and its automated system spat out a simple score of the candidate’s “risk” level along with flagged social media posts. When Gizmodo’s Brian Merchant tried the service out for himself, he found that his personal babysitter received a ranking of “Moderate Risk” (3 out of 5) for “Disrespectfulness.” The posts the system flagged seemed like harmless Twitter jokes. When Merchant tried it again with a friend who is quite vulgar online, the potty-mouthed candidate received a better risk score. Notably, Merchant’s babysitter is black, and the friend is white, but executives insisted their algorithms included no bias and that a human review was involved for candidates who received low scores.

A Washington Post article from November brought the company viral infamy and criticism from AI experts. Predictim also suddenly found itself blocked from using Facebook’s and Twitter’s APIs for violating their terms of service. Predictim’s CTO, Joel Simonoff, told Gizmodo that the service only used public data, so that development had no effect on its operations.

When Merchant tried to log in to his account this afternoon, Predictim’s website returned a 404 error message. The company is encouraging anyone interested in the service when it comes back to join its waiting list.

The company’s founders told Gizmodo that they felt they’d been misrepresented in the press and that some outlets “twisted” their words. But the fact is, no matter what changes they make to the platform, many people are going to be uncomfortable with the premise as long as it normalizes the idea that people’s employment will hinge on a machine’s evaluation of some shitposts on Twitter.

Guess they didn’t Predictim this outcome...

Eurgh. I’ll see myself out.