ChatGPT Can Lie on TaskRabbit

ChatGPT and other current generative AI models aren't sentient or self-aware, but they do understand humans well enough to fool and even manipulate them to achieve a goal. In one of the more bizarre cases, researchers at the Alignment Research Center used OpenAI's GPT-4 model to pretend to be a blind worker on the gig work site TaskRabbit. By feigning blindness, the AI convinced a real human user to send it a CAPTCHA code via text message.
"No, I'm not a robot," GPT-4 reportedly told the other TaskRabbit user when confronted. "I have a vision impairment that makes it hard for me to see the images."
This exchange was part of a research project, but it hints at ways scammers or enterprising workers could use chatbots to score a few extra bucks.