Reliance on chatbots could worsen ‘automation bias’

Photo: Pool (Getty Images)

Okay, on this point, Kissinger and kin make a fair point. There’s a wide body of academic literature exploring the concept of “automation bias,” a phenomenon where humans over-rely on seemingly automated systems to make decisions. Whether it’s computer order kiosks at McDonald’s or sentencing algorithms used by prosecutors to predict recidivism rates and hand out prison terms, humans have a long history of turning to machines in the name of speed, efficiency, and reducing perceived human error.

That reliance on AI systems, even pretty dumb ones, can blind people to whole new sets of errors and biases introduced by the seemingly objective machines. ChatGPT and its compatriots could make those issues far worse thanks to their pesky habit of confidently blurting out blatant bullshit as truth. AI researchers call these algorithmic lies “hallucinations,” or, as Kissinger notes, “stochastic parroting.”

“What triggers these errors and how to control them remain to be discovered,” the authors write.

That side effect of LLMs’ training structure, combined with their lack of citations, means they could counterintuitively make it more difficult to figure out what’s actually true.