In academic or medical contexts, the Food and Drug Administration requires researchers to run their studies through an Institutional Review Board (IRB) meant to ensure safety before any tests begin. In most cases, running scientific experiments on human subjects requires getting people’s informed consent, which includes providing test subjects with exhaustive detail about the potential harms and benefits of participating.


But the explosion of online mental health services provided by private companies has created a legal and ethical gray area. At a private company providing mental health support outside of a formal medical setting, you can basically do whatever you want to your customers. Koko’s experiment didn’t need or receive IRB approval.

“From an ethical perspective, anytime you’re using technology outside of what could be considered a standard of care, you want to be extremely cautious and overly disclose what you’re doing,” said John Torous, MD, the director of the division of digital psychiatry at Beth Israel Deaconess Medical Center in Boston. “People seeking mental health support are in a vulnerable state, especially when they’re seeking emergency or peer services. It’s a population we don’t want to skimp on protecting.”


Torous said that peer mental health support can be very effective when people go through appropriate training. Systems like Koko take a novel approach to mental health care that could have real benefits, but users don’t get that training, and these services are essentially untested, Torous said. Once AI gets involved, the problems are amplified even further.

“When you talk to ChatGPT, it tells you ‘please don’t use this for medical advice.’ It’s not tested for uses in health care, and it could clearly provide inappropriate or ineffective advice,” Torous said.


The norms and regulations surrounding academic research don’t just ensure safety. They also set standards for data sharing and communication, which allows experiments to build on each other, creating an ever-growing body of knowledge. Torous said that in the digital mental health industry, these standards are often ignored. Failed experiments tend to go unpublished, and companies can be cagey about their research. It’s a shame, Torous said, because many of the interventions mental health app companies are running could be helpful.

Morris acknowledged that operating outside of the formal IRB experimental review process involves a tradeoff. “Whether this kind of work, outside of academia, should go through IRB processes is an important question and I shouldn’t have tried discussing it on Twitter,” Morris said. “This should be a broader discussion within the industry and one that we want to be a part of.”


The controversy is ironic, Morris said, because he took to Twitter in the first place to be as transparent as possible. “We were really trying to be as forthcoming with the technology and disclose in the interest of helping people think more carefully about it,” he said.

Correction 1/11/2022, 12:53 p.m. ET: A previous version of this post incorrectly stated that it’s illegal to run scientific experiments on human subjects without informed consent. In some cases, Institutional Review Boards grant exceptions to consent rules.

Video: Can AI Help with Mental Health?