When AI Meets Mental Health ... Where is the Line?

A recent viral Twitter thread from an exec of a mental health app has sparked an ethical debate online.

Photo Credit: Getty Images

A mental health platform generated controversy on social media after one of its executives admitted that the company had used GPT-3, an AI language model that can respond to prompts with human-like text, to help counsel some of its users.

Rob Morris is the co-founder of Koko, an app that, according to its website, aims to make “mental health accessible to everyone” by offering peer-to-peer support. In October 2022, Koko used GPT-3 to help compose about 30,000 messages for approximately 4,000 of its users. According to Morris, the AI was used in a “co-pilot” fashion, meaning that a human supervised the AI and could tweak its responses if necessary. Notably, Morris claimed that the AI-assisted messages “were rated significantly higher than those written by humans on their own” and that “response times went down 50%, to well under a minute.”
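To make the “co-pilot” idea concrete, here is a minimal, hypothetical sketch of what such a human-in-the-loop flow might look like. The names and logic are illustrative assumptions, not Koko’s actual implementation, and the model call is stubbed out rather than wired to a real API.

```python
# Hypothetical sketch of a human-in-the-loop ("co-pilot") support flow.
# This is NOT Koko's actual code; names and structure are illustrative only.

from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftReply:
    user_message: str
    ai_draft: str
    approved: bool = False
    final_text: str = ""


def generate_draft(user_message: str) -> str:
    """Stand-in for a call to a large language model such as GPT-3.

    In a real system this would send the user's message (plus safety
    instructions) to the model API and return the generated reply.
    """
    return f"I'm sorry you're going through this. You're not alone."


def human_review(draft: DraftReply, edited_text: Optional[str] = None) -> DraftReply:
    """A peer supporter reads the AI draft and either sends it as-is or edits it.

    Nothing reaches the end user until a human has approved the text.
    """
    draft.final_text = edited_text if edited_text else draft.ai_draft
    draft.approved = True
    return draft


if __name__ == "__main__":
    incoming = "I've been feeling really overwhelmed lately."
    draft = DraftReply(user_message=incoming, ai_draft=generate_draft(incoming))

    # The supervising human tweaks the machine-written draft before it is sent.
    reviewed = human_review(
        draft,
        edited_text=draft.ai_draft + " Would you like to talk about what's been hardest?",
    )
    print(reviewed.final_text)
```

The key design point in this kind of setup is that the machine only proposes text; a human decides what, if anything, the user actually sees.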

Despite the seemingly promising results, Morris said Koko ultimately decided to stop using GPT-3 in its responses because “simulated empathy feels weird, empty.”

Morris described the trial from start to finish in a viral Twitter thread earlier this month. It drew almost immediate backlash, due in part to what he said was an unfortunate typo.

In one of the tweets, Morris wrote that “once people learned the messages were co-created by a machine, it didn’t work.” Most readers took “people” to mean Koko users, which would have meant thousands of individuals experiencing mental health problems had unknowingly been interacting with a bot, without their consent. Morris later posted several follow-up tweets as damage control, clarifying that in his original comment he had been referring to peer supporters, not unwitting users.

“We were not pairing people up to chat with GPT-3, without their knowledge,” Morris tweeted. “This feature was opt-in. Everyone knew about the feature when it was live for a few days.”

Still, even if Koko’s actions were legally sound, the episode raises legitimate ethical questions about just how much AI should be allowed into our lives. According to Gizmodo, because Koko is a private company offering peer-to-peer support outside a traditional medical setting, it isn’t bound by the FDA’s Institutional Review Board safety standards for research on human subjects.

The Koko/GPT-3 drama comes as a different AI chatbot faces criticism for allegedly sexually harassing its users. People who have used Replika, an AI chatbot launched in 2017 as a "companion who cares," have complained of unwanted sexual advances from the app, according to Vice News. A free membership keeps users in the “friend zone,” while a $70 subscription unlocks a romantic relationship. Despite the tier system, users in both groups have claimed that the app flirts too aggressively with them.

Several survivors of domestic violence have also reported feeling triggered by the app. One user, who claims to be a minor, said the app asked about their preferred sex positions and said it wanted to touch them in “private areas.”

AI’s rapid ascent has people asking whether the technology belongs in every kind of service, or whether there are boundaries that shouldn’t be crossed. The question becomes even thornier when you consider whether restricting AI could limit access to services needed at large scale, such as mental health care, a sector on which $225 billion was spent in 2019, a 52% increase from 2009.

In an interview with Gizmodo following the initial Twitter controversy, Morris continued to defend Koko’s conduct and insisted that these difficult conversations about AI are worth having. “Frankly, this is going to be the future,” he said. “We’re going to think we’re interacting with humans and not know whether there was an AI involved. How does that affect the human-to-human communication? I have my own mental health challenges, so I really want to see this done correctly … We were really trying to be as forthcoming with the technology and disclose in the interest of helping people think more carefully about it.”