Chatbot therapy is risky, but it's not useless.

A.W. Ohlheiser is a senior technology reporter for Vox, writing about technology’s impact on people and society. They have also covered online culture and misinformation at the Washington Post, Slate, and the Columbia Journalism Review, among other places. They hold an MA in Religious Studies and Journalism from NYU.

I couldn’t find a therapist when I first thought I might need one, or when I finally worked up the energy to start Googling therapists with offices near me. I still hadn’t found one a month later, when, after seeing my depression screening results, my doctor moved up my next appointment, pulled up a list of providers, and helped me email each of them to ask whether they were taking new patients. My year-long search for care finally ended thanks to a friend who moved away and gave me the name of the person who had been treating him.

I’m lucky: My full-time job includes health insurance, I live in an area with many mental health providers, and I have access to out-of-network care. Many people who seek mental health treatment do so without the institutional, social, or financial resources I have.

Thanks to a nationwide mental health crisis and a shortage of providers in the United States, this lack of access (not to mention a health care system that, for many, makes finding an in-network provider difficult) is a problem in urgent need of solutions. And as with all such problems, there are people who say the solution is technology.

Enter AI. As generative AI chatbots have rolled out to an ever wider range of users, some people have started turning to readily available tools like ChatGPT for therapy. Vice spoke to some of these users earlier this year, and anecdotal reports of people praising their experiences with chatbots have circulated on social media. One Redditor even wrote a guide to “jailbreaking” ChatGPT so that the chatbot will get around its guardrails against providing mental health advice.

But ChatGPT is not built to be a therapist. It is not bound by the confidentiality and accountability requirements that guide the practice and ethics of human clinicians. And while there are consequences when a chatbot, say, fabricates sources for a research paper, those stakes are far lower than the potential consequences of a chatbot giving dangerous or inaccurate health advice to a person with a serious mental illness.

This does not necessarily mean that AI is useless as a mental health resource. Betsy Stade, a psychologist and postdoctoral researcher at the Stanford Institute for Human-Centered AI, says research on AI and therapy should be judged by the same metric used in psychology to evaluate any treatment: Does it improve patient outcomes? Stade, the first author of a paper on the responsible integration of AI into mental health care, hopes AI can help patients receive, and clinicians provide, better care with better outcomes. But it’s not as simple as opening up ChatGPT.

If you’re wondering where AI therapy stands today, or what it even is, here are some answers.


What is AI therapy?

The term “AI therapy” is used to mean a couple of different things. First, there are apps designed specifically for mental health support, some of which are available to the public and some of which are not. Then there are general-purpose AI chatbots that people use as if they were therapists. The dedicated apps have been around since long before tools like ChatGPT: Woebot, for example, is a service launched in 2017 that is designed to provide support based on cognitive behavioral therapy; during the pandemic, it gained popularity as a cheaper, more accessible alternative to traditional therapy.

More recently, there has been a proliferation of free or cheap chatbots capable of near-natural conversation, thanks to the large language models underpinning tools like ChatGPT. Some people are turning to this new generation of AI-powered tools for mental health support, a task they were not designed for. Others have done so without realizing it: Last January, the co-founder of the mental health platform Koko announced that it had provided AI-generated responses to thousands of users who believed they were talking to a real person.

It is worth noting that the conversation about chatbots and therapy sits alongside research into roles AI could play in mental health care beyond simulating therapy sessions. AI tools could, for example, help human clinicians do things like organize their notes and stay adherent to evidence-based treatments that have a track record of improving patient outcomes.

Why do people turn to chatbots for therapy, even though they weren’t designed for it?

There are a few theories about why so many people seeking support turn to AI chatbots. Maybe they get some form of emotional or social support from the bot. But how much support any given person gets will vary, and it certainly depends on their mental health needs and their expectations of therapy, as well as on what the app actually provides.

Therapy means very different things to different people, and people come to therapy for different reasons, but generally with the aim of getting help managing whatever they’re facing, says Lara Honos-Webb, a clinical psychologist and co-founder of a startup focused on ADHD. Those who find ChatGPT helpful may be approaching the tool at a “problem, solution” level, she said. Such tools can seem particularly good at reframing thoughts or providing “behavioral activation,” such as a list of healthy activities to try. Stade added that, from a research perspective, experts don’t really know exactly what people are getting out of these interactions.

“Apart from very subjective, qualitative reports of what individual people are doing, and then posting on Reddit about their experiences, we don’t have a good picture of what’s going on out there,” she said.

Why is chatbot therapy risky?

There are some obvious concerns here: Privacy is a big one. That includes the handling of the training data used to make generative AI tools better at simulating therapy, as well as the privacy of users who disclose sensitive medical information to a chatbot while seeking help. Many of these systems, as they exist today, also have biases built in, often reflecting and reinforcing the broader systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy, whether it is poorly understood or delivered by software that was never designed for mental health care, is that it could harm people by failing to provide the support and care they need. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses AI tools like ChatGPT to organize her thoughts while writing articles about ADHD, notes that human therapists pick up on a wealth of cues and nuances that AI is not yet equipped to catch.

Stade, in her paper, points out that while large language models show “promise” in carrying out some of the skills involved in psychotherapy, there is a difference between having therapy skills and using them effectively. She expressed particular concern about how these systems handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.

Honos-Webb cites the example of an older woman who has recently developed an eating disorder. One level of therapy might focus narrowly on that behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on far more than that. Over time, therapist and patient can draw connections to recent events: Perhaps the patient’s spouse has just retired, and she is angry that her spouse is suddenly home all the time, taking up her space.

“A lot of therapy is responding to the situation, to what you’re seeing and what you’re noticing,” explains Honos-Webb. And the success of that work depends directly on the relationship between therapist and patient.

But can AI help solve the mental health care access crisis?

Ethically applied, AI could be a valuable tool for helping people get better outcomes when they seek mental health care. But Stade points out that the forces behind this crisis go beyond technology, and so must the solutions, which will involve more than new apps.

When we asked Stade about AI’s role in solving the US mental health care access crisis, she said, “I believe we need universal health care,” and that a lot still needs to happen in addition to anything in the AI space.

“That said, I think there’s a huge opportunity to expand and fill the gap with this tool,” she added.

FAQs

What are the risks of chatbots?

Common risks include spoofing or impersonation, data tampering, and data theft.

Are AI chatbots safe to use?

Not always: Attackers can hack into the underlying systems and cause a chatbot to spread malware or ransomware to users’ devices.

Can you use a chatbot for therapy?

Chatbots in mobile mental health care have demonstrated effectiveness in broadening access to traditional therapy in a cost-effective and convenient manner.
