When there aren’t enough therapists to go around, should we rely on AI?
Growing evidence suggests that we are in the middle of a global mental health crisis. Disorders such as anxiety and depression are on the rise, affecting people of all ages and from all walks of life. According to the World Health Organization, depression is now the leading cause of disability worldwide, and suicide is the fourth leading cause of death among 15-29-year-olds globally. The lack of resources and funding for mental health services in many parts of the world certainly isn’t helping. But what about recent advances in artificial intelligence (AI)? Can they help?

AI is increasingly being looked to as a way to screen for, or support, people who are dealing with isolation, depression or anxiety. It can track human emotions, analyze and respond to a person’s mood, and try to mimic the actions of a human therapist. But does simulated empathy really work? Do well-meaning words like “I understand” mean anything when they come from a machine that has no lived, human experience?

AI shouldn’t be regarded as an alternative to a therapist or a replacement for human connection. Humans do, at least for now, have a far better capacity for empathy than AI. Nevertheless, a human may not always be the best person to turn to for someone suffering from mental health problems. Many people struggle with thoughts they do not wish to share with others, whether for personal reasons or because of the stigma and social disapproval that can surround mental health care. And many people simply don’t have access to a professional at all. AI can be easy to talk to and can serve as a tool for reflecting on feelings and emotions. It may not be human, but it can still offer guidance grounded in evidence-based approaches such as cognitive behavioral therapy.

However, as with any emerging technology, there are ethical challenges that must be addressed. One of the most significant is the potential for bias in AI-based mental health tools. Bias can arise during data collection or in the way issues are framed and presented to the system, and it may result in algorithms that reflect and reinforce social problems and inequality.
Another area of concern is the protection of privacy. AI tools for mental health care rely on sensitive personal data, which could be misused or shared without consent. On the other hand, one might argue that these issues also exist in human mental health care: sensitive personal data can leak from a therapist’s office, and algorithms reflect social problems and inequality because those problems are, unfortunately, part of the real world. In fact, since an AI has no visual impression of the person it is talking to, it may be less likely to advise someone based on stereotypical generalizations and assumptions, and thereby be free of certain types of bias.

AI is available 24 hours a day. It’s far less costly than a human therapist, and it can remember every piece of information you choose to give it. It won’t, however, be able to pick up on physical cues and information you don’t give it: body language, smells, facial expressions, and other signals that a human therapist would take into consideration.

As for artificial empathy, Michal Kosinski, a professor at Stanford University, recently ran a series of tests to see whether ChatGPT might possess so-called theory of mind, which is central to human empathy. Kosinski found that this ability grew with each new language model he tested; the newest model, GPT-4 from March 2023, showed theory-of-mind-like abilities better than those of a seven-year-old child. These findings suggest that artificial empathy may have emerged spontaneously as a byproduct of language models’ improving language skills. If that turns out to be true, mental health apps may well gain better empathic skills in the very near future. Not to the point where they can replace humans, but that shouldn’t be the goal anyway.

In fact, AI will likely do the most good if we treat it as an addition to our human mental health toolset, not as a replacement for professional human care. By developing guidelines and safeguards to address bias, protect patient privacy, and ensure responsible use of these tools, we can greatly increase the help available to those who struggle with mental health issues. Artificial empathy, no matter how much it improves, will never be the same as human empathy. But when there aren’t enough humans to go around, we should use AI: not to replace humans, but to enhance our purpose and potential.
Citations:
Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.