A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.
“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.
Part of me is okay with this, in that any avenue to mental health resources can be better than nothing. What worries me is that people will use ChatGPT for this sort of thing and these models won't actually be good help.
AI will reinforce delusional thinking. This is definitely not good.
More delusional people means more people who can make good music.
I’ll admit I tried talking to a local DeepSeek about a minor mental health issue one night when I just didn’t want to wake up or bother my friends. I broke the AI within about six prompts: no matter what I said, it would repeat the same answer word for word about going for walks and eating better. Honestly, breaking the AI and laughing at it did more for my mental health than anything anyone could have said, but I’m an AI hater. I wouldn’t recommend anyone in real need use AI for mental health advice.
Honestly, if they could program a halfway decent AI therapist, then at least it could take some of the load off our already insufficient mental health professionals by handling the lighter-weight cases, leaving the psychotherapists free to deal with the especially sick people.
The real problem arises when bad or unscientific advice gets regurgitated to people over and over.
It’s actually much worse than that.
So… AI.