I’m seeing more and more stories about people having mental health conversations with various GenAI tools.
For example, Meta wants its AI tool to be your friend, to combat the “loneliness epidemic”. Story after story shows people having conversations with GenAI tools about their mental state.
A Harvard Business Review survey says that “therapy / companionship” is the number one reason people are using GenAI in 2025, up from number two in 2024. It also shows “Finding purpose” at number three, another use closely tied to our mental state.
While it’s likely that a properly tuned GenAI tool could do something positive for mental health, I’m quite confident that this won’t be coming from a for-profit company today. There are significant ethical considerations that have to be addressed if this is to be a net positive for society, and we’ve seen time and time again what happens when ethics compete against revenue goals. Ethics lose.
While I’m not a therapist, I do spend a lot of time studying therapy, psychology and neuroscience. Enough to know how dangerous this path is.
If you find yourself getting angry or depressed after talking to AI agents, reach out to a therapist instead. You’ll be glad you did.
Update: The anger I’m talking about is not frustration that the tool isn’t giving you the answer you expected. That would be a legitimate feeling if the tool isn’t doing what you want. I’m specifically talking about becoming angry at other things in your life because of what the tool has told you, when the tool is encouraging you to feel a certain way.
See also:
- This article by Rolling Stone that is both better researched and written, and more shocking.
- This article on CNN about how conversations with a chatbot may have led to a teen’s suicide.
- This study on the “therapy” responses that GenAI tools provide. Spoiler: the results aren’t great today.
- This early case before the US courts, where the judge found that GenAI is not covered by “freedom of speech”.