
“No one knows what’s going on between me and my ChatGPT”
Dr. Ziv Ben-Zion explains how AI’s promise of cheap, constant companionship can quietly amplify delusions and depression.
As more people turn to chatbots for comfort and advice, Dr. Ziv Ben-Zion warns that relying on artificial intelligence for emotional support could do more harm than good, especially for vulnerable young people. In this conversation, he explains the risks, the gaps in oversight, and what can be done to protect users.
Dr. Ziv Ben-Zion, a brain and post-trauma researcher at the School of Public Health at the University of Haifa, how many people use artificial intelligence (AI) for emotional support?
“This is a growing phenomenon. A study published in the Harvard Business Review in April found that the use of generative artificial intelligence for emotional needs has surged significantly in the past two years, especially among young people. About 40% of users report that they turn to AI tools not only for factual questions or content creation, but also to receive a sense of emotional support and to have personal conversations.”
How do you explain the increase in the use of AI tools for emotional support?
"In general, emotional therapy is something we can all benefit from. It’s a space where you sit in front of someone who focuses entirely on you, and you can bring up all the things that bother you. We like to occupy ourselves with our inner world. But there are many barriers that prevent people from seeking traditional therapy. There’s the issue of cost, each session can cost $150 or more, and the availability of therapists, plus the stigma. Although there has been progress in recent years, the stigma still exists. On the other hand, AI tools are available 24 hours a day, and most of them are free. If you can’t fall asleep at two in the morning, unlike with a therapist, there’s no problem talking with the chat."
What are the dangers of turning to an AI tool for emotional counseling?
"There are extreme cases, such as a 14-year-old boy who talked for two years to an AI character that ultimately convinced him to commit suicide, Sewell Setzer from Florida, who had an intense relationship with a bot named Dany, based on a character from Game of Thrones, on the Character.AI platform. According to the lawsuit filed by his mother, during romantic conversations the bot encouraged him to 'come home,' and the boy took his own life in February 2024. In simulations where an AI tool played the role of therapist, you can see that the bot can have very dangerous responses. For example, people with delusions or extreme thoughts can have those beliefs greatly reinforced by the tool."
Why is this happening?
"AI tools have a very strong mechanism of appeasement, they constantly tell us what we want to hear. In therapy, this is critical because part of a therapist’s role is to set boundaries and reflect back when a patient’s thoughts don’t align with reality. The bot, on the other hand, often reinforces delusions and false beliefs. These tools are designed to keep users engaged for as long as possible. So if a user strongly believes something, the AI tool tends to subtly reinforce it to keep them talking, whereas a therapist would immediately step in and say, ‘This is not true.’ And it’s not just about delusions. It can be negative thoughts and depression, too. If someone thinks, ‘The world is bad, I can’t do anything, I have no reason to live,’ the AI might reinforce that feeling."
Are there groups in the population that are more prone to being harmed?
"It's hard to say. I think we’re all on a spectrum. It may be more dangerous for people who are more vulnerable, and they are also the ones more likely to talk to bots about these issues in the first place. Someone who is more stable, so to speak, might not even start talking to AI about these things or asking it what to do. Adolescence is a time when we’re all full of hormones and not very stable anyway. Teenagers use these tools because everyone does, it’s become the thing. Someone says, ‘Wow, listen, I talked to ChatGPT and it helped me with my problems,’ and then they rely on what it says to make life decisions.
"Therapists have a responsibility to the patient. If, during therapy, a teenager reveals suicidal thoughts, the therapist would talk to the parents in time. But when talking to an AI, the parents aren’t involved at all, no one is. No one knows what’s going on between me and my ChatGPT. And it’s not just suicide, it could also lead to crimes against others. There was a case in England where someone, after a chat conversation, entered the Queen’s palace with a bow and arrow and tried to kill her.
"There’s also a lot of romantic attachment. Not just sexual intimacy, although that happens too, but teenagers really feeling close to the bot. That’s dangerous, because if they fall in love with the bot and it says irrational things, they can be deeply influenced."
To what extent do people really understand that they are talking to AI and not a real person?
"If you ask someone who’s talking to AI, they’ll tell you, of course, that it’s artificial intelligence, supercomputers, algorithms, not human beings with feelings. And yet, when we chat with bots, there are moments when we forget. They make us feel the same emotions we’d feel when talking to a real person. It happens automatically. If you give a chat a task and it does it well, you might write, ‘Wow, great, thank you so much!’ You feel happy and satisfied. And the same in reverse, if it does something poorly, you can get angry. But why? In the end, it’s an algorithm. We humanize it because it speaks and behaves like a human, and it does so very well.
"In the end, the biggest danger is that its real-world use is vast and growing rapidly, both worldwide and in Israel, and we’re not keeping up with the pace of research and regulation. Compare this to the extensive training that psychologists, psychiatrists, and other qualified therapists undergo, or to medications that go through years of regulation and clinical trials. By contrast, the AI tool is something no one has tested. There are so many risks, yet no real regulation at all."
Whose responsibility is it to monitor this?
"Right now, I’m not sure there’s anyone who has responsibility or who can take it. The companies that develop AI protect themselves, they never claim that AI has clinical training or that you should follow its advice. They always include a disclaimer at the bottom saying you should be careful. But no one is really addressing it anymore. I don’t know if the state will step in anytime soon."
What can be done anyway?
"There are ways to tackle this. First of all, regulation is absolutely necessary. The companies themselves could do much more to regulate their tools. Right now, even in their latest models, despite all the improvements they claim to have made, there are still many problems. It’s clear that their top priority is economic: they want people to use their product as much as possible. They want their bots to be as informative and engaging as possible — even at the risk of causing harm. I would expect mental health organizations in Israel to pay attention to this too."
What are the practical solutions?
"Theoretically, for example, as soon as someone gives a prompt related to psychological counseling, the bot could automatically end the conversation and tell you that it’s not a therapist and does not have the authority to help, and refer you to a professional instead. There are plenty of measures that could be taken."
Are there also advantages to these tools in the field of mental health?
"Theoretically, yes. In a country like ours, everyone experiences stress. If these tools were properly supervised, if a real person were in the loop behind the scenes, they could be amazing. You can’t compete with their availability. They’re constantly improving and advancing, but we must make them safer. For instance, if someone talks about suicide, the bot should immediately stop the conversation and refer them to a professional. Or if someone starts fantasizing about the chat as a substitute for a romantic relationship, it should stop them."
If someone nevertheless chooses to use AI tools, how can we reduce harm?
“There’s no magic solution. One thing you can do is personalize the chat: tell it, ‘Don’t try to please me,’ or, ‘Please check everything you say and base it on research.’”
How did you get into this field?
“I’m originally a brain and post-trauma researcher. Over the past few years, I’ve been studying the brain mechanisms behind post-trauma in Israel and the United States, working with people after severe traumas. About two years ago, I began exploring how AI responds to emotional content and how it’s affected by it. Since then, it’s become a central theme in my work.”
What are your personal feelings about the future?
"I'm both excited and anxious. I think AI will bring huge benefits to mental health, it has enormous potential. I hope we move in a positive direction and use it wisely, especially here in Israel. After all, I’m a post-trauma researcher, a field that’s always relevant, and now especially so. I hope we can increase its effectiveness and minimize the dangers. I also feel a certain responsibility as someone who understands both the clinical and research worlds. I want to understand the risks so I can help find solutions, minimize harm, and maximize the benefits."