Freud is turning over in his grave: Can ChatGPT be used as a psychologist?

There is little more worthless than mental-health support delivered by a bot. To test whether it was possible anyway, entrepreneur Rob Morris conducted an "experiment" on teenagers in need using ChatGPT. His poor judgment ended up proving why not every problem can, or should, be solved with technology

Viki Auslender | 08:33  15.01.2023

Over the weekend, entrepreneur Rob Morris shared, in a Twitter thread that went viral, an experiment he conducted through Koko, the company he founded. Koko is a platform that provides mental-health assistance and, according to Morris, has about 2 million users, "most of them teenagers." Morris, a former data scientist at Airbnb, asked the people who are supposed to provide that assistance to use ChatGPT to formulate responses to users in distress seeking emotional support. Why? Because he could.

Morris's conclusions were clear: response time was cut by 50% and the answers were "highly rated". Despite these "impressive" metrics, the experiment, which he says was carried out on 4,000 people, was soon halted because "once people learned the messages were co-created by a machine, it didn't work. Simulated empathy feels weird, empty."

To explain why people resisted mental counseling by a text generator, Morris used the example of the difference between receiving an e-card and a physical one: "Even if the words are the same in both cases, we might appreciate the effort that comes from going to the store, picking a card, mailing it, etc."

Let's put aside, just for a moment, the ethical criticism of the "experiment". Let's not discuss, just this once, that nothing here resembles "research" so much as "product development"; let's ignore the fact that Morris, a data scientist, lacks the professional training to run such an "experiment" on thousands of people seeking mental help; and let's not dwell on the fact that he carried it out, unsupervised and unapproved, on vulnerable people who did not give their full consent (this is what Morris himself suggests, although he has offered conflicting versions on the matter). If we set all the ethical questions aside, we can turn to the most important lesson of the "experiment": it is worthless.

To explain why the experiment is downright worthless, we need to start with ChatGPT itself. OpenAI's language model has been trained intensively on huge datasets so that, given an input text, it can probabilistically predict which words are likely to come next. Because it is, after all, a computer, it produces a response significantly faster than a person can, which explains the 50% cut in response time. The sophisticated model also knows how to build convincing, or as Morris wrote, "elegant", textual combinations, which may have contributed to the "high rating" it received (although Morris does not explain how the responses were scored).
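The mechanism described above can be sketched in miniature. The following toy script, with a hand-made probability table (entirely hypothetical, nothing like the billions of learned parameters in a real model), shows the basic loop: given the words so far, look up a distribution over possible next words and sample one. It illustrates why the output is fluent yet carries no intention, since the program only follows the statistics it was given:

```python
import random

# Hypothetical hand-made table mapping the last two words of a text
# to a probability distribution over candidate next words.
# A real language model learns such statistics from huge datasets.
NEXT_WORD_PROBS = {
    ("i", "feel"): {"sad": 0.5, "anxious": 0.3, "fine": 0.2},
    ("feel", "sad"): {"today": 0.6, "lately": 0.4},
}

def next_word(context):
    """Sample the next word from the distribution for the current context."""
    probs = NEXT_WORD_PROBS[tuple(context[-2:])]  # only recent words matter here
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The "model" continues the text word by word, purely by probability.
text = ["i", "feel"]
text.append(next_word(text))  # one of: sad / anxious / fine
print(" ".join(text))
```

Nothing in this loop represents what sadness is; it only encodes which word tends to follow which, which is the core of the column's point.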

But here the story ends. The results ChatGPT produces do not refer, and do not intentionally connect, to any idea about the world. The strings of words it generates carry no intention, and no matter how much more data we feed the language generator, it will never understand "how the world works", symbolic connections or abstract ideas. It cannot connect abstract ideas to one another, nor separate them; it has no sustained grasp of the world's laws, rules and expectations, and so it "lies", "invents" and "hallucinates" non-stop.

What do medical, physical and mental care require? All of the basic capacities just described. Why does the answer feel "cheap" and "fake"? It has nothing to do with buying a greeting card in a physical store. While the chatbot is an excellent tool for imitation, auto-completion and pastiche, when we talk to each other we are not copying or stitching together pieces of text. We constantly probe, without even noticing, hidden or implied meanings, what is said and also what is left unsaid. We try to work out whether we have been told everything, whether information is being withheld from us and, if so, why, and we draw conclusions from that context.

With this range of abilities, experiences, feelings and thoughts we build a complex model of the world, and through it we produce an effective, intentional response to specific situations. Sometimes that response is slower than a computer's, but it, and nothing else, can be considered a productive response.

At no point would a reasonable person with healthy intuitions about the complexity of the human soul, and the care it needs, think there is value in referring people in mental distress to a language generator: a dull, irresponsible tool that makes gross, unpredictable mistakes and does not understand the world's abstract systems. This experiment should never have happened, not on the general population, and certainly not on people in mental distress, on teenagers and on others who did not give their explicit consent.

It is such a basic conclusion that the question must be asked: why was Morris so wrong? Why does he think that words that come from a person are "the same" as those from a machine, and that all that differs is the "effort", like going to the store and choosing a greeting card?

To try to answer that question we should not, unlike Morris or ChatGPT, pretend to understand his mind or what happened inside it, but only infer from his behavior and from the thread he wrote. From these it appears that there is an expectation, or at least an assumption, that artificial intelligence "can" fulfill the need for care and empathy.

Morris asks in the thread whether machines will ever be able to overcome the hollow, unempathetic feeling they produce, and immediately answers "probably", perhaps because they really will be perfected in that direction, "especially if they establish rapport with the user over time," he writes.

His argument is fundamentally flawed because, as noted, even if a computer imitated empathy better, it would still not understand people's inner states or the world's abstract systems. So even a perfected model cannot be trusted or believed, and it is essential to avoid relying on it for anything as substantial as a tool for mental therapy.

Morris shares a rather depressing, though not particularly original, view of the world. For years, the prevailing attitude in Silicon Valley has been that every need in the physical world is a problem that can be solved with technology. A shortage of doctors, judges or welfare workers? Deploy artificial intelligence systems to boost productivity. Cases will be reviewed quickly and, through a statistical view of the world, treatment, levels of punishment or the prioritization of social cases will be proposed.

But interpersonal relationships, social needs, care, education and work do not require a solution; they require a response. They require more human attention, more care, a more complex understanding of the world. Autocomplete and fancy "copy and paste" do not produce the depth we need, and not every problem can, or should, be solved with technology.
