He talks, listens and gives advice as if he were the real thing. But behind the familiar voice and face of Dirk De Wachter there is no human being, only artificial intelligence. Researchers at Thomas More developed a hyper-realistic digital clone of the famous psychiatrist to expose the dark side of AI therapy. The result can be seen tonight in Pano on VRT 1.
Researchers from Thomas More University of Applied Sciences created a digital version of psychiatrist Dirk De Wachter for the current affairs programme Pano. With their avatar they demonstrate how deceptively realistic an AI chatbot can feel, and the risks that come with it. “Many people do not realize that, via chatbots, they are sharing their most personal thoughts with the largest technology companies in the world,” warns Tom Van Daele (People and Welfare Research Group).
“The advice you receive is also not always reliable. And if you really need professional help, a chatbot cannot recognise that or refer you to the right care.”
Privacy and reliability under pressure
The emergence of AI chatbots as an accessible alternative to therapy raises important questions about data privacy, reliability and professional standards of care. While traditional therapists are bound by strict professional ethics and confidentiality, commercial AI chatbots operate in a gray area.
Tom Van Daele, clinical psychologist, has been researching the use of technology such as virtual reality and AI in mental health care for years. “My research has convinced me that the selective and thoughtful use of technological applications by professional care providers can indeed make a difference. But they must be tools that support professionals, not replace them.”
Dries Van Craen, extended reality researcher and software developer (Expertise Center for Sustainable Entrepreneurship and Digital Innovation), pushes the boundaries of what is possible with artificial intelligence. He built the digital version of Dirk De Wachter: “The evolution of artificial intelligence is moving very fast. A few months ago, not all the components existed yet, but it has now become easy to clone someone digitally.”
“It is only a matter of time before you will no longer be able to tell the difference between a real person and an AI avatar. We urgently need to think about how we will deal with this as a society.”
Opportunities and limits
Both researchers emphasize that AI in mental health care is not necessarily harmful. When used responsibly, under professional guidance and with transparency about its limitations, digital tools can be a valuable addition. The key lies in guarding the boundaries: what can AI do and, above all, what can it not do?
More information:
- Broadcast: October 29 at 9:25 PM on VRT 1
- View the Pano report here
- Read an interview with Tom Van Daele about this subject here
- Listen to the podcast with Dries Van Craen about building conversational AI avatars here (from October 30)
- Read more about Thomas More’s research into AI, technology and mental health here
Tom Van Daele and Dries Van Craen are available for interviews and additional explanation.
About research at Thomas More
Thomas More is the largest provider of practice-oriented research in Flanders. We conduct high-quality research that responds to needs in the field and anticipates future challenges. We deliver not only insights but also concrete solutions: prototypes, manuals, methodologies, advice and standards. To achieve this, we collaborate with companies and governments, as well as with organizations and end users.
