New Study Challenges AI as a Substitute for Human Doctors
A groundbreaking study published in the international journal BMJ on December 20, 2024, has shed light on the reliability of artificial intelligence (AI) in medical diagnostics. As AI tools are increasingly used for their speed and efficiency in analyzing medical data, the research raises critical concerns about their long-term dependability.
Assumptions Challenged by the Study
The study challenges the assumption that AI will soon replace human doctors. It highlights how cognitive impairments found in leading chatbots may undermine their reliability in medical diagnostics and erode patients’ trust in AI-based healthcare systems.
Evaluating AI Tools with MoCA Test
To assess the cognitive abilities of these AI tools, scientists tested publicly available large language model (LLM)-driven chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude 3.5 Sonnet, and Alphabet’s Gemini. The Montreal Cognitive Assessment (MoCA) test, traditionally used to screen for cognitive impairment in older adults, was adapted to assess AI performance in areas such as attention, memory, language, spatial reasoning, and executive function.
MoCA Scores: Insights from the Study
A score of 26 out of 30 on the MoCA is considered normal for humans. The AI models’ scores varied: ChatGPT 4o reached the 26-point threshold, ChatGPT 4 and Claude 3.5 Sonnet scored 25, while Gemini 1.0 scored significantly lower at 16. One MoCA task, which requires attending to a spoken series of letters, was adapted for the AI models by presenting the letters in written form and asking the models to identify every occurrence of a specific letter.
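To illustrate how such an adaptation might be scored, here is a minimal Python sketch of the letter-vigilance task described above: the letter sequence is shown as text, the model reports the positions of the target letter, and the response is scored against the standard MoCA rule (one point if fewer than two errors). The letter string and the position-based response format are illustrative assumptions, not the exact protocol used in the BMJ study.

```python
# Illustrative sketch of scoring an adapted MoCA letter-vigilance task.
# In the clinical test, the subject taps whenever the letter "A" is read aloud;
# here the sequence is presented in writing and the model lists the 1-based
# positions where "A" appears. Sequence and response format are assumptions.

SEQUENCE = "FBACMNAAJKLBAFAKDEAAAJAMOFAAB"


def expected_positions(sequence: str, target: str = "A") -> set[int]:
    """Return the 1-based positions where the target letter occurs."""
    return {i + 1 for i, ch in enumerate(sequence) if ch == target}


def score_response(reported: set[int], sequence: str, target: str = "A") -> int:
    """Award 1 point if there are fewer than 2 errors (misses plus false hits),
    mirroring the MoCA rule for the vigilance item; otherwise 0."""
    expected = expected_positions(sequence, target)
    errors = len(expected - reported) + len(reported - expected)
    return 1 if errors < 2 else 0
```

A model that identifies every "A" (or misses just one) would earn the point; two or more misses or false identifications would forfeit it, matching how a human examinee is scored on this item.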
Cognitive Challenges in AI
A key observation from the study was the AI models’ difficulty with visual abstraction and executive function tasks, both of which are crucial for accurate medical diagnosis. The researchers emphasize that their findings are observational and that direct comparisons between AI systems and human brains are inherently limited. Even so, the results suggest that current AI systems may have inherent weaknesses that could hinder their effectiveness in clinical settings.
The Role of AI in Healthcare
The study suggests that AI should be viewed as a tool to assist, not replace, human physicians. While AI can process vast amounts of data quickly, human expertise remains indispensable for interpretation. The research underscores the need for regular updates and retraining of AI models to maintain accuracy, as well as the potential for periodic “cognitive check-ups” to ensure ongoing reliability.
Conclusion: Balancing AI and Human Expertise
As the healthcare industry continues to integrate AI technologies, it is crucial to strike a balance between leveraging AI’s capabilities and maintaining the invaluable input of human medical professionals. This approach will help ensure high-quality, patient-centered care, grounded in both advanced technology and human expertise.
We invite you to share your thoughts on this important topic in the comments below: what do you believe the future of AI in healthcare holds?