Pathologists & AI: Perception Study | Quantitative Research


Principal results

This study surveyed 224 pathologists regarding their knowledge, attitudes, and behavioral intentions toward AIADS. The results show that most pathologists have a certain level of knowledge about AIADS, hold positive attitudes, and express a willingness to use it. Participants who had used AIADS scored higher in knowledge, attitude, and behavioral intention than those who had not, and their attitudes toward AIADS were clearer. The primary reason for supporting the use of AIADS in clinical diagnosis was its potential to improve diagnostic efficiency, whereas the main source of distrust was perceived "insufficient diagnostic accuracy". Logistic regression analysis indicated that behavioral intention was associated with key factors including ethnicity, hospital level, AIADS usage experience, knowledge, and attitude. Mediation analysis further suggested that, among users of AIADS, the association between knowledge and behavioral intention was partially explained by attitude.
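The kind of logistic regression used here, relating knowledge and attitude scores to a binary willingness-to-use outcome, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset: the variable names, coefficients, and sample are all invented for the example, and a simple batch gradient-descent fit stands in for the statistical software the authors presumably used.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by batch gradient descent.

    X: list of feature rows, y: list of 0/1 outcomes.
    Returns [intercept, coef_1, ..., coef_p].
    """
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) - yi
            grad[0] += err
            for j in range(p):
                grad[j + 1] += err * xi[j]
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

random.seed(42)
n = 500
# Hypothetical, mean-centred knowledge and attitude scores.
knowledge = [random.gauss(0, 1) for _ in range(n)]
attitude = [0.4 * k + random.gauss(0, 1) for k in knowledge]
# Simulated willingness to use (1 = willing), driven by both predictors.
willing = [
    1 if random.random() < sigmoid(0.3 + 0.8 * k + 0.7 * a) else 0
    for k, a in zip(knowledge, attitude)
]

w = fit_logistic(list(zip(knowledge, attitude)), willing)
odds_ratios = [math.exp(c) for c in w[1:]]
print(f"coefficients: {w}")
print(f"odds ratios (knowledge, attitude): {odds_ratios}")
```

In this setup, the exponentiated coefficients are interpreted as odds ratios: an odds ratio above 1 for the knowledge score means higher knowledge is associated with greater odds of being willing to use the system, holding attitude fixed.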

Pathologists generally support the application of AIADS in pathology, believing it can enhance diagnostic efficiency and provide valuable supplementary information. Our results align with Sarwar et al. [13], who reported generally positive respondent attitudes toward AI in diagnostic pathology. Studies have also shown that AI demonstrates excellent diagnostic performance in pathology [12], improving diagnostic accuracy and reducing inter-observer variability [29]. The cervical cytology AI system developed by Xue et al. significantly improved the sensitivity and specificity of junior cytopathologists (0.857 vs 0.657 and 0.840 vs 0.737, respectively; both P < .001), while also reducing reading time (218 vs 30 seconds; P < .001) [30]. However, this study found that, despite pathologists' support for AI, concerns remain about its diagnostic accuracy. The effectiveness of AI in medical applications is still controversial [31–33], particularly in pathology, where several challenges persist despite rapid development [34]. The quality of pathology slides is influenced by sample preparation, staining, and other variables [35,36], while AI systems rely heavily on large annotated datasets, which are often lacking [37]. Annotation is also difficult, with considerable variability between experts [33], which leads to poorer AI performance on rare cases. AI training often uses experienced clinicians as the "gold standard", but even their accuracy is limited [38], especially in complex cases, where AI may perform worse than seasoned clinicians. These issues constrain the development of AI in pathology, and improvements in diagnostic accuracy are still needed.

Our results showed that pathologists generally have a high willingness to use AIADS, although some remain cautious, indicating that various factors influence their behavioral intention. Pathologists in tertiary hospitals showed lower acceptance of AIADS than those in primary hospitals, likely due to differences in resources and value perceptions. No significant impact of gender, work experience, or age on the willingness to use AIADS was found. A review by Lambert et al. [19] indicated that gender does not significantly affect acceptance, while the impact of age and experience varies across studies. Higher knowledge levels positively influenced willingness to use AI, consistent with the Technology Acceptance Model, which suggests that attitudes and intentions toward new technology are largely shaped by understanding [39]. This finding is also consistent with existing research indicating that attitude is a significant factor influencing willingness to use AI [16–18]. In practice, clinicians' willingness to adopt AI is shaped by various interrelated factors, and understanding them is key to developing effective strategies and training programs that boost AI adoption.

This study found that pathologists who had not used AIADS expressed a stronger willingness to use such systems than those with prior experience, a result contrary to conventional expectations. As shown in the multivariable analysis (see S8 Table), after adjusting for knowledge and/or attitude scores, the association between prior usage and willingness to use reversed significantly, indicating that non-users had a notably higher willingness. This suggests that hands-on experience may indirectly influence behavioral intention through its effects on cognition and attitude. Specifically, although the user group scored higher in knowledge and attitude, there was no significant difference in unadjusted behavioral intention scores between the two groups (see Table 1). Notably, non-users exhibited larger gaps between behavioral intention and knowledge (0.81 vs. 0.11) and between behavioral intention and attitude (0.78 vs. 0.47), indicating that, despite a weaker foundation in cognition and attitude, their willingness to use was relatively stronger.
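The gap scores above are simple differences between mean subscale scores within each group. The sketch below shows that arithmetic; the reported gaps (0.81/0.11 and 0.78/0.47) come from the study, but the underlying subscale means are hypothetical values chosen only so the example reproduces those gaps.

```python
# Subscale means below are hypothetical, chosen only to illustrate how
# the reported gaps (intention minus knowledge/attitude) are computed.
groups = {
    "non-users": {"intention": 3.91, "knowledge": 3.10, "attitude": 3.13},
    "users":     {"intention": 4.02, "knowledge": 3.91, "attitude": 3.55},
}

gaps = {
    name: {
        "intention-knowledge": round(s["intention"] - s["knowledge"], 2),
        "intention-attitude": round(s["intention"] - s["attitude"], 2),
    }
    for name, s in groups.items()
}
print(gaps)
```

A larger gap for non-users means their stated intention outruns their knowledge and attitude foundations, which is the pattern the paragraph above interprets as anticipation rather than experience-grounded willingness.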

Possible reasons for this phenomenon include performance and reliability issues with AIADS in real-world practice. This study found that over half of the users encountered diagnostic errors, and most felt the need to frequently verify results, indicating concerns about accuracy and slide quality. Additionally, insufficient integration with clinical workflows may increase operational burdens, as research shows that users are unaccustomed to new reading methods. The lack of training and technical support further hinders the user experience [40,41]. The higher willingness among non-users may stem from anticipation and "technology optimism bias" [42,43], though its translation into sustained usage depends heavily on the system's actual performance and integration. Therefore, promoting AIADS requires not only advocacy but also focused efforts on enhancing algorithmic accuracy, improving workflow integration, and establishing systematic training and feedback mechanisms to convert initial willingness into sustained adoption. Future implementation strategies should prioritize real-world reliability, usability, and user experience.

This study examined the mediating role of attitude in the association between knowledge and behavioral intention. The results indicate that attitude served as a significant mediator in this association in the initial models. However, when prior experience with AIADS was included as a covariate, the overall mediation effect was no longer significant, suggesting that prior use may alter the pathway linking knowledge, attitude, and intention. Subgroup analysis further indicated that this moderating pattern reflects population heterogeneity: among participants without prior AIADS experience (accounting for 62.1% of the sample), attitude was not a significant mediator. In contrast, among those with AIADS experience, attitude showed a strong and significant mediating role. These findings suggest that hands-on experience with AIADS may influence the mediating role of attitude, potentially facilitating the association between knowledge and intention via stronger attitude. This highlights that fostering AI adoption may depend not only on increasing knowledge, but also on building practical experience and attitude—key factors that may help translate knowledge into clinical use.
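A product-of-coefficients mediation analysis of the kind described above can be sketched as follows. Everything here is synthetic and illustrative: the data, effect sizes, and variable names are assumptions, and ordinary least squares replaces whatever estimation package the authors used. Path a is the effect of knowledge on attitude; path b is the effect of attitude on intention controlling for knowledge (obtained via the Frisch–Waugh residualisation trick); the indirect effect is a × b.

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y regressed on x (simple regression)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(1)
n = 400
# Hypothetical scores simulating knowledge -> attitude -> intention.
knowledge = [random.gauss(3.5, 0.8) for _ in range(n)]
attitude = [0.6 * k + random.gauss(0, 0.5) for k in knowledge]
intention = [0.5 * a + 0.2 * k + random.gauss(0, 0.5)
             for k, a in zip(knowledge, attitude)]

# Path a: knowledge -> attitude.
a = slope(knowledge, attitude)

# Path b: attitude -> intention, controlling for knowledge.
# Frisch-Waugh: regress intention on attitude residualised on knowledge.
mk, ma = statistics.fmean(knowledge), statistics.fmean(attitude)
resid = [ai - ma - a * (ki - mk) for ai, ki in zip(attitude, knowledge)]
b = slope(resid, intention)

indirect = a * b            # mediated path: knowledge -> attitude -> intention
total = slope(knowledge, intention)
direct = total - indirect   # remainder attributable to the direct path
print(f"a={a:.2f} b={b:.2f} indirect={indirect:.2f} direct={direct:.2f}")
```

In linear models the total effect decomposes exactly into direct plus indirect, so "partial mediation" as reported above corresponds to both components being non-zero; subgroup mediation (users vs. non-users) would simply repeat this fit within each group.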

Studies indicate that pathologists lack sufficient legal and ethical knowledge regarding AI. Research consistently shows that clinicians are underprepared in this area [44–46]. The main reasons include: rapid development of AI technology outpacing medical education, which lacks relevant ethics and legal courses; limited understanding of AI’s workings and algorithms, making it difficult for clinicians to identify ethical risks. To address these issues, we recommend implementing systematic and institutional educational interventions at multiple levels. First, AI ethics and legal courses should be integrated into core medical education, particularly within standardized residency training and continuing education programs for pathologists, emphasizing case-based learning and practical application. Second, professional associations, such as the Chinese Society of Pathology, should spearhead the development of official guidelines and industry standards to provide clear behavioral protocols for practitioners and address knowledge gaps and legal uncertainties at a macro level. Finally, health authorities should introduce regulations that clarify accountability frameworks for AI applications in healthcare and promote the certification and implementation of continuing education courses focusing on AI functionality, data privacy, and practical methods for prudent use of AI-assisted decision-making.

The application of AIADS in clinical practice may lead to diagnostic errors, making responsibility attribution a complex issue [47]. This study indicates that physicians assign responsibility for errors in a descending order to pathologists, AI developers, and hospitals, aligning with current findings [26,48] and suggesting that a singular accountability model is inadequate. Accordingly, a shared responsibility model is proposed to achieve fair and effective risk allocation. Within this framework, clinicians, as final decision-makers, are responsible for critically verifying AI outputs and exercising independent judgment; AI developers must ensure algorithmic transparency, explainability, and provide comprehensive validation reports; and healthcare institutions should undertake technical validation and internal oversight to ensure appropriate use of AI systems within controlled environments. This model aligns each party’s responsibilities with their role in the AI application chain, offering a feasible approach for future legal traceability and risk management. To operationalize this model, legislative measures should be advanced to clarify legal responsibility distribution in AI-assisted healthcare—for instance, by leveraging medical device regulations to define the legal status of AI as a “medical device”. Simultaneously, liability attribution and compensation mechanisms must be refined to reflect the multi-stakeholder nature of AI-related incidents. Furthermore, improving the interpretability and decision-making transparency of AI systems will not only strengthen physicians’ trust but also facilitate rapid attribution and root-cause analysis when errors occur.

This study employed a nationwide survey covering hospitals of different tiers across various regions in mainland China; the demographic characteristics of the sample closely mirrored those of large-scale national surveys [49,50], enhancing its representativeness. By investigating pathologists' attitudes toward AIADS, it offers valuable insights for promoting the application of AI in healthcare and deepens the understanding of pathologists' interactions with AIADS and their needs. However, several limitations should be noted. First, because of the convenience sampling and online recruitment strategy, a precise response rate could not be calculated, which may introduce selection bias. Second, ordinal Likert-scale responses were treated as continuous variables, which assumes equal intervals between categories, an assumption that may not fully reflect participants' interpretations of the scale. Third, the cross-sectional design precludes causal inference, and reverse causality remains a possible explanation for the observed associations. Additionally, the sample is drawn primarily from secondary and tertiary hospitals, with fewer pathologists from primary and lower-level hospitals, which may introduce selection bias and fail to fully reflect conditions in grassroots healthcare settings. Finally, the small sample sizes for subgroup analyses may lead to unstable results. Future studies could employ probabilistic sampling, expand the sample size, particularly among pathologists in primary care settings, and incorporate long-term follow-up to further validate the findings and support the development of AI in healthcare.
