Aims
Uncertainty around medical liability and accountability remains a major barrier to the effective implementation of AI in endoscopy. To date, no malpractice cases have involved AI-enabled endoscopy. As systems become increasingly automated, the attribution of responsibility becomes more complex, involving clinicians, hospitals and manufacturers. A better understanding of public perceptions of AI-related liability will be critical for clinical adoption and for the development of legal frameworks. This study examines how laypersons assign responsibility across stakeholders at different levels of AI automation in endoscopy.
Methods
An online survey was conducted via a dedicated research platform (Prolific). Adults (≥ 18 years) from the USA and Europe were presented, in random order, with three AI endoscopy harm scenarios representing increasing levels of automation. Participants rated the responsibility of four stakeholders (doctor, hospital, AI manufacturer and a no-fault compensation scheme) on a 7-point Likert scale. Scenario 1 involved a computer-aided quality (CAQ) tool that reported adequate mucosal visualization, followed by a missed colorectal cancer (CRC). Scenario 2 involved a computer-aided diagnosis (CADx) system that misclassified an adenoma as a hyperplastic polyp; the adenoma was left in situ and later progressed to CRC. Scenario 3 described a capsule endoscopy tool in which physicians review only the images flagged as abnormal by the system, which failed to identify a significant gastrointestinal bleed.
Results
A total of 502 respondents completed the survey (USA: 250; Europe: 252). Responsibility scores varied significantly by both AI automation level and stakeholder (p < 0.001). Doctors received the highest overall mean responsibility score of any stakeholder (5.15, p < 0.001). However, accountability patterns shifted significantly with increasing AI automation. Doctors' mean responsibility declined steadily, falling to 3.32 at the highest automation level (p < 0.001). In contrast, hospitals and manufacturers showed the opposite trend, with mean responsibility scores peaking at the highest automation level at 5.73 and 5.83, respectively (p < 0.001). Ratings for the no-fault compensation scheme remained largely unchanged across scenarios.
Conclusions
This is the first study to examine public perceptions of liability for AI-related harms in endoscopy. Participants perceived that increasing AI automation shifts responsibility away from clinicians and toward hospitals and manufacturers, although clinicians were still assigned substantial responsibility in every scenario. As automation increases, hospitals and manufacturers should strengthen governance and monitoring to ensure safety and accountability. These findings underscore the need to preserve a clear human-in-the-loop role within AI-supported endoscopy and to establish structured, transparent liability frameworks to guide clinical practice.