A Multi-Method Explainability Framework for Segmentation in Pancreatic EUS: Enhancing Transparency, Clinical Trust and Diagnostic Safety
Poster Abstract

Artificial intelligence systems for endoscopic ultrasound (EUS) increasingly achieve high diagnostic accuracy for detecting pancreatic masses1, yet their “black box” nature limits clinical acceptance. Clinicians need to understand why and where an AI model identifies a lesion, but existing explainability techniques (e.g., CAM-based methods) were designed for image classification, not for segmentation-based object detection. When applied to EUS, these methods often highlight irrelevant features such as depth markers, calipers or acoustic artifacts, producing misleading interpretations and eroding trust. Because no dedicated explainability approach exists for segmentation models in abdominal EUS, there is a need for a reliable, clinically meaningful framework that provides anatomically accurate, artifact-free explanations.

We developed a dedicated explainability framework for the segmentation architecture of the PANCRAIEUS model used in pancreatic EUS, building on its initial training data of 32,713 frames recorded in 202 patients1. The framework integrates several adapted methods (GradCAM, EigenCAM, occlusion analysis and class-specific attention), technically optimized for the multi-scale, segmentation-based design. On top of this, we introduced two new segmentation-guided strategies. The Hard Mask method restricts heatmaps strictly to detected structures, ensuring that explanations correspond only to pancreas, tumor or cyst regions. The Soft Focus method preserves overall anatomical context while gradually attenuating irrelevant areas, producing fluid, intuitive visualizations suitable for real-time interpretation. Both methods were systematically assessed for anatomical fidelity, artifact suppression and clinical interpretability.
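The two segmentation-guided strategies can be sketched as post-processing steps on a CAM heatmap. The following is a minimal illustration, not the published implementation: the function names, the exponential distance-based decay used for Soft Focus, and the `decay` parameter are all assumptions introduced here for clarity.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def hard_mask(heatmap: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Hard Mask: zero out all activations outside the segmented structure,
    so the explanation corresponds only to pancreas, tumor or cyst pixels."""
    return heatmap * (seg_mask > 0).astype(heatmap.dtype)


def soft_focus(heatmap: np.ndarray, seg_mask: np.ndarray,
               decay: float = 10.0) -> np.ndarray:
    """Soft Focus (illustrative variant): keep activations inside the
    structure unchanged and attenuate them smoothly with distance from it,
    preserving nearby anatomical context instead of cutting it off."""
    # Pixel-wise Euclidean distance from each background pixel to the structure.
    dist = distance_transform_edt(seg_mask == 0)
    # Weight is 1 inside the structure and decays exponentially outside.
    weight = np.exp(-dist / decay)
    return heatmap * weight
```

A larger `decay` would retain more surrounding context; a very small one approaches the Hard Mask behavior.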

When applied to representative EUS images and videos, standard CAM-based techniques frequently produced activations on image text, bright acoustic interfaces or shadow boundaries. In contrast, the segmentation-guided methods consistently generated precise, clinically coherent explanations. Hard Mask offered the strongest anatomical specificity, sharply delineating lesion borders and fully eliminating artifact-driven activations, making it useful for documentation and regulatory purposes. Soft Focus produced smoother gradients that preserved situational awareness while still suppressing irrelevant regions, making it more intuitive for teaching and potential live use. Across cases, the combined multi-method approach provided a more complete understanding of model behavior than any single technique alone.
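One simple way to quantify this contrast is the fraction of total activation mass that falls inside the segmented anatomy. The metric below is a hypothetical illustration (the abstract does not define its assessment criteria in these terms): artifact-driven methods score low, Hard Mask scores 1.0 by construction, and Soft Focus sits in between depending on how much context it retains.

```python
import numpy as np


def anatomical_focus_ratio(heatmap: np.ndarray, seg_mask: np.ndarray) -> float:
    """Fraction of the heatmap's total activation lying inside the
    segmented structures. 1.0 means every activation is on anatomy;
    values near 0 indicate artifact-driven explanations (on-screen text,
    calipers, shadow boundaries)."""
    total = float(heatmap.sum())
    if total == 0.0:
        return 0.0
    return float(heatmap[seg_mask > 0].sum()) / total
```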

Method     | Focus on Relevant Anatomy | Artifact Avoidance | Clinical Usefulness
GradCAM    | Moderate                  | Low                | Moderate
EigenCAM   | Moderate–High             | Moderate           | Moderate
Occlusion  | High                      | High               | Lower (slow)
Hard Mask  | Very High                 | Very High          | Excellent
Soft Focus | High                      | High               | Excellent (best visual continuity)

This work presents the first explainability framework tailored to segmentation in pancreatic EUS and demonstrates its ability to produce accurate, clinically trustworthy visual explanations. By integrating multiple adapted techniques with two novel segmentation-guided approaches, the framework enhances transparency, supports safer clinical implementation of AI, and aligns with emerging regulatory expectations for model interpretability. Further work will explore real-time integration and prospective clinical evaluation.