Aims
Intestinal metaplasia (IM) is a gastric precancerous condition. Patients with advanced stages of IM are considered at higher risk for developing gastric cancer (GC), and endoscopic surveillance in this population has proven beneficial. The diagnosis of IM is histological; however, electronic chromoendoscopy enables targeted biopsies through real-time detection. Artificial intelligence (AI) can support IM detection, particularly in cases with patchy distribution, and for endoscopists who are not specialized in upper gastrointestinal (GI) endoscopy. Our previous pilot study demonstrated that a machine learning (ML) system based on EfficientNet-B0 achieved approximately 90% accuracy in recognizing IM using blue light imaging (BLI) in the gastric corpus (1). The present study aimed to compare different ML systems for the identification of IM using BLI endoscopic images of the antrum and corpus.
Methods
We retrospectively analyzed a dataset of prospectively collected endoscopic images from gastroscopies performed at Sant’Andrea Hospital, Sapienza University of Rome, between January 2020 and March 2024. Several ML approaches were tested and compared to build an automated multi-task system addressing three tasks: binary classification of images as healthy vs. non-healthy (EfficientNet-B0), detection of IM-affected areas (YOLOv5 and HistologyYOLO), and interpretability assessment (Gradient-weighted Class Activation Mapping, GradCAM).
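As a minimal sketch of the interpretability step, the core GradCAM computation weights each convolutional feature map by the global-average-pooled gradient of the target class score, sums the weighted maps, and applies a ReLU. The function below is illustrative only: its inputs (activations and gradients of the final convolutional layer) would, in practice, be extracted from the trained EfficientNet-B0 via backward hooks, and all shapes here are hypothetical.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a GradCAM heatmap.

    activations: feature maps of the last conv layer, shape (C, H, W)
    gradients:   gradients of the target class score w.r.t. those maps,
                 same shape (C, H, W)
    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                            # (C,)
    # Weighted combination of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Rescale for visualization as an overlay on the endoscopic image
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The resulting heatmap is typically upsampled to the input image resolution and overlaid on the endoscopic frame, highlighting the regions that most influenced the classifier's prediction.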
Results
The dataset comprised 2017 images from 279 patients. EfficientNet-B0 achieved a test accuracy of 0.90, precision of 0.92, and recall of 0.91. The HistologyYOLO model showed an accuracy of 0.52, precision of 0.47, and perfect recall (1.00), while YOLOv5 achieved a precision of 0.32 and recall of 0.29. GradCAM provided satisfactory qualitative interpretability, highlighting the image regions that drove the network’s decisions and thereby complementing the classifier, although no quantitative validation of the heatmaps was performed.
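For readers less familiar with these metrics, the sketch below shows how accuracy, precision, and recall derive from confusion-matrix counts. The counts are invented for illustration (they are not the study's data); they reproduce the characteristic pattern of a model that labels nearly everything positive, yielding perfect recall but low precision, as observed for HistologyYOLO.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return (accuracy, precision, recall) from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0  # a.k.a. sensitivity
    return accuracy, precision, recall

# Hypothetical example: every image flagged as positive -> no false
# negatives, so recall is 1.0, while precision stays low.
acc, prec, rec = classification_metrics(tp=47, fp=53, tn=0, fn=0)
```

In screening contexts, perfect recall means no IM-positive image is missed, but the low precision implies many false alarms that would trigger unnecessary targeted biopsies.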
Conclusions
EfficientNet-B0 showed the best performance among the tested models; its clinical interpretability can be further enhanced by applying GradCAM to identify the image regions that most influence the model’s decision-making process.