International Society for Minimally Invasive Cardiothoracic Surgery


Back to 2025 Thoracic Abstracts


Expanding the Generalizability of an AI-based Model for Pulmonary Vasculature Segmentation in Robotic Lung Surgery
Edward Kim1, Syed Abdul Khader2, Sandeep Manjana2, Mohan Murari1, Jawad Rao1, Ritika Dinesh1, Elif Polat1, Andres Bravo1, Aditya Ahuja1, Danielle Brichett1, Alex Nees1, Rohin Bajaj1, Omer Mescioglu1, Lana Schumacher1.
1Tufts Medical Center, Boston, MA, USA, 2Plaksha University, Sahibzada Ajit Singh Nagar, India.

BACKGROUND: Accurate identification of the pulmonary artery (PA) is critical for minimizing surgical risks. Deep neural networks trained for image segmentation can be effective tools for identifying surgical anatomy in thoracoscopic videos. Building on our previous work, we evaluate the AI model's ability to identify the PA in the left lower lobe.
METHODS: Twelve patients with biopsy-proven Stage I non-small-cell lung cancer (NSCLC) of the right lower lobe (RLL) underwent robotic lobectomy with mediastinal lymph node dissection. Complete videos of each operation were obtained, and video fragments showing the pulmonary artery were identified. Annotation masks of the pulmonary artery were created using the Computer Vision Annotation Tool (CVAT), and the labeled data were used to train a state-of-the-art instance segmentation algorithm, Mask Regional-Convolutional Neural Network (Mask R-CNN) (Figure 1). To test generalization to the left lower lobe (LLL), the model was applied to and evaluated on five unseen LLL thoracoscopic videos. Model performance was assessed using the mean Intersection over Union (mIoU) metric.
RESULTS: Annotation masks of the pulmonary artery were created in 4,139 images across twelve cases of robotic right lower lobectomy. 2,893 and 684 annotated images across ten cases were used for training and validation, respectively (Figure 2). A mean IoU of 87.7% was achieved on the validation dataset. Once the model was trained on the RLL, we applied it to five unseen LLL cases, which had fewer frames (150 frames) than the RLL dataset. Despite this, our model achieved an average mIoU of 74.8%, with scores ranging from 64.5% to 87.4%. These results suggest that, even with fewer frames in the LLL dataset, the model can generalize to a different lobe while maintaining high segmentation accuracy.
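The mIoU figures reported above can be computed per frame as the overlap between the predicted and ground-truth masks, averaged over the evaluation set. A minimal sketch, assuming binary per-frame masks (the study's exact averaging scheme is not stated in the abstract):

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two binary segmentation masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

def mean_iou(pairs):
    """mIoU over (predicted, ground-truth) mask pairs, one pair per frame."""
    return float(np.mean([mask_iou(p, g) for p, g in pairs]))
```

For example, a prediction covering half of an annotated region (with no false positives outside it) yields an IoU of 0.5; averaging such per-frame scores over all evaluation frames gives the reported mIoU.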
CONCLUSIONS: This study demonstrates that our AI model, originally developed for RLL pulmonary artery segmentation, generalizes to the LLL, even with fewer evaluation frames. These results support transferring the model between different lobes. A larger dataset should further improve model training and overall accuracy.
