
Multicentric exploration of tool annotation in robotic surgery: lessons learned when starting a surgical artificial intelligence project

Surgical Endoscopy | 2022 EAES Oral

Abstract

Background

Artificial intelligence (AI) holds tremendous potential to reduce surgical risks and improve surgical assessment. Machine learning, a subfield of AI, can be used to analyze surgical video and imaging data. Manual annotations provide the ground truth for the desired target features, yet methodological explorations of the annotation process remain limited to date. Here, we provide an exploratory analysis of the requirements and methods of instrument annotation in a multi-institutional team from two specialized AI centers and compile our lessons learned.

Methods

We developed a bottom-up approach for team annotation of robotic instruments in robot-assisted partial nephrectomy (RAPN), which was subsequently validated in robot-assisted minimally invasive esophagectomy (RAMIE). Furthermore, instrument annotation methods were evaluated for their suitability as input to machine learning algorithms. Overall, we evaluated the efficiency and transferability of the proposed team approach and quantified performance metrics (e.g., time per frame required for each annotation modality) for both RAPN and RAMIE.
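
The study compares pixel-wise and vector instrument annotations as input for machine learning. As a minimal illustration of how a vector (polygon) annotation is typically rasterized into the pixel mask that segmentation networks train on, consider the Python sketch below; the polygon coordinates, frame size, and file names are hypothetical, not taken from the study.

    import numpy as np
    import cv2

    # Hypothetical vector annotation: polygon vertices (x, y) outlining
    # one robotic instrument in a 1280x720 endoscopic frame.
    instrument_polygon = np.array(
        [[412, 180], [655, 210], [702, 340], [590, 455], [398, 320]],
        dtype=np.int32,
    )

    # Rasterize the polygon into a binary mask, the usual training
    # target for encoder-decoder segmentation models.
    mask = np.zeros((720, 1280), dtype=np.uint8)
    cv2.fillPoly(mask, [instrument_polygon], 255)

    cv2.imwrite("instrument_mask.png", mask)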

Results

We found an image sampling frequency of 0.05 Hz (one frame every 20 s) to be adequate for instrument annotation. The bottom-up approach to annotation training and management resulted in accurate annotations and proved efficient for annotating large datasets. The proposed annotation methodology was transferable between RAPN and RAMIE. The average annotation time for RAPN pixel annotation ranged from 4.49 to 12.6 min per image, while vector annotation required 2.92 min per image. Similar annotation times were found for RAMIE. Lastly, we elaborate on common pitfalls encountered throughout the annotation process.
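
For context, a minimal sketch of frame extraction at 0.05 Hz with OpenCV follows, assuming a placeholder video path and output directory (neither is from the study):

    import os
    import cv2

    SAMPLING_HZ = 0.05  # one frame every 20 s, as found adequate above

    os.makedirs("frames", exist_ok=True)
    cap = cv2.VideoCapture("rapn_case_01.mp4")  # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = round(fps / SAMPLING_HZ)  # e.g., 25 fps -> every 500th frame

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            cv2.imwrite(f"frames/frame_{frame_idx:07d}.png", frame)
        frame_idx += 1
    cap.release()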

Conclusions

We propose a successful bottom-up approach to annotator team composition that is applicable to any surgical annotation project. Our results lay the foundation for AI projects in instrument detection, segmentation, and pose estimation. Given the immense annotation burden resulting from spatial instrument annotation, further analysis of sampling frequency and annotation detail is needed.
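
To make that annotation burden concrete, a back-of-envelope calculation from the figures reported above (the three-hour procedure duration is an assumption for illustration, not a figure from the study):

    # Annotation workload per case, using the abstract's reported figures.
    sampling_hz = 0.05            # one frame every 20 s
    procedure_hours = 3.0         # assumed procedure duration
    frames = procedure_hours * 3600 * sampling_hz      # 540 frames

    pixel_min_per_frame = (4.49, 12.6)  # reported pixel-annotation range
    vector_min_per_frame = 2.92         # reported vector-annotation time

    pixel_hours = [m * frames / 60 for m in pixel_min_per_frame]
    vector_hours = vector_min_per_frame * frames / 60

    print(f"{frames:.0f} frames per case")
    print(f"pixel annotation: {pixel_hours[0]:.0f}-{pixel_hours[1]:.0f} h per case")
    print(f"vector annotation: {vector_hours:.1f} h per case")

Under these assumptions, pixel annotation of a single case would consume roughly 40 to 113 annotator hours, versus about 26 hours for vector annotation, which underlines why sampling frequency and annotation detail merit further analysis.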



Acknowledgements

The authors are grateful to Francesco Cisternino, Esma Bensiali, and Federica Ferraguti for their support in image post-processing. We also thank Joni Dambre for technical input on annotation methodologies and Saar Vermijs for general support in data collection. The authors are grateful to Margot Troch for organizational help and support, and to Matthias Boeykens for the initial annotation exploration. Lastly, the authors thank Flanders Innovation and Entrepreneurship for funding this research.

Funding

Flanders Innovation & Entrepreneurship Agency (Reference HBC.2020.2252).

Author information

Corresponding author

Correspondence to Pieter De Backer.

Ethics declarations

Disclosures

Pieter De Backer has no conflict of interest or financial ties to disclose; his work is funded by the Flanders Innovation & Entrepreneurship Agency (HBC.2020.2252). Jennifer A. Eckhoff has no conflict of interest or financial ties to disclose; she receives educational funding from Olympus Corporation, Japan. Jente Simoens, Dolores T. Müller, Charlotte Allaeys, Heleen Creemers, Amélie Hallemeesch, Kenzo Mestdagh, Charles Van Praet, Charlotte Debbaut, Karel Decaestecker, Christiane J. Bruns, Ozanan Meireles, Alexandre Mottrie, and Hans F. Fuchs have no conflict of interest or financial ties to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 31238 kb)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

De Backer, P., Eckhoff, J.A., Simoens, J. et al. Multicentric exploration of tool annotation in robotic surgery: lessons learned when starting a surgical artificial intelligence project. Surg Endosc 36, 8533–8548 (2022). https://doi.org/10.1007/s00464-022-09487-1
