Deep learning visual analysis in laparoscopic surgery: a systematic review and diagnostic test accuracy meta-analysis

  • Review Article
  • Published in: Surgical Endoscopy

Abstract

Background

In the past decade, deep learning has revolutionized medical image processing. This technique may advance laparoscopic surgery. The objective of this study was to evaluate whether deep learning networks accurately analyze videos of laparoscopic procedures.
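
As a concrete illustration of the kind of model under review, the sketch below shows how an ImageNet-pretrained convolutional neural network can be repurposed as a frame-level classifier for laparoscopic video, for example to flag which instruments are visible in a frame. The backbone choice, class count, and preprocessing are illustrative assumptions (PyTorch with a recent torchvision), not a reconstruction of any reviewed study's pipeline.

    # Minimal sketch of a CNN frame classifier for laparoscopic video
    # (illustrative assumptions only; not the architecture of any reviewed study).
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_INSTRUMENT_CLASSES = 7  # hypothetical, e.g. seven instrument types

    # ImageNet-pretrained backbone with the classification head replaced
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_INSTRUMENT_CLASSES)
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # For a single video frame loaded as a PIL image `frame`, several
    # instruments may be visible at once, so per-class presence is scored
    # with a sigmoid (multi-label classification):
    #   logits = backbone(preprocess(frame).unsqueeze(0))
    #   probabilities = torch.sigmoid(logits)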

Methods

The Medline, Embase, IEEE Xplore, and Web of Science databases were searched from January 2012 to May 5, 2020. Selected studies tested deep learning models, specifically convolutional neural networks, for video analysis of laparoscopic surgery. Study characteristics, including dataset source, type of operation, number of videos, and prediction application, were compared. A random effects model was used to estimate the pooled sensitivity and specificity of the computer algorithms. Summary receiver operating characteristic curves were calculated using the bivariate model of Reitsma.
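
The pooling step can be sketched as follows. This is a simplified univariate random-effects model (DerSimonian-Laird on the logit scale) applied to sensitivity only, with made-up study counts; the review itself used the bivariate model of Reitsma, which jointly models logit sensitivity and logit specificity and is usually fit with dedicated meta-analysis software.

    # Simplified random-effects pooling of per-study sensitivities
    # (illustrative only; hypothetical counts, not data from the included studies).
    import numpy as np

    def logit(p):
        return np.log(p / (1 - p))

    def inv_logit(x):
        return 1 / (1 + np.exp(-x))

    def pool_random_effects(events, totals):
        """DerSimonian-Laird pooling of proportions on the logit scale."""
        events = np.asarray(events, dtype=float)
        totals = np.asarray(totals, dtype=float)
        p = (events + 0.5) / (totals + 1.0)          # continuity correction
        y = logit(p)                                 # per-study logit estimates
        v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)  # approx. variances

        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance

        w_re = 1 / (v + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1 / np.sum(w_re))
        return inv_logit(y_re), (inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se))

    # Hypothetical true positives and diseased counts for four studies
    pooled, ci = pool_random_effects(events=[45, 90, 30, 120], totals=[50, 100, 33, 128])
    print(f"pooled sensitivity ~ {pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")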

Results

Thirty-two of the 508 studies identified met the inclusion criteria. Applications included instrument recognition and detection (45%), phase recognition (20%), anatomy recognition and detection (15%), action recognition (13%), surgery time prediction (5%), and gauze recognition (3%). The most commonly tested procedures were cholecystectomy (51%) and gynecological operations, mainly hysterectomy and myomectomy (26%). A total of 3004 videos were analyzed. In 2020, publications in clinical journals increased relative to those in bio-computational journals. Four studies provided enough data to construct 8 contingency tables, enabling calculation of test accuracy, with a pooled sensitivity of 0.93 (95% CI 0.85–0.97) and specificity of 0.96 (95% CI 0.84–0.99). However, the majority of papers had a high risk of bias.
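
For context, each 2 × 2 contingency table yields a study-level sensitivity and specificity as in the short sketch below; the counts are hypothetical and not drawn from the four studies above.

    # Sensitivity and specificity from one 2x2 contingency table
    # (hypothetical frame-level counts, for illustration only).
    def diagnostic_accuracy(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # proportion of positive frames detected
        specificity = tn / (tn + fp)   # proportion of negative frames correctly rejected
        return sensitivity, specificity

    sens, spec = diagnostic_accuracy(tp=930, fp=40, fn=70, tn=960)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")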

Conclusions

Deep learning research holds potential in laparoscopic surgery but remains methodologically limited. Clinicians can help advance AI in surgery, specifically by contributing standardized visual databases and standardized reporting.

References

  1. Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25:44–56

  2. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp 1097–1105

  3. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E (2019) Convolutional neural networks for radiologic images: a radiologist’s guide. Radiology 290:590–606

  4. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, Askham H, Glorot X, O’Donoghue B, Visentin D (2018) Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 24:1342–1350

  5. Milea D, Najjar RP, Zhubo J, Ting D, Vasseneix C, Xu X, Aghsaei Fard M, Fonseca P, Vanikieti K, Lagrèze WA (2020) Artificial intelligence to detect papilledema from ocular fundus photographs. N Engl J Med 382:1687–1695

  6. Brinker TJ, Hekler A, Enk AH, Berking C, Haferkamp S, Hauschild A, Weichenthal M, Klode J, Schadendorf D, Holland-Letz T (2019) Deep neural networks are superior to dermatologists in melanoma image classification. Eur J Cancer 119:11–17

  7. Soffer S, Klang E, Shimon O, Nachmias N, Eliakim R, Ben-Horin S, Kopylov U, Barash Y (2020) Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 92:831–839

  8. Hashimoto DA, Rosman G, Rus D, Meireles OR (2018) Artificial intelligence in surgery: promises and perils. Ann Surg 268:70–76

  9. McInnes MD, Moher D, Thombs BD, McGrath TA, Bossuyt PM, Clifford T, JrmF C, Deeks JJ, Gatsonis C, Hooft L (2018) Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 319:388–396

  10. Moons KG, de Groot JA, Bouwmeester W, Vergouwe Y, Mallett S, Altman DG, Reitsma JB, Collins GS (2014) Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med 11:e1001744

  11. Luo W, Phung D, Tran T, Gupta S, Rana S, Karmakar C, Shilton A, Yearwood J, Dimitrova N, Ho TB, Venkatesh S, Berk M (2016) Guidelines for developing and reporting machine learning predictive models in biomedical research: a multidisciplinary view. J Med Internet Res 18:e323

  12. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM (2011) QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 155:529–536

  13. Fleuren LM, Klausch TL, Zwager CL, Schoonmade LJ, Guo T, Roggeveen LF, Swart EL, Girbes AR, Thoral P, Ercole A (2019) Machine learning for the prediction of sepsis, a systematic review and meta-analysis of diagnostic test accuracy. Intensive Care Med 46:383–400

  14. Kwong MT, Colopy GW, Weber AM, Ercole A, Bergmann JH (2019) The efficacy and effectiveness of machine learning for weaning in mechanically ventilated patients at the intensive care unit: a systematic review. Bio-Design Manuf 2:31–40

  15. Islam MS, Hasan MM, Wang X, Germack HD (2018) A systematic review on healthcare analytics: application and theoretical perspective of data mining. Healthcare 6(2):54

  16. de la Fuente LE, Muñoz García Á, Santos Del Blanco L, Fraile Marinero JC, Pérez Turiel J (2020) Automatic gauze tracking in laparoscopic surgery using image texture analysis. Comput Methods Programs Biomed 190:105378

  17. Kletz S, Schoeffmann K, Husslein H (2019) Learning the representation of instrument images in laparoscopy videos. Healthc Technol Lett 6:197–203

  18. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH (2005) Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol 58:982–990

  19. Macaskill P, Gatsonis C, Deeks J, Harbord R, Takwoingi Y (2010) Cochrane handbook for systematic reviews of diagnostic test accuracy. Version 0.9.0. The Cochrane Collaboration, London

  20. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, Mahendiran T, Moraes G, Shamdas M, Kern C (2019) A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health 1:e271–e297

  21. Ioannidis JP, Patsopoulos NA, Evangelou E (2007) Uncertainty in heterogeneity estimates in meta-analyses. BMJ 335:914–916

  22. Kletz S, Schoeffmann K, Benois-Pineau J, Husslein H (2019) Identifying surgical instruments in laparoscopy using deep learning instance segmentation. In: 2019 International Conference on Content-Based Multimedia Indexing (CBMI), pp 1–6

  23. Twinanda AP, Shehata S, Mutter D, Marescaux J, de Mathelin M, Padoy N (2017) EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imaging 36:86–97

  24. Leibetseder A, Petscharnig S, Primus MJ, Kletz S, Münzer B, Schoeffmann K, Keckstein J (2018) Lapgyn4: a dataset for 4 automatic content analysis problems in the domain of laparoscopic gynecology. In: Proceedings of the 9th ACM Multimedia Systems Conference, pp 357–362

  25. Nwoye CI, Mutter D, Marescaux J, Padoy N (2019) Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. Int J Comput Assist Radiol Surg 14:1059–1067

  26. Jin A, Yeung S, Jopling J, Krause J, Azagury D, Milstein A, Li FF (2018) Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision, pp 691–699

  27. Hu XW, Yu LQ, Chen H, Qin J, Heng PA (2017) AGNet: attention-guided network for surgical tool presence detection. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp 186–194

  28. Mishra K, Sathish R, Sheet D (2017) Learning latent temporal connectionism of deep residual visual abstractions for identifying surgical tools in laparoscopy procedures. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp 2233–2240

  29. Varytimidis C, Rapantzikos K, Loukas C, Kollias S (2016) Surgical video retrieval using deep neural networks

  30. Wang S, Raju A, Huang J (2017) Deep learning based multi-label classification for surgical tool presence detection in laparoscopic videos. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp 620–623

  31. Yamazaki Y, Kanaji S, Matsuda T, Oshikiri T, Nakamura T, Suzuki S, Hiasa Y, Otake Y, Sato Y, Kakeji Y (2020) Automated surgical instrument detection from laparoscopic gastrectomy video images using an open source convolutional neural network platform. J Am Coll Surg 230:725-732.e721

  32. Zhang B, Wang S, Dong L, Chen P (2020) Surgical tools detection based on modulated anchoring network in laparoscopic videos. IEEE Access 8:23748–23758

  33. Choi B, Jo K, Choi S, Choi J (2017) Surgical-tools detection based on Convolutional Neural Network in laparoscopic robot-assisted surgery. Conf Proc IEEE Eng Med Biol Soc 2017:1756–1759

  34. Madad Zadeh S, Francois T, Calvet L, Chauvet P, Canis M, Bartoli A, Bourdel N (2020) SurgAI: deep learning for computerized laparoscopic image understanding in gynaecology. Surg Endosc 34:5377–5383

  35. Sahu M, Mukhopadhyay A, Szengel A, Zachow S (2017) Addressing multi-label imbalance problem of surgical tool detection using CNN. Int J Comput Assist Radiol Surg 12:1013–1020

  36. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767

  37. Hashimoto DA, Rosman G, Witkowski ER, Stafford C, Navarette-Welton AJ, Rattner DW, Lillemoe KD, Rus DL, Meireles OR (2019) Computer vision analysis of intraoperative video: automated recognition of operative steps in laparoscopic sleeve gastrectomy. Ann Surg 270:414–421

  38. Chittajallu DR, Dong B, Tunison P, Collins R, Wells K, Fleshman J, Sankaranarayanan G, Schwaitzberg S, Cavuoto L, Enquobahrie A (2019) XAI-CBIR: Explainable AI system for content based retrieval of video frames from minimally invasive surgery videos. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp 66–69

  39. Jalal NA, Alshirbaji TA, Möller K (2019) Predicting surgical phases using CNN-NARX neural network. Biomed Tech 64:S188

  40. Kitaguchi D, Takeshita N, Matsuzaki H, Takano H, Owada Y, Enomoto T, Oda T, Miura H, Yamanashi T, Watanabe M, Sato D, Sugomori Y, Hara S, Ito M (2019) Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surg Endosc 34:4924–4931

  41. Loukas C (2018) Surgical phase recognition of short video shots based on temporal modeling of deep features. arXiv preprint arXiv:1807.07853

  42. Chen Y, Tang P, Zhong K, Han L, Qi B, Sun Q (2019) Semi-supervised surgical workflow recognition based on convolution neural network. Basic Clin Pharmacol Toxicol 124:52

  43. Jalal NA, Alshirbaji TA, Möller K (2018) Evaluating convolutional neural network and hidden markov model for recognising surgical phases in sigmoid resection. Biomed Tech 63:S251

  44. Petscharnig S, Schöffmann K, Benois-Pineau J, Chaabouni S, Keckstein J (2018) Early and late fusion of temporal information for classification of surgical actions in laparoscopic gynecology. In: 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), pp 369–374

  45. Tokuyasu T, Iwashita Y, Matsunobu Y, Kamiyama T, Ishikake M, Sakaguchi S, Ebe K, Tada K, Endo Y, Etoh T, Nakashima M, Inomata M (2020) Development of an artificial intelligence system using deep learning to indicate anatomical landmarks during laparoscopic cholecystectomy. Surg Endosc. https://doi.org/10.1007/s00464-020-07548-x

  46. Harangi B, Hajdu A, Lampe R, Torok P (2017) Recognizing ureter and uterine artery in endoscopic images using a convolutional neural network. In: 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), pp 726–727

  47. Gibson E, Robu MR, Thompson S, Edwards E, Schneider C, Gurusamy K, Davidson B, Hawkesa DJ, Barratta DC, Clarkson MJ (2017) Deep residual networks for automatic segmentation of laparoscopic videos of the liver. Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling

  48. Chittajallu DR, Basharat A, Tunison P, Horvath S, Wells KO, Leeds SG, Fleshman JW, Sankaranarayanan G, Enquobahrie A (2019) Content-based retrieval of video segments from minimally invasive surgery videos using deep convolutional video descriptors and iterative query refinement. Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling

  49. Münzer B, Primus MJ, Kletz S, Petscharnig S, Schoeffmann K (2017) Static vs. dynamic content descriptors for video retrieval in laparoscopy. In: 2017 IEEE International Symposium on Multimedia (ISM), pp 216–223

  50. Petscharnig S, Schoffmann K (2018) Learning laparoscopic video shot classification for gynecological surgery. Multimedia Tools Appl 77:8061–8079

  51. Twinanda AP, Yengera G, Mutter D, Marescaux J, Padoy N (2019) RSDNet: Learning to predict remaining surgery duration from laparoscopic videos without manual annotations. IEEE Trans Med Imaging 38:1069–1078

  52. Bodenstedt S, Wagner M, Mündermann L, Kenngott H, Müller-Stich B, Breucha M, Mees ST, Weitz J, Speidel S (2019) Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data. Int J Comput Assist Radiol Surg 14:1089–1095

  53. Fleuren LM, Klausch TL, Zwager CL, Schoonmade LJ, Guo T, Roggeveen LF, Swart EL, Girbes AR, Thoral P, Ercole A (2020) Machine learning for the prediction of sepsis: a systematic review and meta-analysis of diagnostic test accuracy. Intensive Care Med 46:1–18

  54. Krittanawong C, Virk HUH, Bangalore S, Wang Z, Johnson KW, Pinotti R, Zhang H, Kaplin S, Narasimhan B, Kitai T (2020) Machine learning prediction in cardiovascular diseases: a meta-analysis. Sci Rep 10:1–11

  55. Maier-Hein L, Wagner M, Ross T, Reinke A, Bodenstedt S, Full PM, Hempe H, Mindroc-Filimon D, Scholz P, Tran TN (2020) Heidelberg colorectal data set for surgical data science in the sensor operating room. arXiv preprint arXiv:2005.03501

  56. Stauder R, Ostler D, Kranzfelder M, Koller S, Feußner H, Navab N (2016) The TUM LapChole dataset for the M2CAI 2016 workflow challenge. arXiv preprint arXiv:1610.09278

  57. Stauder R, Okur A, Peter L, Schneider A, Kranzfelder M, Feussner H, Navab N (2014) Random forests for phase detection in surgical workflow analysis. In: International Conference on Information Processing in Computer-Assisted Interventions, Springer, pp 148–157

Funding

This work received no specific source of funding.

Author information

Contributions

RA, NH, SS, YZ, YB, IA, DR, MG, EK were involved in the study concept and manuscript writing. RA, NH, SS, EK were involved in the data analysis and data interpretation. All authors have read and approved the final version of the manuscript.

Corresponding author

Correspondence to Roi Anteby.

Ethics declarations

Disclosures

Roi Anteby, Nir Horesh, Shelly Soffer, Yaniv Zager, Yiftach Barash, Imri Amiel, Danny Rosin, Mordechai Gutman, and Eyal Klang have no conflicts of interest or financial ties to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 734 kb)

About this article

Cite this article

Anteby, R., Horesh, N., Soffer, S. et al. Deep learning visual analysis in laparoscopic surgery: a systematic review and diagnostic test accuracy meta-analysis. Surg Endosc 35, 1521–1533 (2021). https://doi.org/10.1007/s00464-020-08168-1
