Trustworthy or flawed clinical prediction rule?

The Original Article was published on 12 December 2017

We read with interest the recently published paper by Hilder et al. [1], in which the authors present the PRESET-Score, a new clinical prediction rule for patients with acute respiratory distress syndrome treated with extracorporeal membrane oxygenation (ECMO). While the topic is clinically relevant and interesting, we are concerned that the reported findings may be spurious, biased, and overstated.

First, the new score and the four existing scores assessed are at high risk of being underpowered. Multivariable risk prediction models should be based on an effective sample size (the lower of the number of events and non-events) of at least 10, and often more, per predictor variable assessed [2, 3]. Using 11 variables and 41 non-events (3.7 per predictor) results in overfitting to the development sample and inflated performance estimates [2]. This will become evident when the score is applied in other populations.
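
To make the arithmetic behind this point explicit, here is a minimal sketch using only the figures cited above and the conventional rule-of-thumb threshold of 10 events per variable:

```python
# Minimal illustration of the events-per-variable (EPV) arithmetic described above.
# The figures (11 candidate predictors, 41 non-events) are those cited in this letter;
# the threshold of 10 is the conventional rule of thumb, not a strict cut-off.
candidate_predictors = 11
effective_sample_size = 41        # lower of the number of events and non-events

epv = effective_sample_size / candidate_predictors
required = 10 * candidate_predictors

print(f"EPV = {epv:.1f} (rule of thumb: >= 10)")
print(f"Effective sample size needed for {candidate_predictors} predictors: >= {required}")
```

By this rule of thumb, at least roughly 110 events (or non-events, whichever is fewer) would be needed to support 11 candidate predictors.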

Second, comparing the performance of the new score with that of the four existing scores in the development dataset is against recommendations [2], as such a comparison is biased in favor of the new score owing to overfitting. Comparisons with other scores require an independent cohort that was not used to develop any of the scores [2].
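
A small simulation can illustrate why such a comparison is biased. This is a hypothetical sketch with simulated data and an arbitrarily chosen "existing" score, not an analysis of the PRESET cohort:

```python
# Hypothetical illustration with simulated data (not the PRESET cohort): a model
# fitted in a small development sample appears to outperform a fixed pre-existing
# score in that same sample, but the advantage largely disappears in independent data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(n, p=11):
    X = rng.normal(size=(n, p))
    true_logit = 0.5 * X[:, 0] + 0.3 * X[:, 1]   # only 2 of the 11 predictors matter
    y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
    return X, y

X_dev, y_dev = simulate(100)     # small development cohort
X_ind, y_ind = simulate(5000)    # independent cohort

new_score = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)   # fitted on development data
existing = lambda X: 0.5 * X[:, 0] + 0.3 * X[:, 1]                # fixed, pre-existing score

for label, X, y in [("Development", X_dev, y_dev), ("Independent", X_ind, y_ind)]:
    print(label,
          "new:", round(roc_auc_score(y, new_score.predict_proba(X)[:, 1]), 2),
          "existing:", round(roc_auc_score(y, existing(X)), 2))
```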

Third, internal validation is performed to quantify overfitting and should be done by bootstrap resampling of the development dataset [2]. The authors state that they used logistic regression analysis to “reassess” the score, which is essentially a recalibration resulting in a new model that generates new predictions. This is neither internal nor external validation, which requires assessing the predictions made by the score, without modification, in a new sample [2].
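
For illustration, a minimal sketch of bootstrap internal validation with optimism correction, assuming simulated data in place of the development cohort and a plain logistic model in place of the published score:

```python
# Sketch of bootstrap internal validation (optimism correction) under simple assumptions:
# a logistic model is refitted in each bootstrap sample, and the average drop in AUC
# when that model is applied back to the original data estimates the optimism of the
# apparent performance. Simulated data stand in for the development cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, p = 120, 11
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.3 * X[:, 1]))))

def fit_auc(X_fit, y_fit, X_eval, y_eval):
    model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

apparent = fit_auc(X, y, X, y)                      # AUC of the model in its own data
optimism = []
for _ in range(200):                                # 200 bootstrap repetitions
    idx = rng.integers(0, n, n)                     # resample with replacement
    boot_apparent = fit_auc(X[idx], y[idx], X[idx], y[idx])
    boot_test = fit_auc(X[idx], y[idx], X, y)       # same model, applied to original data
    optimism.append(boot_apparent - boot_test)

print(f"Apparent AUC: {apparent:.2f}")
print(f"Optimism-corrected AUC: {apparent - np.mean(optimism):.2f}")
```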

Fourth, it is recommended to assess calibration by graphical methods or by regression of predicted versus observed outcomes [2, 4], not by the Hosmer-Lemeshow Ĉ-test, as P > 0.05 is more likely to reflect a lack of power than adequate model fit when the test is applied to small samples.
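
A minimal sketch of such a graphical and regression-based calibration assessment, assuming hypothetical predicted risks rather than the actual PRESET predictions:

```python
# Sketch of a graphical calibration assessment, assuming predicted risks from a score
# are available for each patient (simulated values used here): observed event
# proportions are plotted against predicted risk, and the calibration intercept and
# slope are estimated by regressing the outcome on the linear predictor.
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(2)
predicted = rng.uniform(0.05, 0.95, 300)                            # hypothetical predicted risks
observed = rng.binomial(1, np.clip(predicted * 1.2 - 0.05, 0, 1))   # imperfectly calibrated outcomes

# Grouped observed versus predicted risk (deciles of predicted risk)
bins = np.quantile(predicted, np.linspace(0, 1, 11))
groups = np.digitize(predicted, bins[1:-1])
pred_mean = [predicted[groups == g].mean() for g in range(10)]
obs_mean = [observed[groups == g].mean() for g in range(10)]

plt.plot(pred_mean, obs_mean, "o-", label="model")
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.xlabel("Predicted risk"); plt.ylabel("Observed proportion"); plt.legend()
plt.savefig("calibration_plot.png")

# Calibration intercept and slope on the logit scale
lp = np.log(predicted / (1 - predicted))                            # linear predictor (logit of risk)
fit = sm.Logit(observed, sm.add_constant(lp)).fit(disp=0)
print("Calibration intercept and slope:", fit.params)
```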

While we agree that clinical prediction rules may be valuable for clinicians considering ECMO, it is a prerequisite that such scores are developed and validated using appropriate methodology [2] and sufficient sample sizes, and that all relevant features are transparently reported with adequate discussion of the limitations [5]. Developing and sufficiently validating a clinical prediction rule for this highly selected patient group likely requires a large, multicentre collaboration to ensure trustworthy predictions that will benefit patients and relatives, the healthcare system, researchers, and society.

Abbreviations

ECMO: Extracorporeal membrane oxygenation

References

  1. Hilder M, Herbstreit F, Adamzik M, Beiderlinden M, Bürschen M, Peters J, et al. Comparison of mortality prediction models in acute respiratory distress syndrome undergoing extracorporeal membrane oxygenation and development of a novel prediction score: the PREdiction of Survival on ECMO Therapy-Score (PRESET-Score). Crit Care. 2017. https://doi.org/10.1186/s13054-017-1888-6.

  2. Labarère J, Bertrand R, Fine MJ. How to derive and validate clinical prediction models for use in intensive care medicine. Intensive Care Med. 2014. https://doi.org/10.1007/s00134-014-3227-6.

  3. Courvoisier DS, Combescure C, Agoritsas T, Gayet-Ageron A, Perneger TV. Performance of logistic regression modeling: Beyond the number of events per variable, the role of data structure. J Clin Epidemiol. 2011. https://doi.org/10.1016/j.jclinepi.2010.11.012.

  4. Van Calster B, Nieboer D, Vergouwe Y, De Cock B, Pencina MJ, Steyerberg EW. A calibration hierarchy for risk models was defined: From utopia to empirical data. J Clin Epidemiol. 2016. https://doi.org/10.1016/j.jclinepi.2015.12.005.

  5. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement. BMJ. 2014. https://doi.org/10.1136/bmj.g7594.

Acknowledgements

Not applicable.

Funding

None.

Availability of data and materials

Not applicable.

Author information

Contributions

AG and MHM were responsible for the conception of the letter and drafted the manuscript. All authors revised, read, and approved submission of the final manuscript.

Corresponding author

Correspondence to Morten Hylander Møller.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

See related research by Hilder et al., https://ccforum.biomedcentral.com/articles/10.1186/s13054-017-1888-6.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Granholm, A., Perner, A., Jensen, A.K.G. et al. Trustworthy or flawed clinical prediction rule?. Crit Care 22, 31 (2018). https://doi.org/10.1186/s13054-018-1961-9
