Prospective predictive performance comparison between clinical gestalt and validated COVID-19 mortality scores
==============================================================================================================

* Adrian Soto-Mota
* Braulio Alejandro Marfil-Garza
* Santiago Castiello-de Obeso
* Erick Jose Martinez Rodriguez
* Daniel Alberto Carrillo Vazquez
* Hiram Tadeo-Espinoza
* Jessica Paola Guerrero Cabrera
* Francisco Eduardo Dardon-Fierro
* Juan Manuel Escobar-Valderrama
* Jorge Alanis-Mendizabal
* Juan Gutierrez-Mejia

## Abstract

Most COVID-19 mortality scores were developed at the beginning of the pandemic, and clinicians now have more experience and evidence-based interventions. We therefore hypothesized that the predictive performance of COVID-19 mortality scores is now lower than originally reported. We aimed to prospectively evaluate the current predictive accuracy of six COVID-19 scores and to compare it with the accuracy of clinical gestalt predictions. 200 patients with COVID-19 were enrolled in a tertiary hospital in Mexico City between September and December 2020. The area under the curve (AUC) was determined for the LOW-HARM, qSOFA, MSL-COVID-19, NUTRI-CoV, and NEWS2 scores, for the neutrophil to lymphocyte ratio, and for clinical gestalt predictions of death (expressed as a percentage). In total, 166 patients (106 men and 60 women aged 56±9 years) with confirmed COVID-19 were included in the analysis. The AUC of all scores was significantly lower than originally reported: LOW-HARM 0.76 (95% CI 0.69 to 0.84) vs 0.96 (95% CI 0.94 to 0.98), qSOFA 0.61 (95% CI 0.53 to 0.69) vs 0.74 (95% CI 0.65 to 0.81), MSL-COVID-19 0.64 (95% CI 0.55 to 0.73) vs 0.72 (95% CI 0.69 to 0.75), NUTRI-CoV 0.60 (95% CI 0.51 to 0.69) vs 0.79 (95% CI 0.76 to 0.82), NEWS2 0.65 (95% CI 0.56 to 0.75) vs 0.84 (95% CI 0.79 to 0.90), and neutrophil to lymphocyte ratio 0.65 (95% CI 0.57 to 0.73) vs 0.74 (95% CI 0.62 to 0.85). Clinical gestalt predictions were non-inferior to mortality scores, with an AUC of 0.68 (95% CI 0.59 to 0.77). Adjusting scores with locally derived likelihood ratios did not improve their performance; however, some scores outperformed clinical gestalt predictions when clinicians' confidence in their prediction was ≤80%. Despite its subjective nature, clinical gestalt has relevant advantages in predicting COVID-19 clinical outcomes. The need for, and performance of, COVID-19 mortality scores should be re-evaluated regularly.

* COVID-19
* prognosis

### Significance of this study

#### What is already known about this subject?

* Multiple scores have been designed or repurposed to predict survival in patients with COVID-19; however, all of them were designed or validated during the early days of the pandemic, and COVID-19 healthcare has improved greatly since then.
* Clinical gestalt has been shown to accurately predict survival in other clinical contexts.

#### What are the new findings?

* The observed area under the curve of all scores was significantly lower than originally reported.
* No score was significantly better than clinical gestalt predictions.

#### How might these results change the focus of research or clinical practice?

* The need for, and performance of, COVID-19 mortality scores should be re-evaluated regularly.
## Introduction

### Background

Many prediction models have been developed for COVID-19,1–5 and their applications in healthcare range from bedside counseling to triage systems.6 However, most have been developed within specific clinical contexts1 2 or validated with data from the early months of the pandemic.4 5 Since then, health systems have implemented protocols and adaptations to cope with surges in hospitalization rates,7 and clinicians now have more knowledge and experience in managing these patients. Additionally, non-biological factors such as critical care availability have been found to strongly influence the prognosis of patients with COVID-19.8 9 These frequently intangible factors (eg, the experience of the staff with specific healthcare tasks) affect prognosis but are ignored by mortality scores.

Prediction models are context-sensitive10; therefore, to preserve their accuracy, they must be applied in contexts as similar as possible to the ones they were derived from. Because healthcare systems and settings differ considerably around the world, there are many examples of scores requiring adjustments or local adaptations.11 12

Predicting outcomes is an everyday activity in most medical fields, and in other scenarios clinicians' subjective predictions have been observed to be as accurate as mathematically derived models.13–15 However, the opposite has been observed as well; for example, clinicians tend to overestimate the long-term survival of oncological patients.16

This work aimed to compare the predictive performance of different mortality prediction models for COVID-19 (some of them in the same hospital where they were developed) against their original performance and against clinical gestalt predictions.

## Methods

### Study design

This prospective observational study was carried out in a tertiary hospital in Mexico City, fully dedicated to providing COVID-19 healthcare, between October and December 2020.

### Selection of subjects

Data from 200 consecutive hospital admissions for RT-PCR-confirmed COVID-19 were obtained between October and December 2020. We excluded from the analysis all patients without a documented clinical outcome (eg, patients who had not yet been discharged at the time of data collection, had been transferred to another hospital, or had been voluntarily discharged). A total of 166 patients were included in the analysis because 34 patients were either transferred to other hospitals or voluntarily discharged. The most frequent criteria for hospital admission were need for supplemental oxygen to keep oxygen saturation >90%, respiratory rate >20 breaths/min, need for ventilation (non-invasive or invasive), severity of pneumonia on CT, hemodynamic instability, and impossibility of home isolation.

A total of 24 internal medicine residents with more than 6 months of experience in COVID-19 healthcare (all residency programs in Mexico start on March 1 every year) participated in the study. Their median hospital experience was 2 years (IQR 1–3).

### Measurements

Clinical gestalt predictions and all data needed to calculate the prognostic scores were obtained at hospital admission between October and December 2020. The internal medicine residents in charge of collecting the clinical history, physical examination, and initial imaging and laboratory work-up were asked the following questions once all initial imaging and laboratory reports were available:

* How likely do you think this patient will die from COVID-19? (as a percentage).
* How confident are you of that prediction? (as a percentage).
To obtain the earliest and best-informed clinical gestalt prediction available, we asked only the resident in charge of each patient's hospital admission. While clinical gestalt scores likely vary between evaluators, inviting more evaluators would have required evaluating the same patient at different times (giving a 'predictive advantage' to later scorers, who would be able to see whether a patient was improving after the initial therapeutic interventions) and would have allowed predictions based on different levels of information (from evaluators who did not spend the same amount of time directly examining the patient).

To test the hypothesis that updating the statistical weights of a score with local data could help preserve its original accuracy, we developed a second version of the LOW-HARM score (LOW-HARM V.2) using positive and negative likelihood ratios derived from cohorts of Mexican patients4 8 (instead of only positive likelihood ratios from Chinese patients17 18 as in the original version). The likelihood ratios (LR+/LR−) used to calculate the LOW-HARM V.2 score were as follows: oxygen saturation <88%=2.61/0.07; previous diagnosis of hypertension=2.37/0.65; elevated troponin (>20 pg/mL)=15.6/0.62; elevated creatine phosphokinase (>223 U/L)=2.37/0.88; leukocyte count >10.0×10⁹/L=5.6/0.48; lymphocyte count <800 cells/µL (<0.8×10⁹/L)=2.24/0.48; and serum creatinine >1.5 mg/dL=19.1/0.60. All previously validated scores were calculated by the research team.

### Outcomes

The primary outcome of this study was the area under the curve (AUC) of each COVID-19 mortality prediction method. To test the hypothesis that the predictive performance of already validated scores has declined over time, we chose the LOW-HARM,4 MSL-COVID-19, and NUTRI-CoV5 scores because all three were validated with data from Mexican patients with COVID-19. To rule out that this was a phenomenon exclusive to scores developed with Mexican data, we also re-evaluated the accuracy of the NEWS2 score,1 the qSOFA score,2 and the neutrophil to lymphocyte ratio in predicting mortality from COVID-19.19 To test the hypothesis that scores outperform clinical gestalt predictions when confidence in those predictions is 'low' (at or below the median perceived confidence; ie, ≤80%), we conducted a comparative AUC analysis of cases at or below versus above this threshold.

### Analysis

Clinical and demographic data were summarized using the mean or median (depending on their distribution), with SD or IQR as dispersion measures. Shapiro-Wilk tests were used to assess whether variables were normally distributed. The R V.4.0.3 packages 'caret' (confusion matrix calculations) and 'pROC' (receiver operating characteristic (ROC) curve analysis), together with STATA V.12, were used for the statistical analysis. AUC differences were analyzed using DeLong's method with the STATA function 'roccomp'.20 A p value of <0.05 was used to infer statistical significance in all tests. Missing data were handled by mean substitution.
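For illustration, the minimal sketch below reproduces this ROC workflow in R with the packages named above. The data frame `df` and its columns `died`, `gestalt`, and `low_harm` are hypothetical placeholders for the study dataset, and the pairwise DeLong test from 'pROC' stands in for the multi-curve comparison that was run in STATA with 'roccomp'.

```r
library(pROC)   # roc(), auc(), ci.auc(), roc.test()
library(caret)  # confusionMatrix()

# Hypothetical data frame: one row per patient, with the observed outcome
# `died` (1 = died in hospital, 0 = survived) and the admission predictions
# `gestalt` (stated probability of death, 0-100) and `low_harm` (score points).
# df <- read.csv("admissions.csv")  # hypothetical source of the admission data

roc_gestalt <- roc(died ~ gestalt,  data = df, levels = c(0, 1), direction = "<")
roc_lowharm <- roc(died ~ low_harm, data = df, levels = c(0, 1), direction = "<")

auc(roc_gestalt); ci.auc(roc_gestalt)   # AUC with its DeLong 95% CI
auc(roc_lowharm); ci.auc(roc_lowharm)

# Paired DeLong test for the difference between two correlated AUCs
roc.test(roc_gestalt, roc_lowharm, method = "delong")

# Sensitivity and specificity at the Youden-optimal cut-off, then a full
# confusion matrix at that cut-off
best <- coords(roc_lowharm, "best", best.method = "youden")
pred <- factor(as.integer(df$low_harm >= best$threshold), levels = c(0, 1))
confusionMatrix(pred, factor(df$died, levels = c(0, 1)), positive = "1")
```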
### Sample size rationale

We calculated the sample size using 'easyROC',21 an open R-based web tool that estimates sample sizes for direct and non-inferiority AUC comparisons using Obuchowski's method.22 To detect non-inferiority with a maximal AUC difference of 0.05 from the reported AUC of the LOW-HARM score (0.96, 95% CI 0.94 to 0.98), with a case allocation ratio of 0.7 (the mortality at our center is ~0.3), a power of 0.8, and a significance cut-off of 0.05, 159 patients would be needed. To detect a difference between AUCs of >0.1, 99 patients would be needed with the rest of the parameters held constant. To allow for a patient loss rate of ~25%, we obtained data from 200 consecutive hospital admissions.

### Patient and public involvement

Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.

## Results

### Characteristics of study subjects

We included 166 patients in our study. Of these, 47 (28.3%) died, while 119 (71.7%) survived. The general demographics and clinical characteristics of these populations are shown in table 1. As expected, decreased peripheral oxygen saturation, ventilatory support, cardiac injury, renal injury, leukocytosis, and lymphopenia were more prevalent in the group of patients who died during their hospitalization.

Table 1 Patient demographics and clinical data

### Main results

Table 2 shows the median score and its IQR for each prediction tool. As expected, the mean difference between groups was more pronounced for tools based on a 100-point scale (clinical gestalt, LOW-HARM scores). Table 2 also shows the originally reported AUC versus the AUC we observed in our data.

Table 2 Distribution and accuracy of selected mortality prediction tools

### Performance characteristics of selected predictive models and AUC comparisons

Figure 1 shows the performance characteristics of the selected predictive models. Overall, we found a statistically significant difference between predictive models (p=0.002). However, we did not find statistically significant differences between clinical gestalt and the other prediction tools.

Figure 1 AUC comparison of selected mortality prediction tools. AUC, area under the curve.

As expected, we found that the confidence of prediction increased in cases in which the predicted probability of death was clearly high or clearly low (figure 2).

Figure 2 Clinical gestalt prediction and confidence of prediction.

We found a moderate to strong correlation between the confidence of prediction and the predicted probability of death both below 50% predicted probability of death (Pearson's r=0.60, p<0.0001) and above it (Pearson's r=0.50, p=0.0002).

We further explored the performance characteristics of the selected predictive models in specific contexts (online supplemental appendix table 1). Figure 3 shows the results of the analysis including cases in which the certainty of prediction was below and above 80%. Overall, we found a statistically significant difference between predictive models in both settings. In cases in which the confidence of prediction was ≤80%, both versions of the LOW-HARM score showed a larger AUC than clinical gestalt (figure 3B and online supplemental appendix table 1).

### Supplementary data

Supplementary data (jim-2021-002037supp001.pdf).

Figure 3 AUC comparison of selected mortality prediction tools according to confidence of prediction. (A) AUC comparison of selected mortality prediction tools in cases where the confidence of prediction was >80%. (B) AUC comparison of selected mortality prediction tools in cases where the confidence of prediction was ≤80%. AUC, area under the curve.
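Continuing the sketch from the Analysis subsection, the confidence-stratified comparison shown in figure 3 can be reproduced by subsetting on the residents' stated confidence before refitting the ROC curves; the column name `confidence` is again a hypothetical placeholder.

```r
# Confidence-stratified comparison (figure 3), continuing the sketch above;
# `confidence` (0-100) is the resident's stated confidence in the prediction.
low_conf <- subset(df, confidence <= 80)   # subgroup shown in figure 3B

roc.test(roc(died ~ gestalt,  data = low_conf, levels = c(0, 1), direction = "<"),
         roc(died ~ low_harm, data = low_conf, levels = c(0, 1), direction = "<"),
         method = "delong")
```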
An additional analysis restricted to cases in which the certainty of prediction was ≤80% and the predicted probability of death was ≤30% (ie, the median value for all cases) also found a statistically significant difference between predictive models (p=0.0005). Similarly, individual comparisons showed larger, statistically significant AUC differences between clinical gestalt and both versions of the LOW-HARM score (online supplemental appendix table 1).

## Discussion

Outcome prediction plays an important role in everyday clinical practice. This work highlights the inherent limitations of statistically derived scores and some of the advantages of clinical gestalt predictions. In other scenarios where predictive scores are frequently used, more experienced clinicians can always weigh in with their sometimes subjective, yet valuable, insight. With the COVID-19 pandemic, however, clinicians of all levels of training started their learning curve at the same time. In this study, we had the unique opportunity of re-evaluating more than one score (two of them in the same setting and for the same purpose for which they were designed), while testing the accuracy of clinical gestalt, in a group of clinicians who started their learning curve for managing a disease at the same time (experience and training within healthcare teams are usually mixed for other diseases).

Additionally, we explored the accuracy of clinical gestalt across different degrees of prediction confidence. To our knowledge, this is the first time this type of analysis has been done for subjective clinical predictions, and it proved to be quite insightful. The fact that the accuracy of clinical gestalt correlates with confidence in the prediction suggests that, while there is value in subjective predictions, it is also important to ask ourselves how confident we are in them. Interestingly, our results suggest that clinical gestalt predictions are particularly prone to positive bias: clinicians were more likely to correctly predict which patients would survive than which patients would die (figure 2 and online supplemental figure 1). This is consistent with other studies that have found that clinicians tend to overestimate the effectiveness of their treatments and, therefore, patient survival.16

### Supplementary data

Supplementary data (jim-2021-002037supp002.pdf).

Scores are expected to lose at least some of their predictive accuracy when used outside the context in which they were developed, and local adaptations have been reported to improve or help retain their predictive performance. In this work, we evaluated whether updating the likelihood ratio values used in the calculation of the LOW-HARM score with data from Mexican patients could mitigate this loss of accuracy. However, although the AUC of the LOW-HARM V.2 score was slightly larger than that of the original LOW-HARM score, the difference was not statistically significant, nor was the updated score significantly more accurate than clinical gestalt predictions. This highlights the fact that scores are far from being final or perfect tools, even after local adjustments.
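To make the local adjustment concrete, the sketch below shows one way the likelihood ratios listed in the Methods can be combined into a posttest probability of death: a simple sequential Bayes update in which the prior odds are multiplied by LR+ for each predictor that is present and by LR− for each that is absent, taking our center's ~0.3 in-hospital mortality as the pretest probability. This is an illustrative assumption; the exact weighting and 0–100 scaling of the published LOW-HARM score are described in the original validation study.4

```r
# Likelihood ratios (LR+/LR-) from the Methods, derived from Mexican cohorts
lr <- data.frame(
  predictor = c("spo2_lt_88", "hypertension", "troponin_gt_20", "cpk_gt_223",
                "wbc_gt_10", "lymph_lt_800", "creatinine_gt_1_5"),
  lr_pos    = c(2.61, 2.37, 15.6, 2.37, 5.6, 2.24, 19.1),
  lr_neg    = c(0.07, 0.65, 0.62, 0.88, 0.48, 0.48, 0.60)
)

# Sequential Bayes update: prior odds are multiplied by LR+ for every predictor
# that is present and by LR- for every predictor that is absent, then converted
# back to a probability. The pretest probability defaults to the ~0.3
# in-hospital mortality observed at our center.
posttest_prob <- function(present, pretest = 0.3) {
  odds <- pretest / (1 - pretest)
  for (i in seq_len(nrow(lr))) {
    odds <- odds * if (present[[lr$predictor[i]]]) lr$lr_pos[i] else lr$lr_neg[i]
  }
  odds / (1 + odds)
}

# Example: hypoxemic, hypertensive patient with lymphopenia and no other findings
patient <- c(spo2_lt_88 = TRUE, hypertension = TRUE, troponin_gt_20 = FALSE,
             cpk_gt_223 = FALSE, wbc_gt_10 = FALSE, lymph_lt_800 = TRUE,
             creatinine_gt_1_5 = FALSE)
posttest_prob(patient)
```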
### Limitations

Although some of the results of this study may prove insightful for other clinical settings and challenges, they cannot be widely extrapolated because of the local setting of our work and the highly heterogeneous nature of COVID-19 healthcare systems. It is also likely that emerging variants, vaccination, and the seasonality of contagion waves23 will continue to influence the predictive capabilities of all predictive models. Additionally, our sample size was calculated to detect non-inferiority between prediction methods.

On the other hand, it is possible that, despite comparable experience with COVID-19, overall clinical experience still influences the accuracy of clinical gestalt predictions. We were not able to account for this source of variability because of how our hospital's patient admission workflows are designed (senior attendings usually meet patients after the initial work-up is complete, so their predictions would also be informed by the success or failure of the early therapeutic interventions). Furthermore, individual consistency cannot be accurately estimated because, on average, each clinician evaluated only seven patients. Nonetheless, 87.5% of the residents (21 of 24) provided at least one prediction per quartile, and we did not observe any of them consistently registering high or low clinical gestalt scores. Specifically designed studies are needed to better investigate the relationship between subjective confidence, accuracy, and positive bias.

Clinical predictions will always be challenging because all medical fields are in constant development and clinical challenges are highly dynamic phenomena. All scores had lower predictive accuracy than in their original publications, and none showed better predictive performance than clinical gestalt predictions; however, scores could still outperform clinical gestalt when confidence in clinical gestalt predictions is perceived to be low. These results remind us that prognostic scores require constant re-evaluation even after being properly validated and adjusted, and that no score can or should ever substitute for careful medical assessment and thoughtful clinical judgment. Despite its inherent subjectivity, clinical gestalt immediately incorporates context-specific factors and, in contrast to statistically derived models, is likely to improve its accuracy over time.

## Data availability statement

Data are available upon reasonable request. Anonymized data for research purposes will be available upon request to the corresponding author.

## Ethics statements

### Patient consent for publication

Not required.

### Ethics approval

This study was approved by the Ethics Committee for Research on Humans of the National Institute of Medical Sciences and Nutrition Salvador Zubirán on August 25, 2020 (reg no DMC-3369-20-20-1-1a).

## Acknowledgments

All authors thank the National Institute of Medical Sciences and Nutrition Salvador Zubirán Emergency Department staff for their invaluable support.

## Footnotes

* Contributors AS-M led, designed, collected, and analyzed research data. BAM-G, SCdO, and JG-M designed and analyzed research data. EMR, DACV, HT-E, JPGC, FED-F, JME-V, and JA-M collected and analyzed research data. All authors contributed to drafting this manuscript and approved this version.
* Funding BAM-G is currently supported by the patronage of the National Institute of Medical Sciences and Nutrition Salvador Zubirán, by the Foundation for Health and Education Dr Salvador Zubirán (FunSaEd), and by the CHRISTUS Excellence and Innovation Center.
* Competing interests None declared.
* Provenance and peer review Not commissioned; externally peer reviewed.

## References

1. Rigoni M, Torri E, Nollo G, et al. NEWS2 is a valuable tool for appropriate clinical management of COVID-19 patients. Eur J Intern Med 2021;85:118–20. doi:10.1016/j.ejim.2020.11.020
2. Liu S, Yao N, Qiu Y, et al. Predictive performance of SOFA and qSOFA for in-hospital mortality in severe novel coronavirus disease. Am J Emerg Med 2020;38:2074–80. doi:10.1016/j.ajem.2020.07.019
3. Ma A, Cheng J, Yang J, et al. Neutrophil-to-lymphocyte ratio as a predictive biomarker for moderate-severe ARDS in severe COVID-19 patients. Crit Care 2020;24. doi:10.1186/s13054-020-03007-0
4. Soto-Mota A, Marfil-Garza BA, Martínez Rodríguez E, et al. The LOW-HARM score for predicting mortality in patients diagnosed with COVID-19: a multicentric validation study. J Am Coll Emerg Physicians Open 2020;1:1436–43. doi:10.1002/emp2.12259
5. Bello-Chavolla OY, Antonio-Villa NE, Ortiz-Brizuela E, et al. Validation and repurposing of the MSL-COVID-19 score for prediction of severe COVID-19 using simple clinical predictors in a triage setting: the Nutri-CoV score. PLoS One 2020;15:e0244051. doi:10.1371/journal.pone.0244051
6. White DB, Lo B. A framework for rationing ventilators and critical care beds during the COVID-19 pandemic. JAMA 2020;323:1773. doi:10.1001/jama.2020.5046
7. OECD/European Union. How resilient have European health systems been to the COVID-19 crisis? In: Health at a glance: Europe 2020: state of health in the EU cycle, 2020:23–81.
8. Olivas-Martínez A, Cárdenas-Fragoso JL, Jiménez JV, et al. In-hospital mortality from severe COVID-19 in a tertiary care center in Mexico City; causes of death, risk factors and the impact of hospital saturation. PLoS One 2021;16:e0245772. doi:10.1371/journal.pone.0245772
9. Najera H, Ortega-Avila AG. Health and institutional risk factors of COVID-19 mortality in Mexico, 2020. Am J Prev Med 2021;60:471–7. doi:10.1016/j.amepre.2020.10.015
10. Khan Z, Hulme J, Sherwood N. An assessment of the validity of SOFA score based triage in H1N1 critically ill patients during an influenza pandemic. Anaesthesia 2009;64:1283–8. doi:10.1111/j.1365-2044.2009.06135.x
11. Fronczek J, Polok K, Devereaux PJ, et al. External validation of the revised cardiac risk index and national surgical quality improvement program myocardial infarction and cardiac arrest calculator in noncardiac vascular surgery. Br J Anaesth 2019;123:421–9. doi:10.1016/j.bja.2019.05.029
12. Carr E, Bendayan R, Bean D, et al. Evaluation and improvement of the National Early Warning Score (NEWS2) for COVID-19: a multi-hospital study. BMC Med 2021;19:23. doi:10.1186/s12916-020-01893-3
13. Ros MM, van der Zaag-Loonen HJ, Hofhuis JGM, et al. Survival prediction in severely ill patients study—the prediction of survival in critically ill patients by ICU physicians. Crit Care Explor 2021;3:e0317. doi:10.1097/CCE.0000000000000317
14. Donzé J, Rodondi N, Waeber G, et al. Scores to predict major bleeding risk during oral anticoagulation therapy: a prospective validation study. Am J Med 2012;125:1095–102. doi:10.1016/j.amjmed.2012.04.005
15. Nazerian P, Morello F, Prota A, et al. Diagnostic accuracy of physician's gestalt in suspected COVID-19: prospective bicentric study. Acad Emerg Med 2021;28:404–11. doi:10.1111/acem.14232
16. Cheon S, Agarwal A, Popovic M, et al. The accuracy of clinicians' predictions of survival in advanced cancer: a review. Ann Palliat Med 2016;5:22–9. doi:10.3978/j.issn.2224-5820.2015.08.04
17. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet 2020;395:1054–62. doi:10.1016/S0140-6736(20)30566-3
18. Yan L, Zhang H-T, Goncalves J, et al. An interpretable mortality prediction model for COVID-19 patients. Nat Mach Intell 2020;2:283–8. doi:10.1038/s42256-020-0180-7
19. Liu J, Liu Y, Xiang P. Neutrophil-to-lymphocyte ratio predicts severe illness patients with 2019 novel coronavirus in the early stage. medRxiv 2020. doi:10.1101/2020.02.10.20021584
20. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 1988;44:837. doi:10.2307/2531595
21. Goksuluk D, Korkmaz S, Zararsiz G, et al. easyROC: an interactive web-tool for ROC curve analysis using R language environment. R J 2016;8:213–30. doi:10.32614/RJ-2016-042
22. Obuchowski NA. ROC analysis. AJR Am J Roentgenol 2005;184:364–72. doi:10.2214/ajr.184.2.01840364
23. Birkmeyer JD, Barnato A, Birkmeyer N, et al. The impact of the COVID-19 pandemic on hospital admissions in the United States. Health Aff 2020;39:2010–7. doi:10.1377/hlthaff.2020.00980