Prospective predictive performance comparison between clinical gestalt and validated COVID-19 mortality scores

Adrian Soto-Mota, Braulio Alejandro Marfil-Garza, Santiago Castiello-de Obeso, Erick Jose Martinez Rodriguez, Daniel Alberto Carrillo Vazquez, Hiram Tadeo-Espinoza, Jessica Paola Guerrero Cabrera, Francisco Eduardo Dardon-Fierro, Juan Manuel Escobar-Valderrama, Jorge Alanis-Mendizabal, Juan Gutierrez-Mejia
Journal of Investigative Medicine 2022;70(2):415–420. DOI: 10.1136/jim-2021-002037. Published 25 January 2022
Author affiliations: Adrian Soto-Mota (1, 2); Braulio Alejandro Marfil-Garza (2, 3); Santiago Castiello-de Obeso (4, 5); Erick Jose Martinez Rodriguez (2); Daniel Alberto Carrillo Vazquez (2); Hiram Tadeo-Espinoza (2); Jessica Paola Guerrero Cabrera (2); Francisco Eduardo Dardon-Fierro (2); Juan Manuel Escobar-Valderrama (2); Jorge Alanis-Mendizabal (2); Juan Gutierrez-Mejia (2).

1 Metabolic Diseases Research Unit, National Institute of Medical Sciences and Nutrition Salvador Zubirán, Mexico City, Mexico
2 Internal Medicine, National Institute of Medical Sciences and Nutrition Salvador Zubirán, Mexico City, Mexico
3 CHRISTUS-LatAm Hub – Excellence and Innovation Center, Monterrey, Mexico
4 Experimental Psychology, University of Oxford, Oxford, UK
5 Universidad de Guadalajara, Guadalajara, Jalisco, Mexico

Abstract

Most COVID-19 mortality scores were developed at the beginning of the pandemic, and clinicians now have more experience and evidence-based interventions. Therefore, we hypothesized that the predictive performance of COVID-19 mortality scores is now lower than originally reported. We aimed to prospectively evaluate the current predictive accuracy of six COVID-19 scores and to compare it with the accuracy of clinical gestalt predictions. 200 patients with COVID-19 were enrolled in a tertiary hospital in Mexico City between September and December 2020. The area under the curve (AUC) of the LOW-HARM, qSOFA, MSL-COVID-19, NUTRI-CoV, and NEWS2 scores and of the neutrophil to lymphocyte ratio, as well as the AUC of clinical gestalt predictions of death (expressed as a percentage), were determined. In total, 166 patients (106 men and 60 women aged 56±9 years) with confirmed COVID-19 were included in the analysis. The AUC of all scores was significantly lower than originally reported: LOW-HARM 0.76 (95% CI 0.69 to 0.84) vs 0.96 (95% CI 0.94 to 0.98), qSOFA 0.61 (95% CI 0.53 to 0.69) vs 0.74 (95% CI 0.65 to 0.81), MSL-COVID-19 0.64 (95% CI 0.55 to 0.73) vs 0.72 (95% CI 0.69 to 0.75), NUTRI-CoV 0.60 (95% CI 0.51 to 0.69) vs 0.79 (95% CI 0.76 to 0.82), NEWS2 0.65 (95% CI 0.56 to 0.75) vs 0.84 (95% CI 0.79 to 0.90), and neutrophil to lymphocyte ratio 0.65 (95% CI 0.57 to 0.73) vs 0.74 (95% CI 0.62 to 0.85). Clinical gestalt predictions were non-inferior to the mortality scores, with an AUC of 0.68 (95% CI 0.59 to 0.77). Adjusting scores with locally derived likelihood ratios did not improve their performance; however, some scores outperformed clinical gestalt predictions when clinicians' confidence in their prediction was ≤80%. Despite its subjective nature, clinical gestalt has relevant advantages in predicting COVID-19 clinical outcomes. The need for and the performance of most COVID-19 mortality scores should be re-evaluated regularly.

Significance of this study

What is already known about this subject?

  • Multiple scores have been designed or repurposed to predict survival in patients with COVID-19; however, all of them were designed or validated during the early days of the pandemic and COVID-19 healthcare has greatly improved since then.

  • Clinical gestalt has been proven to accurately predict survival in other clinical contexts.

What are the new findings?

  • The observed area under the curve of all scores was significantly lower than originally reported.

  • No score was significantly better than clinical gestalt predictions.

How might these results change the focus of research or clinical practice?

  • The need for and the performance of most COVID-19 mortality scores should be re-evaluated regularly.

Introduction

Background

Many prediction models have been developed for COVID-19,1–5 and their applications in healthcare range from bedside counseling to triage systems.6 However, most have been developed within specific clinical contexts1 2 or validated with data from the early months of the pandemic.4 5 Since then, health systems have implemented protocols and adaptations to cope with surges in hospitalization rates,7 and clinicians now have more knowledge and experience in managing these patients. Additionally, other non-biological factors, such as critical care availability, have been found to strongly influence the prognosis of patients with COVID-19.8 9 These frequently intangible factors (eg, the experience of the staff with specific healthcare tasks) affect prognosis but are ignored by mortality scores.

Prediction models are context-sensitive10; therefore, to preserve their accuracy, they must be applied in contexts as similar as possible to those from which they were derived. Because healthcare systems and settings differ considerably around the world, there are many examples of scores requiring adjustments or local adaptations.11 12

Predicting is an everyday activity in most medical fields, and in other scenarios clinicians’ subjective predictions have been observed to be as accurate as mathematically derived models.13–15 However, the opposite has been observed as well; for example, clinicians tend to overestimate the long-term survival of oncological patients.16

This work aimed to compare the predictive performance of different mortality prediction models for COVID-19 (some of them in the same hospital in which they were developed) against both their originally reported performance and clinical gestalt predictions.

Methods

Study design

This observational prospective study was carried out in a tertiary hospital in Mexico City, fully dedicated to providing COVID-19 healthcare, between October and December 2020.

Selection of subjects

Data from 200 consecutive hospital admissions (for RT-PCR-confirmed COVID-19 infection) were obtained between October and December 2020. We excluded from the analysis all patients without a documented clinical outcome (eg, patients who had not been discharged at the time of data collection, had been transferred to another hospital, or had been voluntarily discharged). A total of 166 patients were included in the analysis because 34 patients were either transferred to other hospitals or voluntarily discharged. The most frequent criteria for hospital admission were requiring supplemental oxygen to reach an oxygen saturation >90%, respiratory rate >20 breaths/min, need for ventilation (non-invasive or invasive), severity of pneumonia on CT, hemodynamic instability, and impossibility of home isolation.

A total of 24 internal medicine residents, each with more than 6 months of experience in COVID-19 healthcare (all residency programs in Mexico start on March 1 every year), participated in the study. Their median hospital experience was 2 years (IQR 1–3).

Measurements

Clinical gestalt predictions and all necessary data to calculate prognostic scores were obtained at hospital admission from October to December 2020. Internal medicine residents in charge of collecting clinical history, physical examination, and initial imaging and laboratory work-up were asked the following questions once all initial imaging and laboratory reports were available:

  • How likely do you think this patient will die from COVID-19? (as a percentage).

  • How confident are you of that prediction? (as a percentage).

To obtain the earliest and best informed clinical gestalt prediction available, we asked only the resident in charge of each patient’s hospital admission. While it is likely that clinical gestalt scores vary between evaluators, inviting more evaluators would require evaluating the same patient at different times (giving a ‘predictive advantage’ to later scorers who would be able to see if a patient is improving after their initial therapeutic interventions) and would allow predictions with different levels of information (from evaluators who did not spend the same amount of time directly examining the patient).

To test the hypothesis that updating the statistical weights of a score with local data could help preserve its original accuracy, we developed a second version of the LOW-HARM score (LOW-HARM V.2 score) using positive and negative likelihood ratios derived from cohorts of Mexican patients4 8 (instead of only positive likelihood ratios from Chinese patients17 18 as in the original version).

The likelihood ratios (LR+/LR−) used to calculate the LOW-HARM V.2 score were as follows: oxygen saturation <88%=2.61/0.07; previous diagnosis of hypertension=2.37/0.65; elevated troponin (>20 pg/mL)=15.6/0.62; elevated creatine phosphokinase (>223 U/L)=2.37/0.88; leukocyte count >10.0×10⁹ cells/L=5.6/0.48; lymphocyte count <800 cells/µL (<0.8×10⁹ cells/L)=2.24/0.48; and serum creatinine >1.5 mg/dL=19.1/0.6.
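For orientation, the sketch below shows one way such an LR-based update can be computed: it assumes the score chains each item's LR+ (when the finding is present) or LR− (when absent) onto a pre-test probability of death through Bayes' rule in odds form and rescales the post-test probability to 0–100. The function name low_harm_v2, the input format, and the 0.30 pre-test probability (taken from the ~30% in-hospital mortality cited in the sample size rationale) are illustrative assumptions, not the authors' published formula.

```r
# Illustrative sketch only: chain each item's likelihood ratio onto a
# pre-test probability of death via Bayes' rule in odds form.
lrs <- data.frame(
  item   = c("spo2_lt_88", "hypertension", "troponin_gt_20", "cpk_gt_223",
             "wbc_gt_10", "lymph_lt_800", "creatinine_gt_1.5"),
  lr_pos = c(2.61, 2.37, 15.6, 2.37, 5.6, 2.24, 19.1),
  lr_neg = c(0.07, 0.65, 0.62, 0.88, 0.48, 0.48, 0.60)
)

low_harm_v2 <- function(findings, pretest = 0.30) {
  odds <- pretest / (1 - pretest)                 # pre-test odds of death
  for (i in seq_len(nrow(lrs))) {
    present <- isTRUE(findings[[lrs$item[i]]])    # missing items are treated as absent
    odds <- odds * if (present) lrs$lr_pos[i] else lrs$lr_neg[i]
  }
  100 * odds / (1 + odds)                         # post-test probability, 0-100 scale
}

# Example: hypoxemic, hypertensive patient with renal injury, other items absent
low_harm_v2(list(spo2_lt_88 = TRUE, hypertension = TRUE, troponin_gt_20 = FALSE,
                 cpk_gt_223 = FALSE, wbc_gt_10 = FALSE, lymph_lt_800 = FALSE,
                 creatinine_gt_1.5 = TRUE))
```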

All previously validated scores were calculated by the research team.

Outcomes

The primary outcome of this study was the area under the curve (AUC) of each COVID-19 mortality prediction method. To test the hypothesis that the predictive performance of already validated scores declined over time, we chose the LOW-HARM,4 MSL-COVID-19, and NUTRI-CoV5 scores because all three were validated with data from Mexican patients with COVID-19. To rule out that this decline was a phenomenon exclusive to scores developed with Mexican data, we also re-evaluated the accuracy of the NEWS2 score,1 the qSOFA score,2 and the neutrophil to lymphocyte ratio19 in predicting mortality from COVID-19.

To test the hypothesis that scores outperform clinical gestalt predictions when clinicians' confidence is 'low' (at or below the median perceived confidence, ie, ≤80%), we conducted comparative AUC analyses for cases at or below and above this threshold.
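A minimal sketch of this stratified comparison follows, assuming a hypothetical data frame df with one row per patient and columns died (0/1), gestalt_pct (predicted probability of death), low_harm (score value), and confidence (confidence of prediction); the pROC package, used for the ROC analyses described in the next subsection, stands in here for the STATA routine.

```r
# Sketch: compare AUCs within the low-confidence stratum (confidence <= 80%).
library(pROC)

low_conf <- subset(df, confidence <= 80)

roc_gestalt <- roc(low_conf$died, low_conf$gestalt_pct)
roc_lowharm <- roc(low_conf$died, low_conf$low_harm)

auc(roc_gestalt); auc(roc_lowharm)

# DeLong test for the paired AUC difference within this stratum
roc.test(roc_gestalt, roc_lowharm, method = "delong")
```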

Analysis

Clinical and demographic data were analyzed using mean or median (depending on their distribution) and SD or IQR as dispersion measures. Shapiro-Wilk tests were used to assess if variables were normally distributed.

Statistical analyses were performed with R V.4.0.3 (package 'caret' for confusion matrix calculations and package 'pROC' for receiver operating characteristic (ROC) curve analysis) and STATA V.12. AUC differences were analyzed using DeLong's method with the STATA function 'roccomp'.20 A p value of <0.05 was considered statistically significant in all tests. Missing data were handled by mean substitution.
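The R side of this pipeline might look like the sketch below. The data frame df, its column names, and the 65-point LOW-HARM cut-off used for the confusion matrix are illustrative assumptions, and pROC's pairwise roc.test (DeLong) stands in for STATA's omnibus 'roccomp'.

```r
# Sketch of the analysis described above (hypothetical data frame 'df':
# one row per patient, 'died' coded 0/1, one numeric column per prediction tool).
library(pROC)
library(caret)

# Mean substitution for missing numeric values
num <- sapply(df, is.numeric)
df[num] <- lapply(df[num], function(x) replace(x, is.na(x), mean(x, na.rm = TRUE)))

# AUC with 95% CI for each prediction method
tools <- c("low_harm", "low_harm_v2", "qsofa", "msl_covid19",
           "nutri_cov", "news2", "nlr", "gestalt_pct")
rocs  <- setNames(lapply(tools, function(t) roc(df$died, df[[t]])), tools)
lapply(rocs, ci.auc)

# Pairwise DeLong comparisons against clinical gestalt
lapply(rocs[setdiff(tools, "gestalt_pct")],
       function(r) roc.test(rocs$gestalt_pct, r, method = "delong"))

# Confusion matrix at an illustrative cut-off (eg, LOW-HARM >= 65)
pred <- factor(ifelse(df$low_harm >= 65, "died", "survived"), levels = c("survived", "died"))
obs  <- factor(ifelse(df$died == 1, "died", "survived"),      levels = c("survived", "died"))
confusionMatrix(pred, obs, positive = "died")
```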

Sample size rationale

We calculated the sample size using 'easyROC',21 an open R-based web tool for estimating sample sizes for direct and non-inferiority AUC comparisons using Obuchowski's method.22 To detect non-inferiority with a maximal AUC difference of 0.05 from the reported AUC of the LOW-HARM score (0.96, 95% CI 0.94 to 0.98), with a case allocation ratio of 0.7 (the mortality at our center is ~0.3), a power of 0.8, and a significance level of 0.05, 159 patients would be needed. To detect a difference >0.1 between AUCs, with the remaining parameters held constant, 99 patients would be needed. To allow for a patient loss rate of ~25%, we obtained data from 200 consecutive hospital admissions.

Patient and public involvement

Neither patients nor the public were involved in the design, conduct, reporting, or dissemination plans of our research.

Results

Characteristics of study subjects

We included 166 patients in our study. Of these, 47 (28.3%) died, while 119 (71.7%) survived. The general demographics and clinical characteristics of these groups are shown in table 1. As expected, decreased peripheral oxygen saturation, ventilatory support, cardiac injury, renal injury, leukocytosis, and lymphopenia were more prevalent in the group of patients who died during their hospitalization.

Table 1. Patient demographics and clinical data

Main results

Table 2 shows the median score and IQR for each prediction tool, together with the originally reported AUC and the AUC observed in our data. As expected, the difference between groups was more pronounced for tools based on a 100-point scale (clinical gestalt and the LOW-HARM scores).

Table 2. Distribution and accuracy of selected mortality prediction tools

Performance characteristics of selected predictive models and AUC comparisons

Figure 1 shows the performance characteristics of the selected predictive models. Overall, we found a statistically significant difference between predictive models (p=0.002). However, we did not find statistically significant differences between clinical gestalt and other prediction tools.

Figure 1. AUC comparison of selected mortality prediction tools. AUC, area under the curve.

As expected, we found that the confidence of prediction increased in cases in which the predicted probability of death was clearly high or clearly low (figure 2).

Figure 2. Clinical gestalt prediction and confidence of prediction.

We found a moderate to strong, bimodal correlation between the confidence of prediction and the predicted probability of death: Pearson's r=0.60 (p<0.0001) at <50% predicted probability of death and Pearson's r=0.50 (p=0.0002) at >50% predicted probability of death.
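A minimal sketch of this split correlation is shown below, again assuming the hypothetical columns gestalt_pct and confidence; how predictions of exactly 50% were handled is not stated in the text.

```r
# Pearson correlation (cor.test default) between confidence of prediction and
# predicted probability of death, computed separately on either side of 50%
with(subset(df, gestalt_pct < 50), cor.test(gestalt_pct, confidence))
with(subset(df, gestalt_pct > 50), cor.test(gestalt_pct, confidence))
```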

We further explored the performance characteristics of the selected predictive models in specific contexts (online supplemental appendix table 1). Figure 3 shows the results of the analyses restricted to cases in which the certainty of prediction was ≤80% or >80%. Overall, we found a statistically significant difference between predictive models in both strata. In cases in which the confidence of prediction was ≤80%, both versions of the LOW-HARM score showed a larger AUC compared with clinical gestalt (figure 3B and online supplemental appendix table 1).

Supplementary data

[jim-2021-002037supp001.pdf]
Figure 3. AUC comparison of selected mortality prediction tools according to confidence of prediction. (A) AUC comparison in cases where the confidence of prediction was >80%. (B) AUC comparison in cases where the confidence of prediction was ≤80%. AUC, area under the curve.

An additional analysis restricted to cases in which the certainty of prediction was ≤80% and the predicted probability of death was ≤30% (ie, the median value for all cases) found a statistically significant difference between predictive models (p=0.0005). Similarly, individual comparisons showed statistically significantly larger AUCs for both versions of the LOW-HARM score compared with clinical gestalt (online supplemental appendix table 1).

Discussion

Outcome prediction plays an important role in everyday clinical practice. This work highlights the inherent limitations of statistically derived scores and some of the advantages of clinical gestalt predictions. In other scenarios where predictive scores are frequently used, more experienced clinicians can always weigh in with their sometimes subjective, yet valuable, insight. With the COVID-19 pandemic, however, clinicians of all levels of training started their learning curve at the same time. In this study, we had the unique opportunity to re-evaluate more than one score (two of them in the same setting and for the same purpose for which they were designed) while testing the accuracy of clinical gestalt in a group of clinicians who started their learning curve for managing a disease at the same time (for other diseases, experience and training within healthcare teams are usually mixed).

Additionally, we explored the accuracy of clinical gestalt across different degrees of prediction confidence. To our knowledge, this is the first time this type of analysis has been done for subjective clinical predictions, and it proved insightful. The fact that the accuracy of clinical gestalt correlates with confidence in prediction suggests that while there is value in subjective predictions, it is also important to ask ourselves how confident we are in them. Interestingly, our results suggest that clinical gestalt predictions are particularly prone to positive bias: clinicians were more likely to correctly predict which patients would survive than which patients would die (figure 2 and online supplemental figure 1). This is consistent with other studies that have found that clinicians tend to overestimate the effectiveness of their treatments and therefore patient survival.16

Supplementary data

[jim-2021-002037supp002.pdf]

Because scores are expected to lose at least some of their predictive accuracy when used outside the context in which they were developed, local adaptations have been reported to improve or help retain their predictive performance. In this work, we evaluated whether updating the likelihood ratio values used to calculate the LOW-HARM score with data from Mexican patients could mitigate its loss of accuracy. However, although the AUC of the LOW-HARM V.2 score was slightly larger than that of the original LOW-HARM score, the difference was not statistically significant, nor was the updated score significantly more accurate than clinical gestalt predictions. This highlights the fact that scores are far from final or perfect tools even after local adjustments are implemented.

Limitations

Although some of the results of this study may prove insightful for other clinical settings and challenges, they cannot be widely extrapolated because of the local setting of our work and the highly heterogeneous nature of COVID-19 healthcare systems. It is also likely that emerging variants, vaccination, or the seasonality of contagion waves23 will continue to influence the predictive capabilities of all predictive models. Additionally, our sample size was calculated to detect non-inferiority between prediction methods. On the other hand, it is possible that, despite comparable experience with COVID-19, overall clinical experience still influences the accuracy of clinical gestalt predictions. We were not able to account for this source of variability because of how our hospital's patient admission workflows are designed (senior attendings usually meet patients after their initial work-up is complete, so their predictions would also be informed by the success or failure of early therapeutic interventions).

Furthermore, individual consistency cannot be accurately estimated because, on average, each clinician evaluated only seven patients. Nonetheless, 87.5% of the residents (21 of 24) provided at least one prediction per quartile, and we did not observe any of them consistently registering high or low clinical gestalt scores.

Specifically designed studies are needed to better investigate the relationship between subjective confidence, accuracy, and positive bias. Clinical predictions will always be challenging because all medical fields are in constant development and clinical challenges are highly dynamic phenomena.

All scores had lower predictive accuracy than in their original publications, and none showed better predictive performance than clinical gestalt predictions; however, scores could still outperform clinical gestalt when confidence in clinical gestalt predictions is perceived to be low. These results remind us that prognostic scores require constant re-evaluation even after being properly validated and adjusted, and that no score can or should ever substitute for careful medical assessment and thoughtful clinical judgment. Despite its inherent subjectivity, clinical gestalt immediately incorporates context-specific factors and, in contrast to statistically derived models, is likely to improve its accuracy over time.

Data availability statement

Data are available upon reasonable request. Anonymized data for research purposes will be available upon request to the corresponding author.

Ethics statements

Patient consent for publication

Not required.

Ethics approval

This study was approved by the Ethics Committee for Research on Humans of the National Institute of Medical Sciences and Nutrition Salvador Zubirán on August 25, 2020 (reg no DMC‐3369‐20‐20‐1-1a).

Acknowledgments

All authors wish to thank the invaluable support of the National Institute of Medical Sciences and Nutrition Salvador Zubirán Emergency Department staff.

Footnotes

  • Contributors AS-M led and designed the study, and collected and analyzed research data. BAM-G, SCdO, and JG-M designed the study and analyzed research data. EMR, DACV, HT-E, JPGC, FED-F, JME-V, and JA-M collected and analyzed research data. All authors contributed to writing this manuscript and approved this version.

  • Funding BAMG is currently supported by the patronage of the National Institute of Medical Sciences and Nutrition Salvador Zubirán and by the Foundation for Health and Education Dr Salvador Zubirán (FunSaEd), and the CHRISTUS Excellence and Innovation Center.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

This article is made freely available for personal use in accordance with BMJ’s website terms and conditions for the duration of the covid-19 pandemic or until otherwise determined by BMJ. You may use, download and print the article for any lawful, non-commercial purpose (including text and data mining) provided that all copyright notices and trade marks are retained.

https://bmj.com/coronavirus/usage

References

  1. Rigoni M, Torri E, Nollo G, et al. NEWS2 is a valuable tool for appropriate clinical management of COVID-19 patients. Eur J Intern Med 2021;85:118–20. doi:10.1016/j.ejim.2020.11.020
  2. Liu S, Yao N, Qiu Y, et al. Predictive performance of SOFA and qSOFA for in-hospital mortality in severe novel coronavirus disease. Am J Emerg Med 2020;38:2074–80. doi:10.1016/j.ajem.2020.07.019
  3. Ma A, Cheng J, Yang J, et al. Neutrophil-to-lymphocyte ratio as a predictive biomarker for moderate-severe ARDS in severe COVID-19 patients. Crit Care 2020;24. doi:10.1186/s13054-020-03007-0
  4. Soto-Mota A, Marfil-Garza BA, Martínez Rodríguez E, et al. The LOW-HARM score for predicting mortality in patients diagnosed with COVID-19: a multicentric validation study. J Am Coll Emerg Physicians Open 2020;1:1436–43. doi:10.1002/emp2.12259
  5. Bello-Chavolla OY, Antonio-Villa NE, Ortiz-Brizuela E, et al. Validation and repurposing of the MSL-COVID-19 score for prediction of severe COVID-19 using simple clinical predictors in a triage setting: the Nutri-CoV score. PLoS One 2020;15:e0244051. doi:10.1371/journal.pone.0244051
  6. White DB, Lo B. A framework for rationing ventilators and critical care beds during the COVID-19 pandemic. JAMA 2020;323:1773. doi:10.1001/jama.2020.5046
  7. OECD/European Union. How resilient have European health systems been to the COVID-19 crisis? In: Health at a glance: Europe 2020: state of health in the EU cycle, 2020: 23–81.
  8. Olivas-Martínez A, Cárdenas-Fragoso JL, Jiménez JV, et al. In-hospital mortality from severe COVID-19 in a tertiary care center in Mexico City; causes of death, risk factors and the impact of hospital saturation. PLoS One 2021;16:e0245772. doi:10.1371/journal.pone.0245772
  9. Najera H, Ortega-Avila AG. Health and institutional risk factors of COVID-19 mortality in Mexico, 2020. Am J Prev Med 2021;60:471–7. doi:10.1016/j.amepre.2020.10.015
  10. Khan Z, Hulme J, Sherwood N. An assessment of the validity of SOFA score based triage in H1N1 critically ill patients during an influenza pandemic. Anaesthesia 2009;64:1283–8. doi:10.1111/j.1365-2044.2009.06135.x
  11. Fronczek J, Polok K, Devereaux PJ, et al. External validation of the revised cardiac risk index and National Surgical Quality Improvement Program myocardial infarction and cardiac arrest calculator in noncardiac vascular surgery. Br J Anaesth 2019;123:421–9. doi:10.1016/j.bja.2019.05.029
  12. Carr E, Bendayan R, Bean D, et al. Evaluation and improvement of the National Early Warning Score (NEWS2) for COVID-19: a multi-hospital study. BMC Med 2021;19:23. doi:10.1186/s12916-020-01893-3
  13. Ros MM, van der Zaag-Loonen HJ, Hofhuis JGM, et al. Survival prediction in severely ill patients study—the prediction of survival in critically ill patients by ICU physicians. Crit Care Explor 2021;3:e0317. doi:10.1097/CCE.0000000000000317
  14. Donzé J, Rodondi N, Waeber G, et al. Scores to predict major bleeding risk during oral anticoagulation therapy: a prospective validation study. Am J Med 2012;125:1095–102. doi:10.1016/j.amjmed.2012.04.005
  15. Nazerian P, Morello F, Prota A, et al. Diagnostic accuracy of physician's gestalt in suspected COVID-19: prospective bicentric study. Acad Emerg Med 2021;28:404–11. doi:10.1111/acem.14232
  16. Cheon S, Agarwal A, Popovic M, et al. The accuracy of clinicians' predictions of survival in advanced cancer: a review. Ann Palliat Med 2016;5:22–9. doi:10.3978/j.issn.2224-5820.2015.08.04
  17. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet 2020;395:1054–62. doi:10.1016/S0140-6736(20)30566-3
  18. Yan L, Zhang H-T, Goncalves J, et al. An interpretable mortality prediction model for COVID-19 patients. Nat Mach Intell 2020;2:283–8. doi:10.1038/s42256-020-0180-7
  19. Liu J, Liu Y, Xiang P. Neutrophil-to-lymphocyte ratio predicts severe illness patients with 2019 novel coronavirus in the early stage. medRxiv 2020. doi:10.1101/2020.02.10.20021584
  20. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 1988;44:837. doi:10.2307/2531595
  21. Goksuluk D, Korkmaz S, Zararsiz G, et al. easyROC: an interactive web-tool for ROC curve analysis using R language environment. R J 2016;8:213–30. doi:10.32614/RJ-2016-042
  22. Obuchowski NA. ROC analysis. AJR Am J Roentgenol 2005;184:364–72. doi:10.2214/ajr.184.2.01840364
  23. Birkmeyer JD, Barnato A, Birkmeyer N, et al. The impact of the COVID-19 pandemic on hospital admissions in the United States. Health Aff 2020;39:2010–7. doi:10.1377/hlthaff.2020.00980