The models used to guide difficult medical decisions during the spread of the Covid-19 virus may rest on weak and overly optimistic evidence from studies that are biased and unreliable.
The research, published in The BMJ, examined multiple studies of models for the virus and found that many were poorly reported, carried a high risk of bias, and made recommendations that would be questionable if put into practice.
Viral nucleic acid testing and chest computed tomography (CT) are the current standard methods for diagnosing Covid-19, but both are time-consuming. A team of international experts from Maastricht University, KU Leuven, University Medical Center Utrecht, Oxford University, Medical University of Vienna, Keele University, and Leiden University, in collaboration with the Cochrane Prognosis Methods group, therefore set out to review and critically appraise prediction models for the diagnosis and prognosis of Covid-19 infection from published and preprint reports.
Quality of reporting in the studies varied substantially
They focused on 27 studies that described 31 prediction models. These models aimed either to identify individuals in the general population at high risk of Covid-19, to detect existing Covid-19 infection, or to predict future complications in individuals already diagnosed.
The vast majority (25) of the studies used data on Covid-19 cases from China, one used data on Italian cases, and one used international data (from the US, UK, China, and elsewhere). Collectively, the data were gathered between 8 December 2019 and 15 March 2020.
The researchers’ analysis identified three models to predict hospital admission from pneumonia and other events (as a proxy for Covid-19 pneumonia) in the general population, as well as 18 diagnostic models to detect Covid-19 infection in symptomatic individuals, 13 of which were machine learning models using CT results.
In addition, they identified 10 prognostic models for predicting mortality risk, a person’s progression to a severe state, or length of hospital stay.
The researchers found that all the studies they analysed were rated as having a high risk of bias, mostly because:
- their selection of control patients was not representative
- they excluded patients who were still ill at the end of the study
- their statistical analyses were poor.
The quality of reporting in the studies varied substantially: a description of the study population and the intended use of the model was absent from almost all reports, and calibration of predictions (how well predicted risks match the outcomes actually observed) was rarely assessed.
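Calibration is typically assessed by grouping patients into bins of predicted risk and comparing the mean predicted risk in each bin with the fraction who actually experienced the outcome. A minimal sketch of that check, using simulated data rather than anything from the review itself:

```python
import numpy as np

# Toy example of a calibration check (illustrative data only,
# not from the BMJ review). Calibration asks: among patients
# given a predicted risk of ~p, does a fraction ~p actually
# experience the outcome?
rng = np.random.default_rng(0)
predicted = rng.uniform(0.05, 0.95, size=2000)  # model's predicted risks
observed = rng.uniform(size=2000) < predicted   # simulated outcomes (well calibrated by construction)

# Bin predictions into deciles of risk and compare predicted vs observed.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(predicted, bins) - 1
for b in range(10):
    mask = idx == b
    if mask.any():
        print(f"risk {bins[b]:.1f}-{bins[b + 1]:.1f}: "
              f"mean predicted {predicted[mask].mean():.2f}, "
              f"observed rate {observed[mask].mean():.2f}")
```

In a well-calibrated model the two columns track each other; a model that is optimistic, as the review warns many of these are, predicts systematically higher risks than are observed.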
Prediction models for Covid-19 were produced quickly to support urgent medical decision making, the researchers said, and they acknowledge that clinical data from Covid-19 patients are still scarce and that all the studies were carried out under severe time constraints.
However, they conclude: “Our review indicates proposed models are poorly reported and at high risk of bias. Thus, their reported performance is likely optimistic and using them to support medical decision making is not advised.”
This raised concern that the “models may be flawed and perform poorly when applied in practice, such that their predictions may be unreliable”.
They recommend immediate sharing of individual participant data from Covid-19 studies to support collaborative efforts to build “more rigorously developed prediction models” and to evaluate existing ones.
“We also stress the need to follow methodological guidance when developing and validating prediction models, as unreliable predictions may cause more harm than benefit when used to guide clinical decisions,” they conclude.