There are currently more than one hundred thousand medical apps, and it is not clear to users or healthcare professionals which of the underlying prediction models and apps are reliable. The PROBAST checklist, designed to address this, was published on January 1, 2019, in the medical journal Annals of Internal Medicine.
Many medical computational models predict the likelihood of illness and death based on characteristics such as age, gender, symptoms and test results. For example, one model estimates the probability that a pregnant woman will suffer complications during her pregnancy; another calculates the probability that a forty-year-old will develop diabetes within ten years, or that a man with prostate cancer will die within two years with or without treatment. Healthcare professionals then use these probabilities to give lifestyle and nutritional advice or to make treatment decisions. These prediction models also form the basis of the tens of thousands of medical apps and websites that are accessible to everyone on the internet.
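To illustrate how such a model turns patient characteristics into a probability, here is a minimal sketch of a logistic-regression-style risk calculation. The coefficients and risk factors below are invented for demonstration and do not come from any published or validated model:

```python
import math

def predict_risk(age, bmi, smoker, coefficients):
    """Toy risk model: combine characteristics into a score,
    then map the score to a probability with the logistic function."""
    intercept, b_age, b_bmi, b_smoker = coefficients
    score = intercept + b_age * age + b_bmi * bmi + b_smoker * smoker
    return 1.0 / (1.0 + math.exp(-score))

# Invented coefficients for illustration only -- NOT a validated model.
coeffs = (-7.0, 0.06, 0.08, 0.7)

risk = predict_risk(age=40, bmi=27, smoker=1, coefficients=coeffs)
print(f"Estimated risk: {risk:.1%}")
```

A checklist like PROBAST asks, in effect, whether the data behind such coefficients were collected and analyzed well enough for the resulting probabilities to be trusted.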
"The past decades have seen a proliferation of these types of clinical prediction models and medical apps. Healthcare professionals, patients and citizens can no longer see the forest for the trees," says Prof. Dr Carl Moons of the Julius Center, one of the initiators of the checklist. "There are an estimated one hundred thousand plus medical prediction models out there. We recently showed that there are more than 350 models for predicting the incidence of cardiovascular disease in healthy citizens alone. And this is a conservative estimate; there are probably many more. Healthcare professionals have no idea which model is most suitable in which situation."
The new PROBAST checklist consists of twenty items covering various aspects: the minimum number of subjects studied, whether the data were adequately defined, measured and collected, and whether they were properly collated, analyzed with appropriate statistical techniques and reported. "All of this must be right and done in a scientifically responsible manner. For the majority of the tens of thousands of models out there, we simply don't know whether it was. This checklist allows researchers to properly check a published or used prediction model, reproduce it, and better detect any fraud in the report. But more importantly, it enables them to better assess how a prediction model can be used in the field."
Poorly developed and tested prediction models can be detrimental to patients and citizens, says Moons. "They can lead to patients and their families being misinformed about, for example, the course of their illness, to the misprescription of medicines or, on the contrary, to a wrong decision not to treat a patient. The likelihood of cardiovascular disease within ten years, for example, determines whether you should give patients long-term cholesterol- or blood-pressure-lowering agents. That probability is estimated by a clinical prediction model, so the model must be accurate. This is especially true when prediction models are incorporated into medical apps or websites that are accessible to everyone; users should be able to safely assume that these models are correct. And that's something we often don't know now."
When researchers, users, healthcare professionals and medical guideline developers start using this checklist, they will be able to ascertain the usability of prediction models more quickly. Moons: "It will accelerate the introduction and acceptance of good models and weed out poor or poorly substantiated prediction models. This is desperately needed, because we are seeing these prediction models appear more and more indiscriminately on the internet or in medical apps, after which they are available to everyone. Such checklists already exist for publications on, for example, drug research and diagnostic tests." For more information, go to the PROBAST website (www.probast.org).