Challenge #7: Implementing virtual clinical trials with multiscale models to predict outcomes
Question from Gary An: Heterogeneity is a hallmark of clinical populations; do you think there are types/classes of approaches to accounting for that heterogeneity when trying to represent clinical populations? It seems like there are different approaches depending on where one sits on the continuum of translation.
Challenge #10: Predictive multiscale models that strongly incorporate Uncertainty Quantification
Question from Yaling Liu: In some cases there are only a few measurable outcomes (such as the tumor sizes of a few patients) compared to a huge number of tunable variables/uncertainties. How can one develop an accurate model for such an ill-conditioned problem?
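A minimal sketch of the kind of ill-conditioned calibration Yaling Liu describes, assuming a hypothetical Gompertz-style tumor-growth model with more tunable parameters than observations; the model form, parameter names, and data values are illustrative only, not drawn from any real study. Refitting from many random starting points shows that many distinct parameter sets reproduce the few measurements comparably well, which is what motivates regularization, priors, or ensemble approaches.

```python
# Illustrative sketch: more tunable parameters than measurements.
# All values and parameter names are hypothetical assumptions.
import numpy as np
from scipy.optimize import least_squares

def tumor_size(t, p):
    # Gompertz-like growth with an onset delay: 4 parameters in total.
    v0, k, carrying_cap, delay = p
    return carrying_cap * np.exp(
        np.log(v0 / carrying_cap) * np.exp(-k * np.maximum(t - delay, 0.0)))

# Only three measured tumor sizes (hypothetical values, mm^3): far fewer
# observations than parameters, so the inverse problem is ill-conditioned.
t_obs = np.array([0.0, 30.0, 60.0])
y_obs = np.array([120.0, 410.0, 900.0])

def residuals(p):
    return tumor_size(t_obs, p) - y_obs

# Fit from many random starting points; many distinct parameter sets fit
# the data comparably well, i.e. the parameters are not identifiable.
rng = np.random.default_rng(0)
fits = []
for _ in range(20):
    p0 = rng.uniform([50, 0.001, 1000, 0], [200, 0.1, 5000, 20])
    sol = least_squares(residuals, p0,
                        bounds=([1, 1e-4, 500, 0], [500, 1.0, 10000, 50]))
    fits.append((sol.cost, sol.x))

# The best few fits have nearly identical cost but very different parameters.
for cost, p in sorted(fits, key=lambda f: f[0])[:5]:
    print(f"cost={cost:.3e}  params={np.round(p, 3)}")
```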
Comment from Gary An: With respect to uncertainty of model structure: I think there is generally a premature push to discard models with the idea of converging on a "true" model. Given the degree of uncertainty with respect to model structure, I think we need to consider approaches that facilitate the propagation of large ensembles of models during the validation/evaluation phase; we need better methods for this. I completely agree with the need for non-parametric means of evaluating model/data output, and I also completely agree regarding knowledge-based AI/ML.
Comment from Herbert Sauro: I second the comment by Gary. In fact, I would go further and question the very way we do biomedical modeling. We are too concerned with finding 'the' model when, given the limited and noisy data we have, there are likely to be many models that are consistent with the data. I am a big believer in using ensembles of models that attempt to capture all the uncertainty in our knowledge. The ensemble will give a range of predictions, some more probable than others depending on the makeup of the ensemble. The key question is how to improve the predictability of the ensemble.
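A minimal sketch of the ensemble idea raised in the two comments above, assuming three hypothetical rival model structures, toy noisy observations, and AIC-style weights; none of this is a prescribed methodology, only an illustration of reporting a weighted spread of predictions rather than committing to a single "true" model.

```python
# Illustrative sketch: keep several candidate model structures, weight each
# by its consistency with the data, and report a weighted spread of
# predictions. Models, data, and weighting scheme are assumptions.
import numpy as np
from scipy.optimize import curve_fit

# Three rival model structures for the same hypothetical observations.
def linear(t, a, b):
    return a * t + b

def exponential(t, a, b):
    return a * np.exp(b * t)

def saturating(t, a, b):
    return a * t / (b + t)

models = {"linear": linear, "exponential": exponential, "saturating": saturating}
p0s = {"linear": [1.0, 1.0], "exponential": [2.0, 0.1], "saturating": [20.0, 10.0]}

# Hypothetical noisy observations.
t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y_obs = np.array([2.1, 3.8, 6.5, 9.7, 12.1])

# Fit each structure and compute an AIC-style weight from its residual error,
# so no single model structure is discarded outright.
aic, fitted = {}, {}
for name, f in models.items():
    popt, _ = curve_fit(f, t_obs, y_obs, p0=p0s[name], maxfev=10000)
    rss = np.sum((f(t_obs, *popt) - y_obs) ** 2)
    n, k = len(t_obs), len(popt)
    aic[name] = n * np.log(rss / n) + 2 * k
    fitted[name] = popt

best = min(aic.values())
weights = {m: np.exp(-0.5 * (aic[m] - best)) for m in models}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}

# Ensemble prediction at a new time point: a weighted range, not a point value.
t_new = 32.0
preds = {m: float(models[m](t_new, *fitted[m])) for m in models}
mean_pred = sum(weights[m] * preds[m] for m in models)
print("model weights:", {m: round(float(w), 3) for m, w in weights.items()})
print("individual predictions:", {m: round(p, 2) for m, p in preds.items()})
print("weighted ensemble prediction:", round(mean_pred, 2))
```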
What level of fidelity does a patient-specific model need in order to be useful for a virtual clinical trial? Or is this simply a tautology?