Challenge #10: Predictive multiscale models that strongly incorporate Uncertainty Quantification
Uncertainty Quantification – What has been accomplished?
- Parameter uncertainty is addressable.
- What exactly are the methodologies that were developed?
- CVODES, DAKOTA, PSUADE, etc. (See https://www.pnnl.gov/main/publications/external/technical_reports/PNNL-20914.pdf). A minimal sketch of the forward uncertainty propagation these tools automate appears after this list.
- Application-specific uncertainty analyses.
- Knowledgebases for reaction networks that describe network structure in standardized formats (Pathway Tools, KBase, COBRA/BiGG, the EBI BioModels database).
- UQ in perspective - Role of Models:
- First, as a way to capture and communicate knowledge (Standardization important)
- Second, as a tool for understanding processes, building intuition, and education. (Standardization and UQ important)
- Third, to predict experimental observables. (UQ important)
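To make the parameter-uncertainty item above concrete, below is a minimal Python/SciPy sketch of forward uncertainty propagation: sample the uncertain parameter, run the model for each draw, and summarize the spread of a predicted observable. The toy decay model, the lognormal parameter distribution, the sample size, and the readout time are hypothetical illustrations only; packages such as DAKOTA and PSUADE automate this kind of workflow (plus sensitivity analysis and smarter sampling) at much larger scale, and this is not their API.

```python
"""Minimal sketch of forward parameter-uncertainty propagation (Monte Carlo).

Illustrative only: the toy model, parameter distribution, and readout time are
hypothetical and not taken from any specific MSM project or UQ tool.
"""
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)


def decay_model(t, y, k):
    """Toy one-parameter kinetic model: dy/dt = -k * y."""
    return -k * y


# Parameter uncertainty: rate constant k known only to within ~20% (lognormal).
n_samples = 1000
k_samples = rng.lognormal(mean=np.log(0.5), sigma=0.2, size=n_samples)

# Propagate each parameter draw through the model and record the observable
# of interest (here, the state at t = 5).
readouts = np.empty(n_samples)
for i, k in enumerate(k_samples):
    sol = solve_ivp(decay_model, (0.0, 5.0), y0=[1.0], args=(k,), t_eval=[5.0])
    readouts[i] = sol.y[0, -1]

# Summarize the prediction uncertainty induced by the parameter uncertainty.
lo, med, hi = np.percentile(readouts, [5, 50, 95])
print(f"y(5): median {med:.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")
```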
How have UQ methodologies impacted each field?
- Now that quantifying parameter uncertainty is getting easier, the field is beginning to think about model structural uncertainty, intrinsic stochasticity, and possibly incomplete or inaccurate data sets.
- Clinical Impact?
- Anecdotal cases in which model/experiment comparison forced a reevaluation of the data analysis?
- How do we use qualitative or semi-quantitative data in UQ?
- Have new theories resulted from this work to improve the understanding of the problems in the field?
- Emerging
- Physics-Informed Deep Learning, Karniadakis et al.
- Inferring solutions of differential equations using noisy multi-fidelity data, Karniadakis et al.
- Distilling the logic of behavioral dynamics using automated inference, Nemenman et al.
UQ: What still needs to be done?
- Are there methods from other fields that should be applied to your field?
- Uncertain or semi-quantitative data: greater use and recognition of non-parametric methods in uncertainty quantification, particularly when comparing noisy (uncertain) data to models and simulations.
- Stochastic data: distribution tests for comparing data from stochastic experimental processes to simulations, e.g., the Kolmogorov-Smirnov statistic (a minimal sketch appears after this list).
- Neither stochastic nor uncertain data is well supported by current UQ and optimization tools, and we do not have a formal way to describe them or capture this kind of observation.
- Incompleteness/Uncertainty in models: Combined forward and inverse modeling
- General methods to compare models with different structures (incorporating different sets of mechanisms).
- Methods to compare experiment and simulation when the data have structure that changes over space and time rather than taking the form of a concentration/count time series (e.g., the generation of metastases in an animal, or SPIM images of a developing zebrafish embryo). How do we determine goodness of fit against an experimental movie (especially if both movie and simulation are stochastic)? Are there generic approaches that can help with this?
- What further connections need to be made to address unmet needs?
- What questions do you want to pose to the MSM Consortium related to these challenges?
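Related to the stochastic-data bullet above, here is a minimal Python/SciPy sketch of a distribution-level comparison between stochastic simulation output and noisy experimental measurements using the two-sample Kolmogorov-Smirnov test. The variable names, count distributions, and sample sizes are synthetic placeholders chosen for illustration, not real data or a recommended pipeline; the sketch only shows the kind of non-parametric test that bullet refers to.

```python
"""Minimal sketch: compare stochastic simulation output to noisy experimental
measurements at the distribution level with a two-sample Kolmogorov-Smirnov
test. All numbers below are synthetic placeholders, not real data."""
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical observable: per-cell molecule counts at a fixed time point,
# one value per experimental replicate or per stochastic simulation run.
experimental_counts = rng.poisson(lam=20, size=200)                 # stand-in for data
simulated_counts = rng.negative_binomial(n=25, p=0.55, size=500)    # stand-in for SSA output

# KS statistic = maximum distance between the two empirical CDFs.
result = ks_2samp(experimental_counts, simulated_counts)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3g}")

# Caveat: the classical KS test assumes continuous distributions, so for
# integer count data the p-value is only approximate (typically conservative);
# bootstrap or rank-based alternatives may be preferable for such data.
```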