Session Lead: George Karniadakis
IMAG Moderators: Fariba Fahroo (AFOSR), Tony Kirilusha (NIAMS)
Breakout Session Notes:
- Introductions (name and interest):
- Session 1:
- Session 2:
- Build on the current state-of-the-art PDE models
- Leverage the sources of certainty (physics) to mitigate the sources of uncertainty (data) – see the sketch after these items
- Use ensembles of PDEs
- Node2Vec – allows finding the effective dimensionality of a system by letting dimensions with very high uncertainty fall out.
- How far up can uncertainty propagation be carried through the “towers” that comprise multiscale models?
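A minimal sketch of the “physics as a source of certainty” idea above, written in PyTorch. The network, the assumed diffusion equation u_t = k·u_xx, the coefficient k, and the stand-in data are all hypothetical placeholders; the point is only that the training loss combines a data-misfit term (the uncertain source) with a PDE-residual term (the certain source).

```python
import torch

# Hypothetical surrogate u(t, x): a small fully connected network.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(tx, k=0.1):
    # Residual of an assumed diffusion equation u_t - k * u_xx = 0.
    tx = tx.clone().requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x), create_graph=True)[0][:, 1:]
    return u_t - k * u_xx

# Stand-in data: sparse noisy measurements plus dense collocation points.
tx_data, u_data = torch.rand(50, 2), torch.rand(50, 1)
tx_coll = torch.rand(1000, 2)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss_data = torch.mean((net(tx_data) - u_data) ** 2)   # data misfit (uncertain source)
    loss_phys = torch.mean(pde_residual(tx_coll) ** 2)      # physics residual (certain source)
    (loss_data + loss_phys).backward()
    opt.step()
```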
- Build on the current state-of-the-art models for Digital Twins
- Models to study human movement go into the hands of physicians, who just want them to work
- Train the model to be adaptable to a specific patient (not a population average) – this is a very difficult problem. One way to pose it: assume the parameters are a sample from a characterized population, then fit measurements from a new patient onto the existing parameters (which measurements will give you the maximum amount of information?). See the sketch after these digital-twin items.
- Multiple layers of anatomy in the digital twin – what are the opportunities for “persistent training” (continuous learning over time)?
- Do you need to simulate data for “rare” phenotypes (rare diseases vs. common ones)? Does this address the question of cohort heterogeneity in cases of “common” disease?
- Data access and data privacy – it is harder to get human data than non-human data.
- Can you explain why some drugs work in animals but fail in humans (or the other way around)?
- How does instantaneous injury develop into a chronic injury?
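As a toy illustration of how the patient-specific fitting problem above could be posed (the linear observation model H, the population statistics, and the noise level are all assumptions made up for this sketch): treat the patient's parameters as a draw from a characterized population prior and update that prior with the patient's own measurements. The remaining posterior variance also hints at which measurement would add the most information next.

```python
import numpy as np

# Assumed population statistics for the model parameters (characterized cohort).
mu0 = np.array([1.0, 0.5, 2.0])                  # population mean
Sigma0 = np.diag([0.2, 0.1, 0.5])                # population covariance
sigma_meas = 0.05                                # assumed measurement noise std

# Hypothetical linear observation model: each row maps parameters to one measurement.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.2, 2.4])                         # measurements from the new patient

# Conjugate Gaussian update: posterior over this patient's parameters.
R = sigma_meas**2 * np.eye(len(y))
K = Sigma0 @ H.T @ np.linalg.inv(H @ Sigma0 @ H.T + R)   # Kalman-style gain
mu_post = mu0 + K @ (y - H @ mu0)
Sigma_post = (np.eye(3) - K @ H) @ Sigma0

# Posterior variance per parameter shows where more data would help most.
print("patient-specific estimate:", mu_post)
print("remaining uncertainty:", np.diag(Sigma_post))
```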
- ML-MSM integration opportunities
- Challenges ML-MSM modelers should address
- Where does your PDE come from? If a PDE represents folding tissue, how do you pick the right one? Do you trust the PDE?
- Uncertainty quantification is a rich field, but there is a lack of work that addresses confidence in machine-learning (ML) predictions. Should these ideas from the PDE world be transplanted to ML?
- Reproducible research computing - disseminating scripts and code, standardized language to describe models.
- Model uncertainty in the context of ML depends on the training data (and on the network itself). What uncertainties are there? (Parametric uncertainty?)
- In addition to the model, should the measuring process also be modeled? Data come from different modalities – what is the hierarchy of data, i.e., which ones do you trust more?
- Should you model the data collection process? Is that feasible?
- Brittleness of models is an ongoing issue. Neural networks do better if exposed to noise during training.
- The idea is to make predictions outside the scope of the original data. At the end of the day, does it work?
- Integrating prior knowledge of physics into the model to improve performance.
- Different architectures of neural networks are more appropriate to certain types of data than to others.
- A potential barrier to standardized adversarial training is speed – adversarial training is very computationally expensive (see the training sketch at the end of these notes).
- Would an ensemble of PDEs work well? Model merging / convergence.
- Is it possible to combine human and animal data to improve the overall performance of a model for human predictions? For critical tasks, models should come back with a confidence parameter (see the ensemble sketch below).
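A rough sketch of why adversarial training is expensive, using the FGSM variant as an example (the classifier, the stand-in data, and the perturbation budget are placeholders): every batch needs an extra gradient computation with respect to the inputs just to craft the adversarial examples, roughly doubling the cost per step.

```python
import torch

# Hypothetical classifier and stand-in data for illustration only.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
loss_fn = torch.nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
X = torch.randn(256, 10)
labels = torch.randint(0, 2, (256,))

eps = 0.1  # assumed perturbation budget
for epoch in range(10):
    # Extra forward/backward pass per batch just to craft the FGSM perturbation:
    # this input-gradient computation is where the added cost comes from.
    X_adv = X.clone().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(X_adv), labels), X_adv)[0]
    X_perturbed = (X + eps * grad.sign()).detach()

    # Then the usual parameter update, on clean plus perturbed examples.
    opt.zero_grad()
    total = loss_fn(model(X), labels) + loss_fn(model(X_perturbed), labels)
    total.backward()
    opt.step()
```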
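And a minimal sketch of the “confidence parameter” idea for critical tasks, using a bootstrap ensemble (the synthetic regression data and the small scikit-learn models are placeholders): train several models on resampled data and report the spread of their predictions alongside the mean.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))            # stand-in features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# Ensemble: several models, each trained on a bootstrap resample of the data.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), len(X))
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    m.fit(X[idx], y[idx])
    ensemble.append(m)

# Report both the mean prediction and the ensemble spread as a confidence measure.
X_new = rng.uniform(-1, 1, size=(5, 3))
preds = np.stack([m.predict(X_new) for m in ensemble])
mean, spread = preds.mean(axis=0), preds.std(axis=0)
for p, s in zip(mean, spread):
    print(f"prediction {p:+.2f}  confidence interval ±{2 * s:.2f}")
```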