Questions: From Jacob Barhak: What about ensemble models as a way of fusing data? Have you identified trends in that direction that were successful in weather forecasting?
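(Editor's illustration, hedged: the ensemble idea referenced here can be sketched with a toy chaotic model. The logistic-map "simulator" and all parameters below are hypothetical stand-ins for a real forecast model, not anything presented in the session; the point is only that running many perturbed members and reporting mean and spread quantifies forecast uncertainty.)

```python
import numpy as np

rng = np.random.default_rng(2)

def forecast(x0, steps=30, r=3.7):
    """Toy chaotic 'weather' model (logistic map); stands in for any simulator."""
    x = x0
    traj = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return np.array(traj)

# Ensemble forecast: perturb the uncertain initial condition, run many members,
# and report the ensemble mean and spread instead of a single deterministic run.
members = np.array([forecast(0.3 + rng.normal(0, 0.01)) for _ in range(100)])
mean = members.mean(axis=0)
spread = members.std(axis=0)
print(f"step 5:  mean={mean[4]:.3f}  spread={spread[4]:.3f}")
print(f"step 30: mean={mean[29]:.3f}  spread={spread[29]:.3f}")
# The spread grows with lead time, which is how operational weather-prediction
# ensembles communicate the limits of a forecast.
```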
Comment from Gary An: Jeff's approach to how to parameterize ABMs is exactly on point: the data that constrains the "higher order" cellular behavior is by definition phenomenological. The lack of a direct mechanistic mapping, however, is exactly what the models are for, and since there are often no directly quantifiable parameters, it is important to think in terms of plausible parameter spaces (drawing on a wide range of data sources). Also, in addition to Jacob's weather analogy, the workflow we should strive for is the one seen in physics, where theoretical models drive which experiments are done to exclude candidates and reduce the plausible set. Also, I completely agree with James G's current comment about the epistemic gap between knowledge of subcellular state and aggregate cellular behavior, which again reinforces the point of using the models as a means of this translation.
Questions: From Yaling Liu: Are there advances in using large amounts of data to predict mechanisms, i.e., figuring out the governing equations behind the data, such as recovering the Navier-Stokes equations from flow patterns?
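(Editor's illustration, hedged: one published line of work in this direction is sparse regression over a library of candidate terms, as in SINDy (Brunton, Proctor, and Kutz, 2016). The sketch below is a minimal toy version; the one-dimensional system, the 0.05 threshold, and the candidate library are all hypothetical choices for illustration.)

```python
import numpy as np

# Simulate a known 1-D system dx/dt = x - 0.5*x^2 (hypothetical ground truth),
# then try to rediscover its governing equation from the trajectory alone.
dt = 0.001
t = np.arange(0, 5, dt)
x = np.empty_like(t)
x[0] = 0.5
for i in range(len(t) - 1):
    x[i + 1] = x[i] + dt * (x[i] - 0.5 * x[i] ** 2)

# Numerically differentiate the "measured" trajectory.
dxdt = np.gradient(x, dt)

# Candidate library of terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# Sequential thresholded least squares: fit, zero out small coefficients, refit.
xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

print("recovered coefficients for [1, x, x^2, x^3]:", np.round(xi, 3))
# Expect roughly [0, 1, -0.5, 0]: the sparse fit recovers the governing terms.
```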
Question from Saleet Jafri: There has been large growth in agent-based models that can reproduce the phenomena they are meant to simulate. At the data-poor scales, they seem to model behavior. How do agent-based models give mechanistic insights?
Summary of Group Discussion and Responses from Jimmy Moore:
There are sometimes issues sharing data, but the models can be shared, e.g., weather. Have we seen trends?> Yes – genome-scale predictions are a good model, but this is not done at the level of weather mapping.
Using models to design experiments is relatively straightforward for continuum models but not so for ABMs. Is there a standard process?> (See comment from Gary An above.) We are not aware of approaches that have been widely accepted as standard, but in general the idea is to use data to constrain the behavior of the agents. For cellular modeling, we often have data available on individual agents, but not necessarily for agent interactions.
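(Editor's illustration, hedged: one generic way to "use data to constrain agent behavior" is likelihood-free calibration such as rejection-based approximate Bayesian computation. The toy ABM, the prior range, and the 10% tolerance below are all hypothetical; the pattern is simply simulate, compare a summary statistic to data, keep the parameters that match.)

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_abm(division_prob, n_cells=50, n_steps=20):
    """Toy ABM: each cell divides with probability `division_prob` per step.
    Returns the final population size as a summary statistic."""
    cells = n_cells
    for _ in range(n_steps):
        cells += rng.binomial(cells, division_prob)
    return cells

# Pretend this is the observed data (e.g., a measured final cell count).
observed = simulate_abm(0.05)

# Rejection ABC: sample parameters from a plausible prior range and keep
# those whose simulated summary lands close to the observation.
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 0.2)             # prior over division probability
    if abs(simulate_abm(theta) - observed) / observed < 0.1:  # tolerance
        accepted.append(theta)

accepted = np.array(accepted)
print(f"constrained division_prob: mean ~= {accepted.mean():.3f}, "
      f"95% interval ~= ({np.quantile(accepted, 0.025):.3f}, "
      f"{np.quantile(accepted, 0.975):.3f})")
# The accepted set is exactly the "plausible parameter space" discussed above.
```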
Parameters in continuum-based models represent physical quantities, which helps with identifiability.
We are just now becoming aware of the ability to model mammalian cells (this refers to the network-modeling example under Exciting Developments presented in the slides), but there is still a disconnect with experimentalists. Modelers should not be over-enthusiastic.
How do we “make experimentalists measure the stuff the model needs?”> Success stories help show them the value of that.> Example from genome modeling: simulating >1000 reactions, none of which can be measured.> Boolean models are coming back in epigenetics (a minimal sketch follows below).> Gene editing now opens up new experiments.
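(Editor's illustration, hedged: a synchronous Boolean network is the kind of coarse, data-light model meant above. Genes are ON/OFF and each update rule is a Boolean function of the current state; the three-gene circuit here is entirely hypothetical.)

```python
# Minimal synchronous Boolean network: three hypothetical genes A, B, C.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,        # C represses A
        "B": a,            # A activates B
        "C": a and b,      # A and B jointly activate C
    }

# Iterate from an initial condition until the trajectory revisits a state,
# which identifies an attractor (a candidate stable phenotype).
state = {"A": True, "B": False, "C": False}
seen = []
while state not in seen:
    seen.append(state)
    state = step(state)
print("attractor entered at state:", state)
# Attractors of such networks are often compared against observed cell states,
# which is what makes this coarse formalism useful when rates are unmeasurable.
```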
There is still variability in cell behavior with identical inputs, so validation is challenging. We just don't have the biological knowledge to make predictive models.
(This refers to Marcus Buehler's question on the Wiki.) The relevant relationships are not always there, so it is hard to disprove hypotheses. This works best at the metabolomic level.
The rate of data generation is very different for models: it is higher than for experiments.
How much do we know about what we need to know? Modelers need to be constrained by experimentalists, e.g., in the differentiation of stem cells; models can be misleading. > This circles back to the proposals made on the final slide from Challenge Groups 3&4: think about mechanisms to encourage collaboration, so that these conversations motivate both the modeling and the experiments.
What are recent advances in tagging data to create a hybrid of data-driven and mechanistic approaches?