MSM 2014 Satellite Session

Return to 2014 Multiscale Modeling Consortium Meeting

 

Improving Infrastructure for Virtual Tissues Modeling: Sharing Models Through Publications, Open-Source Modeling Platforms, and an Index of Reproducible Multi-Cell, Multi-Scale Models

One-Day Mini-Workshop, September 5, 2014

Organizers:

Herbert Sauro (UW), Maciej Swat (IU), Jim Sluka (IU), James Glazier (IU)

Purpose:

To explore the challenges of reproducibility and reuse of multiscale/multicellular models in biomedical research.

Expected number of Attendees: 25-40

Abstract:

Virtual Tissues denotes computational infrastructure for constructing detailed spatio-temporal, multi-cell, multi-scale simulations of development, homeostasis, therapeutic interventions, and diseases of tissues and organs. A long-term goal for the biomedical modeling community is the development of easy-to-use, open-source biological simulation environments that conform to standards, support interoperability, model sharing, and model validation, and run on both single processors and parallel computers. During this mini-workshop we will review how much progress the community has made during the last 5-10 years, identify the main obstacles, and discuss the challenges facing modelers who want to build predictive models of tissues and organs. We will also discuss the development and use of modeling standards, ontologies, and model repositories. The group will draft a white paper outlining development tasks that we, as a community, should undertake to move forward. The workshop will consist of short talks that illustrate challenges facing the modeling community, followed by round-table discussions.


Agenda:

Morning Session: 9:00 am – 12:00 pm


9:00 - 9:10 Introductory remarks


Topic: Challenges in Multi-Scale/Multi-Cell Modeling - Standards, Ontologies, Model Sharing, Reuse, Reproducibility, Repositories, Journals, etc.

9:10 - 9:25 James Glazier

9:25 - 9:40 Tony Hunt

9:40 - 10:20 - Discussion

10:20 - 10:30 Coffee Break


10:30 - 10:45 - Melissa Knothe Tate

10:45 - 11:00 - Ronan Fleming

11:00 - 12:00 - Discussion


12:00 – 1:00 pm: LUNCH


Topic: Credible Practices in Biomedical Modeling

1:00 - 1:15 - Gary An

1:15 - 1:30 - Lealem Mulugeta

1:30 - 2:15 - Discussion

 

2:15 - 2:45 - Phil Bourne

2:45 - 3:00 - Discussion

 

Talk Abstracts

The NIH as a Digital Enterprise

Author: Philip Bourne, NIH

By the end of this decade, healthcare will be a predominantly digital enterprise and, for the first time (perhaps surprising to say), patient-centric. Such a shift from analog research and diagnosis and a provider-centric healthcare system to an open digital system is a major change, bringing both opportunities and challenges. I will describe some of these challenges and opportunities.



TBD

Author: James Glazier, Biocomplexity Institute, Indiana University

TBD


Achieving increasingly credible mechanistic explanations of expanding sets of tissue and organ level phenomena

Authors: C. Anthony Hunt and Glen E.P. Ropella, UCSF

Vision context: we seek models of tissues and organs that are used in the teal region of the figure as indispensable adjuncts facilitating future healthcare decisions at all levels. By then, we expect that it will be a best practice to undertake many virtual experiments before proposing a costly in vitro, animal, or clinical experiment. To increase the pace of getting there, we can begin identifying requirements of those virtual experiments and chart multiple M&S paths to achieve key milestones, while acknowledging that mechanistic explanations of phenomena within the teal area will continue to be shrouded in scale-spanning uncertainties for the foreseeable future.

Claims: explanation and prediction will be separate M&S activities, and an improved mechanistic explanation of phenomena of interest should precede any prediction in order to improve the scientific value, credibility, and usefulness of the predictive models. To enable more credible predictions, we will need 1) mechanistic explanations of expanding sets of phenomena for which 2) explanatory credibility increases steadily. Simulation models can provide both. To achieve increasingly credible mechanistic explanations, we need in silico means to 1) distinguish false from plausible generative mechanisms, 2) identify equally plausible generative mechanisms (i.e., they achieve comparable validation targets), and then 3) design experiments intended to falsify each. Such falsification will generate the new knowledge needed to (in the next cycle) increase the credibility of mechanistic explanations.

Hazards: simulation platforms that are too confining; premature standards; premature focus on prediction; and shunning falsification (which can lead to overly complicated, over-fitted models). The best standard moving forward is documented adherence to good scientific and engineering practices.



Multiscale, Coupled Computational & Experimental Models of Mechanically-Modulated Tissue Genesis by Stem Cells

Author: Melissa L. Knothe Tate, University of New South Wales, Sydney, Australia

De novo tissue generation, as well as tissue regeneration and maintenance, is intrinsically linked to the mechanical environment. However, the cellular and subcellular mechanisms of mechanically modulated tissue (re-)generation are not fully understood. Recent studies with embryonic mesenchymal and adult, periosteum-derived stem cells demonstrate their mechanosensitivity in vitro and in vivo. This talk integrates recent top-down and bottom-up approaches to elucidate mechanically modulated tissue genesis by stem cells in multiscale, coupled computational and experimental models.


A globally convergent algorithm to compute steady-state concentrations at genome scale

Author: Ronan Fleming, University of Luxembourg


At the core of computational systems biology lies a paradox. All currently available genome-scale modeling methods can model only chemical reaction rates, not the concentrations of the molecules involved in these reactions. At the same time, the vast majority of experimental omics data measure the abundance of metabolites rather than reaction rates. The reason for this paradox is that modeling steady-state reaction kinetics has been limited to small systems of chemical reactions, because the inherently non-linear systems of equations at the core of such models are difficult to solve. Even if one numerically integrates the corresponding kinetic equations, the absence of a long-sought Lyapunov function prevents one from concluding that the system will converge to a non-equilibrium steady state for the most general class of biochemical reaction networks. Here we present the first globally convergent algorithm to simultaneously compute stable non-equilibrium steady-state molecular concentrations and reaction rates for genome-scale biochemical reaction networks. We conclude with a discussion of the most important open problems in the development of algorithms for multi-scale kinetic modeling.
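
The underlying problem can be stated compactly: given a stoichiometric matrix S and kinetic rate laws v(c), find positive concentrations c* with S v(c*) = 0. The sketch below is not the globally convergent algorithm described in the abstract; it merely illustrates the problem on a hypothetical three-species mass-action cycle, using a generic Newton-type root finder (SciPy's fsolve) together with a mass-conservation constraint. At genome scale such local solvers routinely stall or diverge, which is exactly the gap the talk addresses.

# Minimal, illustrative sketch (not the algorithm presented in this talk):
# solve S v(c) = 0 for a hypothetical three-species mass-action cycle
# A -> B -> C -> A, with a conservation constraint to make the root isolated.
import numpy as np
from scipy.optimize import fsolve

S = np.array([[-1.0,  0.0,  1.0],   # species A
              [ 1.0, -1.0,  0.0],   # species B
              [ 0.0,  1.0, -1.0]])  # species C
k = np.array([1.0, 2.0, 0.5])       # assumed rate constants
C_total = 3.0                       # assumed conserved total concentration

def v(c):
    # Mass-action rate of each unimolecular reaction: v_j = k_j * [substrate_j];
    # the substrates of reactions 1-3 are A, B, C in column order.
    return k * c

def residual(c):
    dcdt = S @ v(c)
    # The rows of S are linearly dependent (total mass is conserved), so replace
    # one balance equation with the conservation relation sum(c) = C_total.
    return np.array([dcdt[0], dcdt[1], c.sum() - C_total])

c_star = fsolve(residual, x0=np.ones(3))   # local Newton-type solve
print("steady-state concentrations:", c_star)
print("max |S v(c*)|:", np.abs(S @ v(c_star)).max())

For this toy cycle the answer can be checked by hand: at steady state every reaction carries the same flux J, so c_A = J/k_1, c_B = J/k_2, c_C = J/k_3, with J fixed by the conserved total.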


The Mapping Problem: How do experimental biological models relate to each other, and how can dynamic computational models be used to link them?

Authors: Gary An, MD, Department of Surgery, University of Chicago

There is a saying in military history that all critical battles are fought at the intersection between two poorly made maps. Such is the case with the translation of knowledge in biomedical research. The limitations of the traditional reliance on experimental biological proxy models are manifest in the Translational Dilemma: the inability to effectively and efficiently pass mechanistic knowledge up the chain of experimental platforms (in vitro to in vivo to human) to produce effective therapeutics. The recent public controversy over the use of murine models as proxies for human disease only emphasizes how poorly characterized, and how lacking in formal rigor, the process of mapping across biological proxy models is. I assert that appropriately used dynamic computational models, and the different class of mappings they provide, are absolutely critical to addressing the Translational Dilemma, as well as to providing a pathway for rational Personalized Medicine. Notably, this specifically excludes computational modeling tasks intended to build high-fidelity models of biological proxy models, and rather focuses on using abstract and “incomplete” computational models to formally define which aspects of mechanistic knowledge can be translated from context to context.


Common Practice Guidelines: A Significant Gap in Computational Modeling and Simulation in Healthcare

Authors: Lealem Mulugeta, Universities Space Research Association, NASA Digital Astronaut Project; Ahmet Erdemir, Cleveland Clinic

Presented on Behalf of the Committee on Credible Practice of Modeling & Simulation in Healthcare

A major challenge in translating computational models to the clinic is the lack of guidelines emphasizing cross-disciplinary commonalities in developing and using simulation-based approaches in a credible manner. The healthcare enterprise, and within it the scientific and clinical applications of modeling and simulation, is diverse; this diversity may lead to misunderstandings between experts and novices, among clinicians, scientists and engineers, and across policy makers. If not addressed, this situation can result in misuse and distrust of computational models and simulation tools among medical practitioners, ultimately leading to their under-utilization across all aspects of medicine. To help fill this gap, the “Committee on Credible Practice of Modeling & Simulation in Healthcare” (hereafter the Committee) was established under the Interagency Modeling and Analysis Group (IMAG) and the Multiscale Modeling (MSM) Consortium [1]. The Committee’s goal is to establish guidelines, as well as identify new areas of research, for the development and implementation of credible computational models and simulations for healthcare research and intervention. In pursuit of this goal, the Committee has been focusing on three main activities [2]:

Ten Simple Rules of Credible Practice: A significant effort of the Committee has been the development of “Ten Simple Rules of Credible Practice” by assembling insight from stakeholders with various disciplines and roles in the modeling and simulation enterprise. To initiate this effort, the Committee first conducted an internal study in which Committee members held staged negotiations to identify the most important rules from a set of more than twenty candidates. From these discussions, a “Committee Perspective” was formed regarding the simple rules of credible practice. More importantly, this initial activity confirmed the highly multifaceted nature of computational modeling in healthcare. As the group recognized, the application of interest (or context of use) can sometimes lead to incompatible perspectives regarding the simple rules of credible practice and to different intensities of execution of the rules. Nonetheless, many emerging themes for establishing credibility were noted, including aspects related to documentation; version control; verification, validation, and uncertainty quantification; founding data; model sharing; development and use procedures; reproducibility; and review. These results motivated the Committee to survey the global stakeholder community to evaluate the “Community Perspective” and thereby develop well-balanced guidelines across the range of disciplinary and application interests. In this light, the Committee has developed and will launch a public survey as a means to identify a more inclusive “Ten Simple Rules of Credible Practice”. These rules will then be used as a foundation to develop “Guidelines for Credible Practice of Modeling and Simulation in Healthcare”. It should be noted that the term “rules” is not used here in a strict sense; it refers more to commonly accepted best practice.

A Common Language Across Disciplines: A glossary of terms is currently being generated on the Committee’s website in an attempt to unify the use of M&S vocabulary and ensure clear communication across the variety of disciplines and stakeholders in the field.

Public Engagement: The success of the Committee and the guidelines it publishes depends on adoption by all stakeholders – researchers developing the tools, as well as those in the medical community. Consequently, we seek to engage and encourage the global stakeholder community, such as the MSM community and affiliated organizations, to actively contribute to these efforts to ensure that the guidelines established capture the primary interests of the computational medicine community. To facilitate contribution, the Committee maintains a website that contains a public forum and a Wiki, allowing Committee affiliates and the general public to contribute to the Committee’s work. The site will also provide resources and can be used as an educational source for M&S in Healthcare.
