Back to 2024 Agenda
Logistics:
The full group returns to the main room to debrief the overall Pitch session process.
Reminder that the pitch assessment can be found at this link.
The Final Stretch:
A major goal of this meeting was to help all participants understand what the NASEM definition of a digital twin is and what it would take to develop a Biomedical Digital Twin. We have gone through didactics, examples, practice, and a teaming exercise to help move the theory into something concrete. We have had both novices and those with deep experience in the room so that everyone could more fully understand the NASEM DT definition and how to apply it to Biomedicine.
Another goal of this event was to catalyze collaboration. The design of this meeting focused heavily on "productive collision," both physical and intellectual. The organizers wanted to help you meet others with interest and passion in this topic, which is why you did not sit in one place for three days. We hope you have identified and connected with people who can serve as collaborators, mentors, advisors, and/or new friends.
If you are reading this, you are in the final stretch of this meeting - and our hope is you have learned a lot.
To help us finish out, here are the questions to think about as we debrief.
Session Questions:
1. What was most helpful in preparing you to work collaboratively on a proposed BDT project idea?
2. Where was there confusion, major sticking points, or challenges?
3. What ideas do you have for how improved support could be provided to researchers entering this research space?
4. What contributed most to being able to articulate your BDT idea during the Pitch session?
5. How easy or hard was it to assess a proposed BDT idea? What aspects were easy? Which were hard?
6. What is a major takeaway for you from this conference?
- emerging properties that are unique to BIOMEDICAL digital twin development
- emerging challenges for developing BDT
- emerging considerations not addressed in the NASEM definition
BDT Pitch Session Results
Idea Team 1:
Idea Team 2:
Idea Team 3:
Idea Team 4:
Idea Team 6:
Idea Team 7:
Idea Team 8:
Idea Team 9:
Idea Team 10:
Idea Team 11:
Idea Team 12:
Notes:
1) online assessment had different scores
2) nice to interact with people on the team; would like more time
3) dedicated time for poster presentations
4) helpful to go through BDT guidance
5) breakout groups - topics were too general; perhaps break them down into specific models and drill down deeper into the different types of models
6) exposure at a general level was good for covering the overall topics
7) beneficial to combine presenters with their definitions leading into the workshop
8) biomedical domain compared to other domains - patients, clinicians, ethical and privacy concerns very different from other domains, developing personalized trustworthiness very challenging, human-in-the-loop very different perspective
9) need people from different domains to join the same meeting, but it is difficult to bring in outside domain expertise; invite them as keynote lecturers and require them to stay, or form joint meetings at non-biomedical domain conferences
10) Creating a cross-functional WG in the Digital Twin Consortium
11) Fit-for-Purpose (FFP) for BDT is very challenging, and the variance of FFP is broad; would like a taxonomy/table/classification of problems; need to group problems to establish common use envelopes
12) People can now describe a BDT with respect to the NASEM definition of a DT, but implementation is hard; we will need to see if this works in the real world
13) Real clinicians already carry a mental model of each patient; a BDT could give them a platform to use that model systematically
14) Categorizing the types of actions you would like to take would be useful - what are the clinically actionable BDT solutions?
15) Need to cross-couple programs
16) Need new math to address VVUQ challenges, especially for biomedical applications of BDT; in addition, the rigor of the math across the entire loop must be stressed for modeling complex systems; mathematics can help identify which critical factors to include; mathematics as a tool - existing math can be used in new ways
17) Deployment - Biomedical work so far has focused on clinical applications; must also consider BIOLOGICAL digital twins and BEHAVIORAL digital twins, and BDT for BBB DT discovery science
Moderator: Michelle Bennett
Comment
The question of whether people want to be monitored or surveilled references the fundamental ethical issue of Benefit: how do we ensure that someone benefits from the use of their digital twin, or from their surveillance? The commercial success of digital twins requires us to make sure that benefit is realized.
Collecting and sharing of data and knowledge need to be tied to each person's self-interest; everyone wants to know how their data will be used, and all future uses will need to be explained; so much data is already being collected every day.
Consider watching the documentary on the James Webb Telescope - 900+ points of failure described
Track failure to understand failure, track performance
AI trustworthiness
Point where the observation is no longer consistent with the model predictions
"Digital threads" - look at the Digital Twin Consortium website
Pain points for Biomedical digital twins
Pain Point: Evolved systems are a coupled, tangled hot mess with substantial heterogeneity.
Key Question: How are biomedical digital twins (BDTs) different from Digital Twins (DTs) commonly used in other fields, including manufacturing and climate science?
Evolved vs manufactured vs non-evolved physical domains
Within organism and across population heterogeneity
Pain Point: Anna Karenina principle: all healthy subsystems are alike; every non-healthy subsystem diverges in its own way.
Key Question: Do we need baseline models of a “healthy population” to build models of disease? What are the limitations if we don’t? How hard is it to build good baseline models?
Baseline models > easier to connect to first principles
Baseline models > easier to collect data in some cases
Pain Point: We are all similar to each other but similar in different ways.
Key Question: How should cohort-wide information inform the BDT?
Parameterize baseline at the level of individual (interpolation)
Default priors
Cohort informed independent priors
Cohort informed manifold
Model disease with in silico perturbations (extrapolation)
Important that the models are connected to mechanistic components
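The cohort-informed-prior idea listed above can be sketched in a few lines. This is a minimal illustration only, assuming a single scalar parameter, a conjugate normal-normal model, and invented numbers; none of the variable names or values come from the meeting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: one scalar model parameter (e.g., a clearance rate)
# estimated for 200 individuals.
cohort_params = rng.normal(loc=5.0, scale=1.2, size=200)

# Cohort-informed prior: center the individual's prior on cohort statistics
# instead of an uninformative default.
prior_mean = cohort_params.mean()
prior_sd = cohort_params.std(ddof=1)

# A few sparse, noisy observations of the individual's parameter.
obs = np.array([6.1, 5.8, 6.4])
obs_sd = 0.5  # assumed measurement noise

# Conjugate normal-normal update: posterior for the individual's parameter
# pulls the cohort prior toward the individual's own data.
post_var = 1.0 / (1.0 / prior_sd**2 + len(obs) / obs_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + obs.sum() / obs_sd**2)

print(f"cohort prior:         {prior_mean:.2f} +/- {prior_sd:.2f}")
print(f"individual posterior: {post_mean:.2f} +/- {post_var**0.5:.2f}")
```

The same structure extends to the other options in the list: a default prior drops the cohort statistics, and a cohort-informed manifold replaces the single normal with a learned low-dimensional distribution.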
Pain Point: Humans are not good at evaluating 20,000 data points. Patients and clinicians are humans. Ergo…
Key Question: What is the interface between a BDT and clinical decision maker? What if the underlying model is not biologically/clinically grounded? What if the model is grounded but practically irreducible to simple causes/explanations?
Is simulation an explanation? Is UQ helpful or not for interpretability? Is interpretability different for interpolation vs. extrapolation components?
Pain Point: Humans don’t like to be poked repeatedly (a literal pain point).
Key Question: What are the best modalities for digital twins? Imaging, liquid biopsy, wearables? Is there a way to quantify information value versus collection cost? Are there creative ways to augment limited data? How can we deal with sparse, asynchronous, multi-modal, multi-scale measurements in a combined way? Baseline vs Longitudinal vs Real Time.
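One simple baseline for combining sparse, asynchronous, multi-modal measurements is to align each stream onto a shared time grid with last-observation-carried-forward. The sketch below uses two invented streams (a wearable heart-rate signal and a lab value); the streams, grid spacing, and numbers are illustrative assumptions, not data from the meeting.

```python
import numpy as np

# Hypothetical sparse, asynchronous streams: (timestamp_hours, value) pairs.
wearable = [(0.0, 72.0), (1.5, 80.0), (6.0, 75.0)]   # heart rate (bpm)
lab      = [(2.0, 1.1), (24.0, 1.3)]                  # creatinine (mg/dL)

def locf(stream, grid):
    """Carry the last observation forward onto a common time grid.
    Grid points before the first observation get NaN."""
    times = np.array([t for t, _ in stream])
    vals = np.array([v for _, v in stream])
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.where(idx >= 0, vals[np.clip(idx, 0, None)], np.nan)

grid = np.arange(0.0, 25.0, 6.0)   # shared 6-hour grid
hr = locf(wearable, grid)
cr = locf(lab, grid)

# Combined multi-modal state: one row per grid time, one column per modality.
fused = np.column_stack([grid, hr, cr])
print(fused)
```

The NaN at early grid points makes the missing-data problem explicit; more sophisticated fusion (interpolation, state-space models) would replace the carry-forward step rather than change the alignment structure.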
Pain Point: NimBio - Most people want to achieve the public good by sharing their personal biomedical data in a well protected, fair and transparent system. But a small fraction of the population can generate substantial obstacles if they are sharing-averse.
Key Question: What is the best way to advocate for the best technical, social, legal, and political solutions that balance public good with personal privacy? Is there an ethical way to nudge the system: opt-out default vs opt-in default? (e.g., Denmark)
Pain Point: We have been in the same room for three days, but we still provide divergent descriptions of common terms every time we start explaining our point.
Key Question: How can we establish common terminology and aspects to facilitate communication and cooperation in the field?
Common examples and use-case definitions described using common language.