Charge to Speakers:
Review Pre-Meeting Exercise #1
Consider the following critical questions when conceptualizing a BDT in your presentations:
- What is the problem you are trying to solve?
- How will a BDT solve this problem?
- What makes your BDT realistic?
- What enables your BDT to change and mature over time?
- What is the physical asset?
- What is the virtual asset?
- What information will be passed between the physical and virtual assets in real-time?
- What are the ethical issues that must be considered in developing and using this BDT?
Speaker Bios:
Carlos F. Lopez, Presentation Title: "Building around a model: Cancer Digital Twin (CDT)"
Gary An, Presentation Title: "BDT Project: the Critical Illness Digital Twin"
Presentations:
Gary An, Presentation "BDT Project: The Critical Illness Digital Twin" (PDF): https://www.imagwiki.nibib.nih.gov/sites/default/files/BDT%20Project_Gary%20An_Recording%201_0.pdf
Gary An, Expanded version of the talk above with one slide for each component in the NASEM Loop. Includes references on individual aspects for background: https://www.imagwiki.nibib.nih.gov/sites/default/files/2024%20IMAG%20Loop%20Example%20CIDT%20V7_Recording%201.pdf
Bill Lytton Slides: https://docs.google.com/presentation/d/1XoTXNxCySeCVZBdkE17oWmwd7xM2qSWymH3ieMhpams
Peter Hunter Slides, PDF: https://www.imagwiki.nibib.nih.gov/sites/default/files/BDT%20Project_Slides%20PJH%20Monday%20morning%20v1.pdf
Peter Hunter Slides, PPT: https://www.imagwiki.nibib.nih.gov/sites/default/files/webform/abstract_submission_2023/92106/BDT%20Project_Slides%20PJH%20Monday%20morning%20v1.ppt
Carlos Lopez Slides: https://www.imagwiki.nibib.nih.gov/sites/default/files/%20Building%20around%20a%20model_%20The%20Cancer%20Digital%20Twin-2-2_0.pdf
A limitation for a biomedical DT appears to be efficient data capture and data integration. Can the panel comment on how the recent advances in AI are changing (accelerating) the time horizon for making a biomedical digital twin a reality? Thanks.
Q to Gary An:
How to ensure the model projections over a horizon are 'valid' in real-time? Would the model be able to "place an order" for a test/assay, or at least recommend one in a preemptive manner rather than at the frequency the current practice would follow?
When considering 'human in the loop', is it the current hierarchy of clinical decision-makers giving care, or does the modeler also enter this loop in real-time?
Great question. I think the key to this is recognizing that, given the stochasticity of these systems, there is a "forecast cone" within which the future state of the system can be found, and this cone is updated as new data arrive (see hurricane forecasting). The width of that forecast cone helps define the "real-time" interval needed for updating. As currently proposed, the system is continuously updated every 6 hours (an interval based on clinical operations and the dynamics of the model), so the reset occurs on a regular, ongoing basis. The principle is that of "steering" the system given continuous feedback from the physical asset.
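To make the steering idea concrete, here is a minimal sketch of a forecast cone with periodic re-anchoring. The toy stochastic model, the observation noise, and the specific intervals are illustrative assumptions, not the actual CIDT implementation.

```python
# Sketch: ensemble forecast cone, re-anchored to the physical asset every
# 6 hours. The dynamics below are a hypothetical placeholder model.
import numpy as np

rng = np.random.default_rng(0)

def step_model(state, dt=1.0):
    """Toy stochastic patient-state model (hypothetical): drift plus noise."""
    drift = -0.05 * state                        # relaxation toward baseline
    noise = 0.3 * rng.standard_normal(state.shape)
    return state + dt * (drift + noise)

def forecast_cone(state, horizon=6, n_traj=200):
    """Run an ensemble forward to get a cone of plausible future states."""
    ens = np.full(n_traj, state, dtype=float)
    cone = []
    for _ in range(horizon):
        ens = step_model(ens)
        lo, hi = np.percentile(ens, [5, 95])     # width of the cone
        cone.append((lo, hi))
    return cone

# Steering loop: every 6 hours, re-anchor the virtual asset to the measured
# state of the physical asset and re-issue the forecast.
true_state = 2.0
for update in range(4):                          # four 6-hour cycles = 24 h
    measured = true_state + 0.1 * rng.standard_normal()   # bedside data feed
    cone = forecast_cone(measured)
    print(f"t={6 * update:>2}h  measured={measured:+.2f}  "
          f"6h cone=({cone[-1][0]:+.2f}, {cone[-1][1]:+.2f})")
    for _ in range(6):                           # physical asset evolves meanwhile
        true_state = step_model(np.array([true_state]))[0]
```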
The modeler is not in the loop in real time; the practitioner can interpret the recommended actions in real time. Since the AI is trained offline, it is computationally lightweight.
A key point is the validation method, and what the actual goal is with respect to validation.
Q to all the panel speakers:
How "real-time" is the real-time bidirectional interaction? Do we have best practices to evaluate if the interaction is happening at the appropriate time scales?
How to formally/systematically compare different DTs for the same problem that may take different approaches and incorporate different time scales?
Hi Raj, this is a very challenging question to answer because it depends on the system and the question being asked. The challenge is identifying the dynamic range of key events that lead to a desirable outcome. For example, in cellular responses to stress, the early changes (from milliseconds to possibly an hour) greatly impact the later stages of the cell population response. The best practice I know of for evaluating these interactions is carrying out time-course measurements at high resolution to identify the key events in a given process.
Comparing digital twins will also depend on whether the digital twins are actually answering the same question, or designed for the same purpose.
The real-time loop is clinically and computationally tractable: the decision interval is 4-6 hours, the turnaround of the assay is 90 minutes, and since the AI is pretrained, the recommended actions are available almost instantly.
Fair enough.
If there are 3-5 models, each with slightly different time scales, is there a systematic way to compare which one is better in the loop? Yes, one can compare the output quality, prediction uncertainty, etc. Ultimately, is there a way to rank the options for use in the clinic?
Why should the clinic implement GaryAnModel 1.0 versus RajModel_2.0 or BillModel_10.5?
I'm trying to see if "real-time bidirectional interaction" can be used as a criterion to sort and choose among options, or even to combine them in an ensemble of experts?
I think there is some minimum performance threshold for the virtual asset and for the method used for control discovery with that system. This can be evaluated in simulated populations, ideally with cross-method models (e.g., a control policy should work for simulated patients generated by a model other than the one the policy was discovered with) - THIS MIGHT BE A GOOD COLLABORATIVE PROJECT.
However, ultimately it is the entire cyberphysical system that needs to be tested, with the performance of a particular model/virtual asset, expanded to a BDT, evaluated in that context.
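A minimal sketch of the cross-method idea above: a control policy tuned against one simulation model is scored on virtual patients generated by a structurally different model. Both toy models, the threshold policy, and the failure criterion are hypothetical placeholders, not any panelist's actual method.

```python
# Sketch: cross-method validation of a control policy on a held-out model.
import numpy as np

rng = np.random.default_rng(1)

def model_a(state, dose):
    """Toy 'training' model of a disease marker (hypothetical)."""
    return state + 0.20 * state - 0.5 * dose + 0.1 * rng.standard_normal()

def model_b(state, dose):
    """Structurally different 'held-out' model (hypothetical)."""
    return state + 0.25 * state / (1.0 + 0.1 * state) - 0.4 * dose \
        + 0.2 * rng.standard_normal()

def policy(state, threshold=1.5):
    """Simple threshold policy: treat when the marker exceeds the threshold."""
    return 1.0 if state > threshold else 0.0

def evaluate(model, n_patients=500, n_steps=24, fail_level=5.0):
    """Fraction of virtual patients kept below the failure level."""
    survived = 0
    for _ in range(n_patients):
        s = rng.uniform(0.5, 2.5)            # heterogeneous initial conditions
        for _ in range(n_steps):
            s = model(s, policy(s))
        survived += s < fail_level
    return survived / n_patients

print("policy on model A (in-sample):   ", evaluate(model_a))
print("policy on model B (cross-method):", evaluate(model_b))
```

A large drop in performance from model A to model B would suggest the policy has overfit the idiosyncrasies of the model it was discovered with.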
In terms of multiscale virtual assets, very often the data from the real-world asset are not as granular as the finest resolution of the model. How would you address the degrees of freedom between the model's finer resolution and the scale at which data are available? Also, a key aspect is that the data need to be collected in a non-destructive or updatable way; for example, what do you do regarding the molecular profile of a tumor once the tumor is gone (or distributed as metastases)?
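One common way to frame the resolution mismatch (a sketch under stated assumptions, not a prescription from the panel) is to let the data enter only through a coarse observation operator and carry the unresolved fine-scale degrees of freedom as an ensemble that the coarse data cannot pin down. All quantities below are illustrative.

```python
# Sketch: coarse observation operator over a fine-resolution virtual asset.
import numpy as np

rng = np.random.default_rng(3)

N_CELLS = 1000                         # fine resolution of the virtual asset

def observe(fine_state):
    """Coarse observation operator: the data only resolve the tissue average."""
    return fine_state.mean()

# Ensemble of fine-scale states; the population-level location parameter is
# the only thing the coarse data can constrain.
ensemble = [rng.normal(loc=rng.normal(2.0, 0.3), scale=1.0, size=N_CELLS)
            for _ in range(300)]

# Crude rejection step standing in for a proper Bayesian update: keep only
# members whose coarse projection matches the measured average.
measured_avg, tolerance = 2.0, 0.15
kept = [e for e in ensemble if abs(observe(e) - measured_avg) < tolerance]

print(f"{len(kept)} of {len(ensemble)} fine-scale states fit the coarse data")
# Many distinct fine-scale configurations survive; those residual degrees of
# freedom are exactly the uncertainty the coarse data cannot remove.
```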
One key issue that hasn't been discussed much is how quickly we can determine when the underlying patient state has changed relative to the initial calibration, given the available data feeds. If we start with a well individual, we want to be able to tell when parameters or model structure have changed (early diagnosis); for a patient under treatment, we want to know whether the treatment is working or not. Such anomaly detection is not easy given the limited time resolution of measurements and the intrinsic stochasticity of a single real-world time series.
This is a great point, and it goes to the importance of defining "fit for purpose"; part of this determination is identifying when the clinical situation is outside the specified "fit-for-purpose" envelope of the BDT.
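For the change-detection question above, one standard option is a sequential test such as a two-sided CUSUM on the incoming data stream, flagging when the patient has drifted away from the state used for the initial calibration. The baseline, noise level, and thresholds below are illustrative assumptions, not clinically validated values.

```python
# Sketch: two-sided CUSUM change detection on a noisy physiological series.
import numpy as np

rng = np.random.default_rng(2)

# Simulated data feed: calibrated baseline for 48 samples, then a shift.
baseline, sigma = 1.0, 0.25
series = np.concatenate([
    baseline + sigma * rng.standard_normal(48),
    baseline + 0.6 + sigma * rng.standard_normal(48),   # regime change
])

def cusum(x, mean, sigma, k=0.5, h=5.0):
    """Return the first index where a two-sided CUSUM statistic exceeds h."""
    gp = gn = 0.0
    for i, xi in enumerate(x):
        z = (xi - mean) / sigma
        gp = max(0.0, gp + z - k)       # upward drift statistic
        gn = max(0.0, gn - z - k)       # downward drift statistic
        if gp > h or gn > h:
            return i
    return None

alarm = cusum(series, baseline, sigma)
print("change detected at sample:", alarm)   # expected shortly after 48
```

The tradeoff between detection delay and false alarms is set by k and h, which is where the limited sampling rate and intrinsic stochasticity mentioned above bite hardest.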
Peter, Carlos — in both cases, virtual assets really tease apart biology. Do you see clinical decision making as actionable now, or aspirational for the future? If aspirational, would it be possible to replace the clinical decision with an actionable biological question/decision, and would that change the physical asset or other components of the loop?
Hannah, thanks for your question! I think my question back would be: "actionable now" for what purpose? I think, for example, we have enough data in some domains (e.g., specific breast and lung cancers) where we could use modeling and analysis technologies to make actionable clinical decisions with respect to, e.g., chemo vs. no chemo, or radiation vs. targeted therapy. I get very inspired by work like this:
https://www.cell.com/cancer-cell/fulltext/S1535-6108(22)00441-X
However, I think a causal framework that enables us to make decisions taking into account multiple questions, alternatives, etc., and then formulating probabilistic outcomes based on potential treatments is still missing, and that is where I think digital twins could contribute.
Do any of the panelists believe that there is a significant ethical issue involved with digital twins in clinical practice? Privacy was mentioned but waved off as covered by existing law and standards. What about consequences of wrong decisions? Who bears responsibility? What happens to a system in which mistakes are made? How does one separate heterogeneity based on disease process versus base physiology or lifestyle?
Dr. Lopez touched on the interaction between scientists and physicians to make BDTs successful. And I believe Dr. Chung also touched on how physicians need to have confidence in the models in order to use them for guiding intervention.
As part of the Teams aspect of this meeting, I'm hoping this will be discussed in more detail. Considering that physicians and clinicians are likely going to be THE key end-users who significantly affect the acceptance and application of BDTs to clinical practice, we need to think ahead and act on how the modelers and scientists work directly with physicians/clinicians to ensure early buy-in and acceptance of BDTs to augment clinical practice.
It has been my experience that end-users need to be involved early in the process in order to accept the modeling and to make biomedical modeling successful.
Presentation "BDT Project: the Critical Illness Digital Twin" in PDF form