3. Examples of deep learning in action
DL is already being used in several ways to improve care for cardiovascular imaging patients, and this is likely just the beginning.
Examples of DL in action include:
- Image planning on cardiac MR images
- Guiding novice users through image acquisition
- Reducing noise on CT images
- Improving image reconstruction
- Generating reports on diastolic function based on Doppler images
- Predicting adverse events based on single-photon emission CT stress polar perfusion maps
Wehbe and colleagues emphasized that the U.S. Food and Drug Administration regulates AI models intended for use in a clinical setting. Many of the DL models discussed in both cardiology and radiology, however, are still works in progress under active development.
“Notably, few systems have been evaluated prospectively in a clinical trial setting, and even fewer have been evaluated at multiple clinical sites encompassing a diverse cohort representative of the target population,” the authors wrote. “For an algorithm to be accepted for clinical use, it should be prospectively validated at multiple institutions using imaging data from multiple vendors to ensure the robustness of its predictions.”
4. Many challenges in deep learning remain
When developing and implementing any type of AI model, it can be remarkably difficult to explain the “how.” If a DL model determines that a patient faces a heightened risk of cardiovascular disease, for example, the physician may wonder how the model reached that conclusion. Explainable DL, the authors noted, is one area researchers are currently exploring to address this ongoing issue.
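One common family of explainability techniques highlights which parts of an input image most influence a model’s prediction. As a minimal illustration (not the authors’ method), the sketch below computes a finite-difference saliency map for a hypothetical `toy_model` stand-in; a real workflow would use gradient-based attribution on a trained network instead.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a trained DL risk classifier:
    # here the "risk score" is just a weighted sum of pixel values.
    weights = np.linspace(0.0, 1.0, image.size).reshape(image.shape)
    return float((image * weights).sum())

def saliency_map(model, image, eps=1e-3):
    """Finite-difference saliency: perturb each pixel slightly and
    measure how much the model's output changes. Pixels with large
    values are the ones the prediction depends on most."""
    base = model(image)
    sal = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += eps
        sal[idx] = abs(model(perturbed) - base) / eps
    return sal

image = np.random.rand(4, 4)
sal = saliency_map(toy_model, image)
# In this toy setup, higher-weighted pixels come out as more salient.
```

Overlaying such a map on the original scan lets a physician see, at least approximately, where the model was “looking” when it produced its prediction.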
Another big challenge when working with DL is accounting for potential bias.
“Investigators must remain vigilant in ensuring that any algorithm leads to equitable and unprejudiced predictions through careful choices about data curation, algorithm design and model evaluation,” the authors wrote.
On a related note, returning to the topic of validation, some DL models perform well on one data set but poorly on another. Other models work at first, but their performance degrades over time as the data they encounter shifts, a phenomenon known as data set drift.
“Data set drift can be combatted via continual learning or model updates,” the authors wrote. “However, there is no substitute for repeated validation of algorithms in external, never-before-seen data sets and continuous auditing of model performance.”
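The continuous auditing the authors describe can be as simple as tracking a model’s accuracy on each new batch of cases and flagging drops below a validated baseline. A minimal sketch, with a hypothetical `tolerance` threshold and simulated monthly figures chosen purely for illustration:

```python
def audit_performance(baseline_acc, batch_accuracies, tolerance=0.05):
    """Flag batches whose accuracy falls more than `tolerance` below
    the validated baseline -- a crude trigger for re-validation or a
    model update."""
    flags = []
    for i, acc in enumerate(batch_accuracies):
        if baseline_acc - acc > tolerance:
            flags.append(i)
    return flags

# Baseline accuracy from prospective validation, then monthly audits
# in which performance gradually degrades (simulated drift).
baseline = 0.90
monthly = [0.89, 0.88, 0.86, 0.83, 0.80]
flagged = audit_performance(baseline, monthly)
# Months 3 and 4 fall more than 5 points below baseline.
```

In practice the flagged batches would prompt exactly what the authors recommend: repeated validation on external data and, if needed, a model update.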
These highlights represent just the tip of the iceberg; the full analysis is available in JAMA Cardiology.