1:40 PM - 1:45 PM:
Welcome and introduction
Chair: Olivier Colliot, Ctr. National de la Recherche Scientifique (France)
How can we ensure that medical imaging AI is trustworthy? How can we know whether the results presented in research papers can be trusted? These are fundamental questions that the field of medical imaging needs to address if it is to deliver true advances in clinical care. At this workshop, three world-class experts will address the following key issues in trustworthy AI: 1) the selection of appropriate metrics for validating AI algorithms in medical imaging; 2) the reproducibility of results, which is a cornerstone of the scientific method; and 3) the risk that AI algorithms are contaminated by bias, making their outputs potentially misleading and their application unfair. The talks will be followed by a discussion of learnings and a Q&A session involving the three speakers.
1:45 PM - 2:10 PM:
Metrics Reloaded
Lena Maier-Hein, German Cancer Research Center (DKFZ) (Germany) and Heidelberg Univ. (Germany)
In this presentation, I will introduce Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics in biomedical image analysis. I will first demonstrate how the choice of performance metrics often fails to align with clinicians' interests, thereby impeding scientific progress and the practical application of machine learning (ML) algorithms. I will then present the Metrics Reloaded framework, which has been developed by an international expert consortium to overcome these issues.
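To make the metric-selection problem concrete, the minimal sketch below (not taken from the talk or from the Metrics Reloaded framework; the image size and lesion are invented) illustrates how an ill-chosen metric such as pixel accuracy can look excellent on a class-imbalanced segmentation task while a more task-appropriate metric such as the Dice score exposes the failure.

```python
# Minimal sketch: pixel accuracy vs. Dice score on a tiny, hypothetical lesion.
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Hypothetical 100x100 image with a 3x3 lesion that the model misses entirely.
target = np.zeros((100, 100), dtype=bool)
target[50:53, 50:53] = True
pred = np.zeros((100, 100), dtype=bool)  # model predicts "no lesion" everywhere

accuracy = (pred == target).mean()
print(f"Pixel accuracy: {accuracy:.3f}")                  # ~0.999, looks excellent
print(f"Dice score:     {dice_score(pred, target):.3f}")  # 0.000, reveals the miss
```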
2:10 PM - 2:35 PM:
Reproducibility in medical image processing
Ninon Burgos, CNRS, Paris Brain Institute (France)
Reproducibility is a key component of science, as the replication of findings is the process by which they become knowledge. It is widely considered that many fields of science, including medical imaging, are undergoing a reproducibility crisis. This has led to the publication of various guidelines to improve research reproducibility, such as the checklists proposed by NeurIPS or MICCAI. In this presentation, after introducing various concepts related to reproducibility, we will discuss the fact that using such checklists is not straightforward, both for the authors who have to complete them and for the reviewers who have to comment on them, and that their use remains to be clarified for reproducible research to grow in the medical image processing field.
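As a small illustration of the kind of bookkeeping such checklists ask authors to report (not taken from the talk; the seed value and output filename are invented), the sketch below fixes random seeds and records the software environment alongside an experiment's results.

```python
# Minimal reproducibility sketch: fix seeds and record the environment.
import json
import platform
import random
import sys

import numpy as np

SEED = 42  # hypothetical fixed seed reported with the experiment

def set_seed(seed: int) -> None:
    """Seed the standard-library and NumPy random number generators."""
    random.seed(seed)
    np.random.seed(seed)
    # Deep learning frameworks have their own generators (and GPU determinism
    # flags) that must be configured separately.

def environment_record(seed: int) -> dict:
    """Capture basic environment information to store alongside the results."""
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "numpy": np.__version__,
        "seed": seed,
    }

if __name__ == "__main__":
    set_seed(SEED)
    with open("run_metadata.json", "w") as f:
        json.dump(environment_record(SEED), f, indent=2)
```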
2:35 PM - 3:00 PM:
Bias in radiology artificial intelligence: causes, evaluation, and mitigation
Imon Banerjee, Department of Radiology, Mayo Clinic (United States)
Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various minority subgroups limit the usefulness of AI in improving patients' lives. AI has been shown to have a remarkable ability to detect protected attributes such as age, sex, and race, while the same models demonstrate bias against historically underserved subgroups defined by these attributes in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased toward certain subgroups even when protected attributes are not explicitly used as inputs into the model. This talk will discuss various types of bias arising from shortcut learning that may occur at different phases of AI model development. I will also summarize current techniques for mitigating bias during preprocessing (data-centric solutions), model development (computational solutions), and postprocessing (recalibration of learning).
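As one hedged illustration of a postprocessing-style mitigation (not taken from the talk; the subgroups, scores, and target sensitivity are invented), the sketch below recalibrates the decision threshold per subgroup on validation data so that sensitivity meets a common target.

```python
# Minimal sketch: per-subgroup threshold recalibration to match a target sensitivity.
import numpy as np

def threshold_for_target_tpr(scores: np.ndarray, labels: np.ndarray,
                             target_tpr: float = 0.85) -> float:
    """Return the largest threshold keeping at least target_tpr of positives."""
    pos = np.sort(scores[labels == 1])
    if pos.size == 0:
        return 0.5  # fallback when no positives are available
    k = int(np.floor((1.0 - target_tpr) * pos.size))
    return float(pos[k])

rng = np.random.default_rng(0)

# Hypothetical validation data: subgroup B's positive scores are systematically
# lower, so a single global threshold would give it a lower sensitivity.
val = {
    "A": (rng.normal(0.70, 0.1, 200), np.ones(200, dtype=int)),
    "B": (rng.normal(0.55, 0.1, 200), np.ones(200, dtype=int)),
}

thresholds = {g: threshold_for_target_tpr(s, y) for g, (s, y) in val.items()}
print(thresholds)  # a lower threshold for subgroup B equalizes sensitivity
```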
3:00 PM - 3:20 PM:
Panel Discussion
A discussion of learnings and live Q&A involving the three speakers.
This technical event is part of the Image Processing conference.
Event Details
FORMAT: Speaker presentations followed by a panel discussion with live audience Q&A.
MENU: Coffee, decaf, and tea will be available outside the presentation room.
SETUP: Assortment of classroom- and theater-style seating.