Established initially as a Digital Radiology conference, the Picture Archiving and Communication Systems (PACS) Conference now celebrates its 42nd anniversary as part of the SPIE Medical Imaging symposium, which began 50 years ago as the Application of Optical Instrumentation in Medicine. Today, Imaging Informatics has evolved into a multidisciplinary field catering not only to radiologists but also to patients, healthy individuals, caregivers, and other healthcare professionals. To improve healthcare outcomes, current research and applications emphasize developing and evaluating new and efficient means of managing the ever-increasing volumes of imaging data. In the era of advanced imaging modalities and artificial intelligence (AI) technologies, there is a need for interoperable data workflows, sophisticated visualizations, and accurate as well as reliable analytics. Moreover, the growing demand for personalized, precision medicine necessitates integrating clinical information, molecular and genomic data, imaging results, and pathology. Imaging informatics supports new technical solutions that can accommodate the needs of all imaging-rich clinical specialties, not just radiology, while ensuring patient data remains accessible to healthcare professionals and secure from malicious actors. This track focuses on new methods for obtaining, transferring, managing, analyzing, and visualizing data for healthcare, research, and educational applications. The conference will include, but is not limited to, the following themes:

Generative AI for imaging informatics
Recently, generative AI models have emerged as powerful tools for enhancing the capabilities of imaging informatics. These models support advanced analytics, enable the creation of realistic images, and facilitate personalized medical treatments. This theme focuses on the application of generative AI models to manage, analyze, and visualize imaging informatics data in healthcare, research, and related applications.
Big data and analytics
Data management for precision medicine
Precision medicine involves using detailed, patient-specific molecular, genetic and imaging information to diagnose and categorize disease, then guide treatment to improve clinical outcome. The combination of medical imaging, genomics, and molecular markers presents a new opportunity to link observations made at the cellular or molecular levels to macroscopic phenotypes but also requires novel strategies for data management.
Advanced visualization and 3D printing
Three-dimensional (3D) image data can be visualized and manipulated in an authentic 3D space. Augmented reality (AR) technology overlays medical imaging data onto the real world, while virtual reality (VR) creates fully immersive environments. 3D printing provides innovative methods for simulating procedures and prototyping customized medical devices. Submissions on new technical milestones or clinical applications involving the use of 3D objects, both physical and virtual, are encouraged.
Mobile imaging and image-based vital data
The field of medical imaging is expanding into the mobile domain, incorporating small devices that integrate ultrasound, endoscopy, fundoscopy, and other imaging modalities. With the rapid increase in smartphone usage for medical imaging, managing mobile data presents unique challenges compared to traditional clinic-based diagnostics, including differences in data type, quality, and management. Additionally, vital signs are now being extracted from photographs and videos captured on mobile devices, necessitating effective data management, integration, and evaluation strategies.
Digital operating theatres
The DICOM standard has expanded its interoperability scope to encompass use cases in radiation oncology, optical imaging, and digital pathology. Additionally, imaging has enabled the digital operating room through surgical PACS. Current research focuses on closing the gaps between diagnostic and interventional imaging.
PACS-integration of multimedia data
Data generated from various clinical specialties such as cardiology, pathology, ophthalmology, dermatology, and surgery is extensively utilized for screening, diagnosis, treatment, and rehabilitation, often becoming part of electronic medical records. The acquisition methods, workflows, and management of these non-radiological images differ from radiology-centric imaging practices.
Images for education
Emerging technologies have enabled a new generation of learners to engage with interconnected, immersive, and self-directed environments. Modern patients can also take a more active role in their medical decisions by reviewing their own medical imaging and diagnostic reports, facilitated by technology providing timely and clear explanations. This theme welcomes research and technical breakthroughs in the education of students, patients, and other healthcare professionals.

POSTER AWARD
The Imaging Informatics for Healthcare, Research, and Applications conference will feature a cum laude poster award. All posters displayed at the meeting for this conference are eligible. Posters will be evaluated at the meeting by the awards committee. The winners will be announced during the conference, and the presenting authors will be recognized and awarded certificates.

Conference 12931

Imaging Informatics for Healthcare, Research, and Applications

19 - 21 February 2024 | Palm 8
  • SPIE Medical Imaging Awards and Plenary
  • Monday Morning Keynotes
  • 1: Large Language Models
  • 2: Augmentation of Clinical Workflow
  • Tuesday Morning Keynotes
  • 3: Informatics Data Management
  • 4: Generative AI - GANs and Flow Models
  • 5: Generative AI - Diffusion Models
  • Live Demonstrations Workshop
  • Publicly Available Data and Tools to Promote Machine Learning: an interactive workshop exploring MIDRC
  • 3D Printing and Imaging: Enabling Innovation in Personalized Medicine, Device Development, and System Components
  • Establishing Ground Truth in Radiology and Pathology
  • Wednesday Morning Keynotes
  • 6: AI/ML for Data Analytics
  • 7: AI/ML for Precision Medicine
  • 8: Multimodal and Hybrid Data/Systems
  • Posters - Wednesday
  • Thursday Morning Keynotes
SPIE Medical Imaging Awards and Plenary
18 February 2024 • 5:30 PM - 6:30 PM PST | Town & Country A

5:30 PM - 5:40 PM:
Symposium Chair Welcome and Best Student Paper Award announcement
First-place winner and runner-up of the Robert F. Wagner All-Conference Best Student Paper Award
Sponsored by:
MIPS and SPIE

5:40 PM - 5:45 PM:
New SPIE Fellow acknowledgments
Each year, SPIE promotes Members as new Fellows of the Society. Join us as we recognize colleagues of the medical imaging community who have been selected.

5:45 PM - 5:50 PM:
SPIE Harrison H. Barrett Award in Medical Imaging
Presented in recognition of outstanding accomplishments in medical imaging
12927-501
Author(s): Cynthia Rudin, Duke Univ. (United States)
18 February 2024 • 5:50 PM - 6:30 PM PST | Town & Country A
We would like deep learning systems to aid radiologists with difficult decisions instead of replacing them with inscrutable black boxes. "Explaining" the black boxes with XAI tools is problematic, particularly in medical imaging where the explanations from XAI tools are inconsistent and unreliable. Instead of explaining the black boxes, we can replace them with interpretable deep learning models that explain their reasoning processes in ways that people can understand. One popular interpretable deep learning approach uses case-based reasoning, where an algorithm compares a new test case to similar cases from the past ("this looks like that"), and a decision is made based on the comparisons. Radiologists often use this kind of reasoning process themselves when evaluating a new challenging test case. In this talk, I will demonstrate interpretable machine learning techniques through applications to mammography and EEG analysis.
Monday Morning Keynotes
19 February 2024 • 8:30 AM - 10:45 AM PST | Town & Country A
Session Chairs: Weijie Chen, U.S. Food and Drug Administration (United States), Susan M. Astley, The Univ. of Manchester (United Kingdom), Jeffrey Harold Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), Maryam E. Rettmann, Mayo Clinic (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:45 AM:
Award announcements

  • Robert F. Wagner Award finalists for conferences 12927, 12928, and 12932
  • Computer-Aided Diagnosis Best Paper Award
  • Image-Guided Procedures, Robotic Interventions, and Modeling student paper and Young Scientist Award

12927-403
Author(s): Curtis P. Langlotz, Stanford Univ. School of Medicine (United States)
19 February 2024 • 8:45 AM - 9:25 AM PST | Town & Country A
Artificial intelligence and machine learning (AI/ML) are powerful tools for building computer vision systems that support the work of clinicians, leading to high interest and explosive growth in the use of these methods to analyze clinical images. These promising AI techniques create computer vision systems that perform some image interpretation tasks at the level of expert radiologists. In radiology, deep learning methods have been developed for image reconstruction, imaging quality assurance, imaging triage, computer-aided detection, computer-aided classification, and radiology documentation. The resulting computer vision systems are being implemented now and have the potential to provide real-time assistance, thereby reducing diagnostic errors, improving patient outcomes, and reducing costs. We will show examples of real-world AI applications that indicate how AI will change the practice of medicine and illustrate the breakthroughs, setbacks, and lessons learned that are relevant to medical imaging.
12928-404
Author(s): Lena Maier-Hein, Deutsches Krebsforschungszentrum (Germany)
19 February 2024 • 9:25 AM - 10:05 AM PST | Town & Country A
Intelligent medical systems adept at acquiring and analyzing sensor data to offer context-sensitive support are at the forefront of modern healthcare. However, various factors, often not immediately apparent, significantly hinder the effective integration of contemporary machine learning research into clinical practice. Using insights from my own research team and extensive international collaborations, I will delve into prevalent issues in current medical imaging practices and offer potential remedies. My talk will highlight the vital importance of challenging every aspect of the medical imaging pipeline from the image modalities applied to the validation methodology, ensuring that intelligent imaging systems are primed for genuine clinical implementation.
12932-408
Author(s): Nebojsa Duric, Univ. of Rochester (United States), Delphinus Medical Technologies (United States)
19 February 2024 • 10:05 AM - 10:45 AM PST | Town & Country A
Ultrasound tomography (UST) is an emerging medical imaging modality that has found its way into clinical practice after its recent approval by the Food and Drug Administration (FDA) for breast cancer screening and diagnostics. As an active area of research, UST also shows promise for applications in brain, prostate, limb and even whole-body imaging. The historical development of ultrasound tomography is rooted in the idea of “seeing with sound” and the concept borrows heavily from diverse disciplines, including oceanography, geophysics and astrophysics. A brief history of the field is provided, followed by a review of current reconstruction methods and imaging examples. Unlike other imaging modalities, ultrasound tomography in medicine is computationally bounded. Its future advancement is discussed from the perspective of ever-increasing computational power and Moore's Law.
Session 1: Large Language Models
19 February 2024 • 2:00 PM - 3:00 PM PST | Palm 8
Session Chairs: Jessica Fried, Michigan Medicine (United States), Hiroyuki Yoshida, Massachusetts General Hospital (United States)
12931-1
Author(s): Parth Dodhia, Stanford Univ. (United States); Shawn Meepagala, Howard Univ. College of Medicine (United States); Golnaz Moallem, Stanford Univ. School of Medicine (United States); Daniel Rubin, Stanford Univ. (United States); Gregory Bean, Mirabela Rusu, Stanford Univ. School of Medicine (United States)
19 February 2024 • 2:00 PM - 2:20 PM PST | Palm 8
A wealth of medical knowledge is used to make clinical decisions, yet treatment and disease outcomes are challenging to assess without clinical trials. However, clinical trials take time, are expensive, and are impossible to perform for every decision. One approach to systematically assessing treatment outcomes involves the retrospective analysis of clinical notes, e.g., radiology and pathology reports, which can benefit from automated parsing to provide systematic frameworks for extracting outcome information. In this study, we used a large language model, ChatGPT (GPT-3.5), to parse radiology and pathology reports and extract information related to response to neoadjuvant chemotherapy in patients with breast cancer. The large language model achieved sensitivities of 84-94% in parsing radiology reports, but lower performance, 72-87%, on the pathology reports. Our study illustrates the complexity of decision-making and outcome prediction using radiology images.
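
As an illustrative sketch of this kind of report parsing (not the authors' actual prompt, model settings, or pipeline), the snippet below asks an LLM for a single response-category label; the report string and label set are invented for demonstration:

# Hedged sketch: extracting treatment-response information from a report
# with an LLM. Prompt, labels, and report text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report = "MRI breast: marked decrease in the index mass, now 6 mm (was 23 mm)."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You extract response to neoadjuvant chemotherapy from "
                    "breast imaging reports. Answer with exactly one label: "
                    "complete_response, partial_response, stable_disease, "
                    "or progression."},
        {"role": "user", "content": report},
    ],
    temperature=0,  # deterministic output for reproducible parsing
)
print(response.choices[0].message.content)
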
12931-2
Author(s): Manish Sharma, Imaging Endpoints (India); Samira Farough, Andre Burkett, Imaging Endpoints, LLC (United States); Jerome Prasanth, Imaging Endpoints (India); Nabil El-Shafeey, Dominic Zygadlo, Chera Dunn, Ron Korn, Imaging Endpoints, LLC (United States)
19 February 2024 • 2:20 PM - 2:40 PM PST | Palm 8
A prospective study with Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 criteria was conducted to assess the agreement of adjudication in a blinded independent central imaging review setup. Adjudicator comments were analyzed using sentiment analysis with Python and ChatGPT. Out of 100 subjects, only 4 had different outcomes when compared to the gold standard as determined by an experienced radiologist. The sensitivity, specificity, and accuracy of the ChatGPT-supported algorithm were calculated as 0.857, 1.0, and 0.96, respectively. The NLP capabilities of ChatGPT allowed for accurate classification of sentiment in adjudicator comments, enabling comparison with adjudicator assessments and quality monitoring in a central read setup, thereby improving study outcomes. This demonstrates the impressive performance of ChatGPT in this novel context for clinical trials and opens up possibilities for using LLMs such as ChatGPT to review clinical data and medical text in clinical trials.
12931-4
Author(s): Alexander Shieh, Iwan Paolucci, Jessica Albuquerque, Kristy Brock, Bruno Odisio, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
19 February 2024 • 2:40 PM - 3:00 PM PST | Palm 8
Percutaneous liver ablation is a minimally invasive procedure to treat liver tumors. Measuring treatment-specific outcomes is important for liver ablation research. However, the cancer surveillance imaging reports after the procedure can be numerous and challenging to read, and annotated data is limited in this setting. In this work, we used Llama 2 to automatically extract critical findings from real-world diagnostic imaging reports without the need to train a new information extraction model. A dataset of 87 reports from 13 patients was used to benchmark the capability of Llama 2 for extracting and classifying cancer progression findings. We asked Llama 2 to determine whether there is cancer progression within a given report and then to classify progression findings into local tumor progression (LTP), intrahepatic progression (IHP), and extrahepatic progression (EHP). Llama 2 achieved solid performance for detecting progression at the study level, with a precision of 0.91, a recall of 0.96, and a specificity of 0.84. However, the classification of progression into LTP, IHP, and EHP still needs to be improved.
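
The study-level numbers quoted above follow from a standard binary confusion matrix; a minimal sketch (with made-up counts, not the study's data) of how precision, recall, and specificity are computed:

# Hedged sketch: study-level metrics from a binary confusion matrix.
# The counts are placeholders for illustration only.
def precision_recall_specificity(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)      # also called sensitivity
    specificity = tn / (tn + fp)
    return precision, recall, specificity

print(precision_recall_specificity(tp=48, fp=5, fn=2, tn=26))
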
Session 2: Augmentation of Clinical Workflow
19 February 2024 • 3:30 PM - 5:30 PM PST | Palm 8
Session Chairs: Thomas Martin Deserno, Peter L. Reichertz Institut für Medizinische Informatik (Germany), Shandong Wu, Univ. of Pittsburgh (United States)
12931-5
Author(s): Anshu Goyal, Seena Pourzand, Sachi Pawooskar-Almeida, The Univ. of Southern California (United States); M. Wasil Wahi-Anwar, Univ. of California, Los Angeles (United States); Galo A. Apolo Aroca, Roski Eye Institute (United States); Benjamin Y. Xu, Roski Eye Institute, The Univ. of Southern California (United States); Matthew S. Brown, Univ. of California, Los Angeles (United States); Brent J. Liu, The Univ. of Southern California (United States)
19 February 2024 • 3:30 PM - 3:50 PM PST | Palm 8
Primary angle closure disease (PACD) is a leading cause of global vision impairment and requires early detection and intervention. The AS-OCT imaging modality, which captures anterior eye structures, faces slow adoption due to non-standardized analysis. However, by leveraging an imaging-informatics approach, we have developed a streamlined system integrated with the SimpleMind Cognitive AI framework for efficient AS-OCT analysis. The system incorporates deep learning to detect critical regions such as the cornea and iris in order to identify the scleral spur and measure the anterior chamber angle, allowing for an improved and potentially earlier diagnosis of PACD.
12931-6
Author(s): Renaid B. Kim, Jordan Breyfogle, Benjamin Mervak, Lubomir Hadjiiski, Kenneth Buckwalter, Jessica Fried, Univ. of Michigan (United States)
19 February 2024 • 3:50 PM - 4:10 PM PST | Palm 8
The “STAT” designation for imaging studies is often overused and misused, obscuring the actual urgency of an imaging order. Not all STAT imaging orders are equally urgent, so we created semi-supervised machine learning models to classify the actual urgency of more than 20,000 STAT imaging studies, even though only a small subset of the training set was manually labeled by experts.
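
A minimal sketch of the semi-supervised setup described above, assuming scikit-learn's self-training wrapper and random stand-in features (the authors' actual features, labels, and models are not shown here):

# Hedged sketch: unlabeled STAT orders carry label -1 and are pseudo-labeled
# by self-training around a small expert-labeled subset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 10))       # stand-in for encoded order metadata
y = np.full(20000, -1)                 # -1 marks unlabeled studies
y[:500] = rng.integers(0, 2, 500)      # small expert-labeled subset

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict(X[:5]))            # predicted urgency classes
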
12931-7
Author(s): Trent Benedick, Jorge Galvan, Wejdan Alshehri, Ryan Fue, The Univ. of Southern California (United States); John Asbach, Univ. at Buffalo (United States), Roswell Park Comprehensive Cancer Ctr. (United States); Anh H. Le, Cedars-Sinai Medical Ctr. (United States), Roswell Park Comprehensive Cancer Ctr. (United States); Brent Liu, The Univ. of Southern California (United States)
19 February 2024 • 4:10 PM - 4:30 PM PST | Palm 8
We propose a web-based informatics application to introduce data-driven methods and uniformity into radiation therapy treatment plan creation. Our informatics platform uses a quantitative analysis of the tumor anatomy of historical cases to identify useful and relevant templates for treatment plan creation and benchmarking. The system is based on a database of historical DICOM RT objects and the quantitative anatomical features we extract from them. Our semi-self-supervised similarity matching algorithm generates an index of historical best-practice cases in our database based on the earth mover's distance between their quantitative features and those of a current case undergoing treatment planning. A clinician can use the identified historical best-practice cases, indexed by their similarity to the current case, as templates and references during treatment planning. Our system aims to introduce uniformity and data-driven methods into radiation therapy treatment planning.
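
A minimal sketch of the ranking idea, assuming SciPy's one-dimensional Wasserstein (earth mover's) distance over placeholder feature vectors rather than real DICOM RT anatomy features:

# Hedged sketch: score each historical case by the earth mover's distance
# between its feature distribution and the current case's, then rank.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
current_case = rng.normal(size=256)                    # quantitative features
historical = {f"case_{i}": rng.normal(size=256) for i in range(100)}

ranked = sorted(historical,
                key=lambda k: wasserstein_distance(current_case, historical[k]))
print(ranked[:5])   # most similar historical plans to consult as templates
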
12931-8
Author(s): Michal Brzus, Cavan J. Riley, Joel Bruss, Aaron Boes, The Univ. of Iowa (United States); Randall Jones, Bot Image, Inc. (United States); Hans J. Johnson, The Univ. of Iowa (United States)
19 February 2024 • 4:30 PM - 4:50 PM PST | Palm 8
Healthcare increasingly employs artificial intelligence-based tools and relies on large, multi-site datasets. However, processing medical imaging is difficult due to the challenges of automatically identifying DICOM data. Therefore, we developed a robust, easily extensible classification framework employing machine learning paradigms and a dataset of over 250,000 scans from over 50 sites. Using well-defined DICOM fields, our tool classifies image modality and acquisition plane with over 99% accuracy. We designed this framework for fast and easy adaptation and integration into medical imaging workflows, enabling the automated processing of vast amounts of data for medical imaging applications.
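
The DICOM-field idea can be illustrated with a simple rule-based sketch (the authors use trained machine learning models; the file path below is hypothetical):

# Hedged sketch: modality from the Modality tag; acquisition plane from the
# slice normal implied by ImageOrientationPatient.
import numpy as np
import pydicom

def classify(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    row, col = np.array(ds.ImageOrientationPatient, dtype=float).reshape(2, 3)
    normal = np.abs(np.cross(row, col))        # slice normal direction
    plane = ["sagittal", "coronal", "axial"][int(normal.argmax())]
    return ds.Modality, plane

print(classify("scan_0001.dcm"))  # hypothetical file path
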
12931-9
Author(s): Leon Wiese, Lennart Hinz, Eduard Reithmeier, Leibniz Univ. Hannover (Germany)
19 February 2024 • 4:50 PM - 5:10 PM PST | Palm 8
The persistent shortage of qualified personnel in operating theatres increases the remaining staff's workload. This increased burden can result in substantial complications during surgical procedures. To address this issue, this research paper introduces a comprehensive operating theatre system that offers real-time monitoring of all surgical instruments in the operating theatre. The foundation of this endeavor is a neural network trained to classify and identify eight distinct instruments belonging to four distinct surgical instrument groups. A novel aspect of this study lies in the approach taken to select and generate the training and validation datasets, which consist of synthetically generated image data rather than real data. Three virtual scenes were designed to serve as the backdrop for a generation algorithm that randomly positions the instruments within these scenes, producing annotated rendered RGB images. To assess the efficacy of this approach, a separate real dataset was also created for testing the neural network.
12931-10
Author(s): Elizabeth McAvoy, Matthias Wilms, Nils D. Forkert, Univ. of Calgary (Canada)
19 February 2024 • 5:10 PM - 5:30 PM PST | Palm 8
The brain age gap is a promising biomarker for the assessment of overall brain health and disease prediction. The aim of this work was to identify group-level and individual variability in the brain age gap biomarker in healthy subjects and patients with neurological and cardiovascular disease. Therefore, a deep convolutional neural network was trained to predict brain age using T1-weighted MRI datasets from healthy UK Biobank subjects, and then used to calculate the brain age gap in independent healthy subjects and in patients with neurological and cardiovascular diseases, revealing quantitative and qualitative saliency differences between the groups.
Tuesday Morning Keynotes
20 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Barjor Sohrab Gimi, Univ. of Massachusetts Chan Medical School (United States), Andrzej Krol, SUNY Upstate Medical Univ. (United States), John E. Tomaszewski, Univ. at Buffalo (United States), Aaron D. Ward, Western Univ. (Canada)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Robert F. Wagner Award finalists announcements for conferences 12930 and 12933

12930-406
Author(s): Frank J. Rybicki, The Univ. of Arizona College of Medicine (United States); Leonid Chepelev, University of Toronto (Canada)
20 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Medical imaging data is often used inefficiently, and this happens most often for patients with abnormal imaging who require a complex procedure. This talk describes those patients, how their medical images undergo Computer-Aided Design (CAD), and how that data reaches a Final Anatomic Realization, one of which is 3D printing. This talk highlights “keys” to “unlock” value when this clinical service line is performed in a hospital, and the critical role of medical engineers who work in that infrastructure. The talk includes medical oversight, data generation, and a specific, durable definition of value for medical devices that are 3D printed in hospitals. The talk also covers clinical appropriateness and how it folds into accreditation for 3D printing in hospitals and universities. Up-to-the-minute information on reimbursement for medical devices that are 3D printed in hospitals and universities will be presented.
12933-409
Author(s): David S. McClintock, Mayo Clinic (United States)
20 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
The use of artificial intelligence in healthcare is a current hot topic, generating considerable excitement and pushing multiple academic medical centers, startups, and large established IT companies to dive into clinical AI model development. However, amidst that excitement, one topic that has lacked direction is how healthcare institutions, from small clinical practices to large health systems, should approach AI model deployment. Unlike typical healthcare IT implementations, AI models have special considerations that must be addressed prior to moving them into clinical practice. This talk will review the major issues surrounding clinical AI implementations and present a scalable, standardized, and responsible framework for AI deployment that can be adopted by many different healthcare organizations, departments, and functional areas.
Session 3: Informatics Data Management
20 February 2024 • 10:30 AM - 12:30 PM PST | Palm 8
Session Chairs: Brent J. Liu, The Univ. of Southern California (United States), Anh H. Le, Roswell Park Comprehensive Cancer Ctr. (United States)
12931-11
Author(s): Joshua Genender, John M. Hoffman, Univ. of California, Los Angeles (United States), UCLA Ctr. for Computer Vision & Imaging Biomarkers (United States)
20 February 2024 • 10:30 AM - 10:50 AM PST | Palm 8
SF-CT-PD is a single-file derivative of the DICOM-CT-PD file format for CT projection data that stores projections within a single DICOM file, stores pixel data detector-row-major, and stores projection-specific parameters as ordered tables within the DICOM header. We compared the performance of SF-CT-PD against DICOM-CT-PD in read speed, disk usage, and network transfer. Cases were sampled from TCIA’s “LDCT-and-Projection-data” dataset and encoded into DICOM-CT-PD and SF-CT-PD representations. Read tests were conducted for four programming languages on hard-disk and solid-state drives. Rsync-based network transfer analysis measured the Ethernet throughput for each format. Accuracy of the implementation was confirmed by analyzing reconstructions and transfer file-checksums for each format. SF-CT-PD was generally more performant in read operations and disk usage. Network throughput was equivalent between the formats, with file-checksums indicating file integrity. Reconstruction accuracy was supported by difference image agreements. SF-CT-PD represents a viable extension of DICOM-CT-PD where a single file is preferred.
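
A minimal sketch of one such read-speed comparison, assuming pydicom and hypothetical file paths (a real benchmark would repeat runs, flush caches, and cover several languages as described):

# Hedged sketch: time reading many per-projection DICOM-CT-PD files versus
# one SF-CT-PD file.
import glob
import time
import pydicom

def timed_read(paths):
    t0 = time.perf_counter()
    for p in paths:
        pydicom.dcmread(p)
    return time.perf_counter() - t0

multi = timed_read(glob.glob("dicom_ct_pd/*.dcm"))   # one file per projection
single = timed_read(["sf_ct_pd.dcm"])                # all projections in one
print(f"DICOM-CT-PD: {multi:.2f}s  SF-CT-PD: {single:.2f}s")
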
12931-12
Author(s): Jiren Li, Dooman Arefan, Matthew Pease, Chang Liu, David O. Okonkwo, Shandong Wu, Univ. of Pittsburgh (United States)
20 February 2024 • 10:50 AM - 11:10 AM PST | Palm 8
Data shift is a prevalent concern in the field of machine learning. It occurs when the distribution of the training data for a machine learning model differs from the distribution of the data the model encounters in the operational environment. This issue becomes even more significant in the field of medical imaging due to the multitude of factors that can contribute to data shifts. It is crucial for medical machine learning systems to identify and address these issues. In this paper, we present an automated pipeline designed to identify and alleviate certain types of data shift issues in medical imaging datasets. We intentionally introduce data shift into our dataset to assess and address it with our workflow. More specifically, we employ Principal Components Analysis (PCA) and Maximum Mean Discrepancy (MMD) algorithms to detect data shift between the training and test datasets. We utilize simple image processing techniques, including data augmentation and image registration methods, to individually and collectively mitigate data shift issues and assess their impacts. Results show that our proposed method is effective in detecting data shift and significantly improving model performance.
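
A minimal sketch of MMD-based shift detection with a Gaussian kernel, using random stand-in feature matrices (bandwidth, features, and any decision threshold are illustrative, not the authors' settings):

# Hedged sketch: squared MMD between training and test feature matrices;
# a larger value suggests a stronger distribution shift.
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, size=(200, 16))
test = rng.normal(0.5, 1.0, size=(200, 16))   # intentionally shifted

print(f"MMD^2 = {mmd_rbf(train, test):.4f}")
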
12931-13
Author(s): Da He, Shanghai Jiao Tong Univ. (China); Jayaram K. Udupa, Yubing Tong, Drew Torigian, Univ. of Pennsylvania (United States)
20 February 2024 • 11:10 AM - 11:30 AM PST | Palm 8
It is uncertain how well current segmentation metrics can reflect the clinical value of auto-segmentation. Five segmentation metrics (Dice Coefficient, Hausdorff Distance, surface Dice Coefficient, Added Path Length, and Mendability Index) are applied to predict the human effort required to manually mend auto-segmentations. 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions are adopted to train and test linear and support vector regression models. The five-fold cross-validation experiments demonstrate that segmentation metric-based mending effort prediction is feasible and that a variant of the Mendability Index generally performs better than other metrics, showing greater potential for guiding auto-segmentation techniques.
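
For reference, two of the metrics named above on toy 2D binary masks (a sketch only; the study uses 3D CT segmentations):

# Hedged sketch: Dice coefficient and symmetric Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((64, 64), dtype=bool)
auto[10:40, 10:40] = True    # auto-segmentation
gt = np.zeros((64, 64), dtype=bool)
gt[12:42, 12:42] = True      # manually mended reference mask

pa, pb = np.argwhere(auto), np.argwhere(gt)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
print(f"Dice={dice(auto, gt):.3f}  Hausdorff={hd:.1f}")
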
12931-14
Author(s): Zhiyun Xue, Tochi Oguguo, U.S. National Library of Medicine (United States); Kelly Yu, National Cancer Institute (United States); Tseng-Cheng Chen, National Taiwan Univ. Hospital (Taiwan); Chun-Hung Hua, China Medical Univ. Hospital (Taiwan); Chung Jan Kang, Chih-Yen Chien, Chang Gung Memorial Hospital (Taiwan); Ming-Hsui Tsai, China Medical Univ. Hospital (Taiwan); Cheng-Ping Wang, National Taiwan Univ. Hospital (Taiwan); Anil K. Chaturvedi, National Cancer Institute (United States); Sameer Antani, U.S. National Library of Medicine (United States)
20 February 2024 • 11:30 AM - 11:50 AM PST | Palm 8
In this paper, we present our work towards preparing and making image data ready for the development of AI-driven approaches to oral cancer study. We focus on 1) cleaning the image data and 2) extracting the annotation information. Both are important for subsequent ML algorithm development and data analysis. We describe the challenges and issues, present the approaches we used to clean and standardize the image data and extract labelling information, and discuss ways to increase the efficiency of the process and the lessons we have learned. Research ideas on automating the process with ML-driven techniques are also discussed.
12931-15
Author(s): Muhammad Asad, Zayed Univ. (United Arab Emirates); Yading Yuan, Columbia Univ. Irving Medical Ctr. (United States)
20 February 2024 • 11:50 AM - 12:10 PM PST | Palm 8
Federated learning (FL) in medical imaging leverages data from multiple hospitals, improving machine learning model generalization. Though it allows local data retention, enhancing patient privacy, there are concerns about data privacy during model parameter exchanges between clients and servers. Another significant challenge in FL is the communication overhead, especially with complex models like transformers or when working with geographically diverse collaborators on global health issues. Our solution, FeSEC, addresses these problems. We introduce a sparse compression algorithm for efficient communication across distributed hospitals. Furthermore, we combine homomorphic encryption with differential privacy to ensure data security during model exchanges. Trials on COVID-19 detection demonstrate that FeSEC boosts the accuracy and privacy preservation of FL models, outperforming FedAvg while reducing communication costs.
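
Of the ingredients above, the sparse-compression step is the simplest to sketch: transmit only the top-k update entries by magnitude as (index, value) pairs. The encryption and differential-privacy noise are omitted here, and the "gradient" is random:

# Hedged sketch: top-k sparsification of a model update for cheap transfer.
import numpy as np

def topk_compress(update, ratio=0.01):
    k = max(1, int(update.size * ratio))
    idx = np.argpartition(np.abs(update), -k)[-k:]   # largest-magnitude entries
    return idx, update[idx]

def decompress(idx, vals, size):
    out = np.zeros(size)
    out[idx] = vals
    return out

update = np.random.default_rng(3).normal(size=100_000)  # stand-in gradient
idx, vals = topk_compress(update)
restored = decompress(idx, vals, update.size)
print(idx.size, "of", update.size, "entries transmitted")
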
12931-16
Author(s): Haoqi Wang, Mia Markey, The Univ. of Texas at Austin (United States); Nicolas Pannetier, Mehul Sampat, Flywheel (United States)
20 February 2024 • 12:10 PM - 12:30 PM PST | Palm 8
Fully automated organ segmentation on computed tomography (CT) images is an important first step in many medical applications. Many different deep learning (DL) based approaches are being actively developed for this task. However, it is often hard to make a direct comparison between two segmentation methods. We tested the performance of two deep learning based CT organ segmentation algorithms on an independent dataset of CT scans. Algorithm-1 performed much better on the segmentation of the kidney. In contrast, the performance of the two algorithms was similar for the segmentation of the liver. For both algorithms, a number of outliers (Dice <= 0.5) were observed. With limited scan acquisition parameters, it was not possible to diagnose the root cause of the outliers. This work highlights the urgent need for complete DICOM header curation. The DICOM header information could help pinpoint the scanning parameters that lead to segmentation errors by deep learning algorithms.
Session 4: Generative AI - GANs and Flow Models
20 February 2024 • 2:00 PM - 3:20 PM PST | Palm 8
Session Chairs: Shandong Wu, Univ. of Pittsburgh (United States), Thomas Martin Deserno, Peter L. Reichertz Institut für Medizinische Informatik (Germany)
12931-17
Author(s): Ridhi Arora, Tamerlan Mustafaev, Juhun Lee, Univ. of Pittsburgh (United States)
20 February 2024 • 2:00 PM - 2:20 PM PST | Palm 8
Our previous study introduced a conditional GAN (CGAN) model to generate simulated mammograms based on paired mammogram images. While CGAN produced realistic mammograms, it often introduced artifacts. This study delved into the connection between patient factors and these artifacts, focusing on age and breast density. Four types of artifacts were identified, occurring in 69% of the cases. The findings suggest that certain artifacts relate to patient characteristics, with denser breasts showing breast boundary and nipple-areola artifacts and older women with less dense breasts having more black spot artifacts.
12931-18
Author(s): Imane Chafi, Farida Cheriet, Polytechnique Montréal (Canada); Julia Keren, Intellident Dentaire Inc. (Canada); Ying Zhang, François Guibault, Polytechnique Montréal (Canada)
20 February 2024 • 2:20 PM - 2:40 PM PST | Palm 8
The operation of shaping dental preparations from diseased teeth is important in validating the retention and viability of a dental crown. A crown bottom is the first point of contact between the tooth preparation and the new crown. This crown bottom must fit the preparation shape within a set of parameters. One of the many challenges in crown bottom generation using machine learning is its consistency, due to the unique nature of human teeth. Our goal is to investigate and compare the use of automated Python-based VTK geometric solutions, as well as a deep learning GAN solution, that allow crown bottoms to be created in an automated manner, which in turn improves the crown generation pipeline.
12931-19
Author(s): Rie Tachibana, National Institute of Technology, Oshima College (Japan), Massachusetts General Hospital (United States), Harvard Medical School (United States); Janne J. Näppi, Toru Hironaka, Massachusetts General Hospital (United States), Harvard Medical School (United States); Masaki Okamoto, Boston Medical Sciences Co., Ltd. (Japan); Hiroyuki Yoshida, Massachusetts General Hospital (United States), Harvard Medical School (United States)
20 February 2024 • 2:40 PM - 3:00 PM PST | Palm 8
We developed a novel 3D generative artificial intelligence (AI) method for performing electronic cleansing (EC) in CT colonography (CTC). In the method, a 3D transformer-based UNet is used as a generator to map an uncleansed CTC image volume directly into a virtually cleansed CTC image volume. A 3D-PatchGAN is used as a discriminator to provide feedback to the generator to improve the quality of the EC images generated by the 3D transformer-based UNet. The EC method was trained by use of the CTC image volumes of an anthropomorphic phantom that was filled partially with a mixture of foodstuff and an iodinated contrast agent. The CTC image volume of the corresponding empty phantom was used as the reference standard. The quality of the EC images was tested visually with six clinical CTC test cases and quantitatively based on a phantom test set of 100 unseen sample image volumes. The image quality of EC was compared with that of a previous 3D GAN-based EC method. Our preliminary results indicate that the 3D generative AI-based EC method outperforms our previous 3D GAN-based EC method and thus can provide an effective EC method for CTC.
12931-20
Author(s): Erik Y. Ohara, Finn Vamosi, Harsh Patil, Vibujithan Vigneshwaran, Matthias Wilms, Nils D. Forkert, Univ. of Calgary (Canada)
20 February 2024 • 3:00 PM - 3:20 PM PST | Palm 8
Deep learning techniques for medical image analysis have reached comparable performance to medical experts, but the lack of reliable explainability leads to limited adoption in clinical routine. Explainable AI has emerged to address this issue, with causal generative techniques standing out by incorporating a causal perspective into deep learning models. However, their use cases have been limited to 2D images and tabulated data. To overcome this, we propose a novel method to expand a causal generative framework to handle volumetric 3D images, which was validated through analyzing the effect of brain aging using 40,196 MRI datasets from the UK Biobank study. Our proposed technique paves the way for future 3D causal generative models in medical image analysis.
Session 5: Generative AI - Diffusion Models
20 February 2024 • 3:50 PM - 5:30 PM PST | Palm 8
Session Chairs: Hiroyuki Yoshida, Massachusetts General Hospital (United States), Thomas Martin Deserno, Peter L. Reichertz Institut für Medizinische Informatik (Germany)
12931-21
CANCELED: Reconstruction of stimulus images to human brain using latent diffusion, U-net and CLIP models
Author(s): Talha Minhas, Bahria Univ. (Pakistan)
20 February 2024 • 3:50 PM - 4:10 PM PST | Palm 8
In this paper, following the work of Yu Takagi and Shinji Nishimoto [21] on high-resolution image reconstruction from human brain activity with latent diffusion models, we attempted to train a mapping from latent stimulus images to the corresponding brain fMRIs. However, considering the training part somewhat questionable, we decided to present the work without the training. We incorporated a technique called latent diffusion models (LDMs) to reconstruct the stimulus images. Our study uses the Natural Scenes Dataset (NSD) and its functional magnetic resonance images (fMRIs) to investigate the brain and the stimulus images. We visualize the early and late visual cortex regions of the brain of subject 1 and draw the outputs of nine stimulus images. This has been performed using latent diffusion, U-Net, and Contrastive Language-Image Pretraining (CLIP) models, demonstrating the effectiveness of our approach and validating the approach of [21], which is different and unique from a training perspective.
12931-22
Author(s): Nghi C. Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C. Wagner, James M. Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J. Hatanpaa, Toral R. Patel, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States); Matthew D. Lee, Rajan Jain, NYU Grossman School of Medicine (United States); Richard J. Bruce, Univ. of Wisconsin-Madison (United States); Marco C. Pinho, Ananth J. Madhuranthakam, Joseph A. Maldjian, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
20 February 2024 • 4:10 PM - 4:30 PM PST | Palm 8
Data scarcity and data imbalance are two major challenges in training deep learning models on medical images, such as brain tumor MRI data. The recent advancements in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor MRI scans. This work focused on adapting the 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data with a tumor mask as the condition. Unlike existing works that focused on generating either 2D multi-contrast or 3D single-contrast MRI samples, our models generate multi-contrast 3D MRI samples. We also integrated a conditional module within the UNet backbone of the diffusion model to capture the semantic class-dependent data distribution driven by the provided tumor mask to generate MRI brain tumor samples based on a specific brain tumor mask. Our models were able to generate high-quality 3D multi-contrast brain tumor MRI samples with the tumor location aligned by the input condition mask. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models involving brain tumor MRI data.
12931-23
Author(s): Siddhartha Kapuria, The Univ. of Texas at Austin (United States); Naruhiko Ikoma, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Sandeep Chinchali, Farshid Alambeigi, The Univ. of Texas at Austin (United States)
20 February 2024 • 4:30 PM - 4:50 PM PST | Palm 8
Colorectal cancer (CRC) is a significant health concern, with early detection crucial for reducing mortality rates. However, identifying pre-cancerous polyps during colonoscopy can be challenging, leading to high miss rates. To address this, we developed a novel Vision-Based Tactile Sensor, Hysense, which generates detailed textural images of CRC polyps. Our key contribution in this work is using generative diffusion models to enhance these images. This approach improves the training of machine learning algorithms for polyp classification, even on limited real data. We assessed various classifier models using synthetic images, highlighting the potential of synthetic data in enhancing diagnostic accuracy. Our work offers a promising avenue for improving CRC detection and showcases the benefits of generative models in healthcare, particularly when real data is scarce.
12931-24
Author(s): Andrew S. Yu, Case Western Reserve Univ. (United States), Cleveland Clinic (United States); Richard Lartey, William Holden, Ahmet Hakan Ok, Jeehun Kim, Carl Winalski, Naveen Subhas, Cleveland Clinic (United States); Vipin Chaudhary, Case Western Reserve Univ. (United States); Xiaojuan Li, Cleveland Clinic (United States)
20 February 2024 • 4:50 PM - 5:10 PM PST | Palm 8
Lesion segmentation in medical images, particularly for bone marrow edema-like lesions (BMEL) in the knee, faces challenges due to imbalanced data and unreliable annotations. This study proposes an unsupervised deep learning method that uses conditional diffusion models coupled with inpainting tasks for anomaly detection. This approach facilitates the detection and segmentation of BMEL without human intervention, achieving a Dice testing score of 0.2223. BMEL has been shown to correlate with and predict disease progression in several musculoskeletal disorders, such as osteoarthritis. With further development, our method has great potential for fully automated analysis of BMEL to improve early diagnosis and prognosis for musculoskeletal disorders. The framework can be extended to other lesion detection tasks as well.
12931-25
Author(s): Shaoyan Pan, Emory Univ. (United States); Shao-Yuan Lo, Honda Research Institute USA, Inc. (United States); Chih-Wei Chang, Ella Salari, Emory Univ. (United States); Tonghe Wang, Memorial Sloan-Kettering Cancer Ctr. (United States); Justin Roper, Aparna H. Kesarwala, Xiaofeng Yang, Emory Univ. (United States)
20 February 2024 • 5:10 PM - 5:30 PM PST | Palm 8
This study introduces an innovative 3D diffusion-based model, the geometric-integrated X-ray to CT denoising diffusion probabilistic model (X-CT-DDPM), to enhance radiation therapy planning. The X-CT-DDPM efficiently converts single anterior-posterior X-ray projections into synthetic CT (sCT) volumes, reducing the need for co-registration and minimizing patient radiation exposure. Leveraging non-equilibrium thermodynamics, the model provides stable training and high-detail outputs. In contrast to traditional DDPMs, the X-CT-DDPM uses dual DDPMs, one for generating full-view X-ray projections and another for volumetric CT reconstruction, which mutually enhance each other's learning ability to improve the quality of sCT images while preserving anatomical accuracy. Tailored to individual patients, the model is trained on paired projection-4DCT data spanning 10 patients' respiratory cycles, allowing for the generation of CT scans at new temporal frames. The X-CT-DDPM demonstrates superior performance, with an MAE of 36.36±4.04, PSNR of 32.83±0.98, SSIM of 0.91±0.01, and FID of 0.32±0.02, outperforming DDPM, conditional GAN, and V-Net baselines on institutional datasets.
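
The image-quality metrics reported above can be computed as in the sketch below, with random stand-in volumes rather than real CT/sCT pairs (FID, which needs a feature extractor, is omitted):

# Hedged sketch: MAE, PSNR, and SSIM between a volume and its synthetic copy.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(4)
ct = rng.uniform(-1000, 1000, size=(32, 64, 64))    # stand-in CT (HU)
sct = ct + rng.normal(0, 20, size=ct.shape)         # stand-in synthetic CT

mae = np.abs(ct - sct).mean()
psnr = peak_signal_noise_ratio(ct, sct, data_range=2000)
ssim = structural_similarity(ct, sct, data_range=2000)
print(f"MAE={mae:.1f}  PSNR={psnr:.2f}  SSIM={ssim:.3f}")
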
Live Demonstrations Workshop
20 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Session Chairs: Karen Drukker, The Univ. of Chicago (United States), Lubomir M. Hadjiiski, Michigan Medicine (United States), Horst Karl Hahn, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)


The goal of this workshop is to provide a forum for systems and algorithms developers to show off their creations. The intent is for the audience to be inspired to conduct derivative research, for the demonstrators to receive feedback and find new collaborators, and for all to learn about the rapidly evolving field of medical imaging. The Live Demonstrations Workshop invites participation from all attendees of the SPIE Medical Imaging symposium. Workshop demonstrations include samples, systems, and software demonstrations that depict the implementation, operation, and utility of cutting-edge as well as mature research. Having an accepted SPIE Medical Imaging paper is not required for giving a live demonstration. A certificate of merit and $500 award will be presented to one demonstration considered to be of exceptional interest.

Award sponsored by:
Siemens Healthineers
Publicly Available Data and Tools to Promote Machine Learning: an interactive workshop exploring MIDRC
20 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Session Chairs: Weijie Chen, U.S. Food and Drug Administration (United States), Heather M. Whitney, The Univ. of Chicago (United States)

View Full Details: spie.org/midrc-workshop

In this interactive hands-on workshop exploring the infrastructure and resources of the Medical Imaging and Data Resource Center (MIDRC), we will introduce the data collection and curation methods; the user portal for accessing data including tools designed specifically for cohort building; system evaluation approaches and tools including evaluation metric selection; as well as tools for diversity assessment, identification and mitigation of bias and more.

3D Printing and Imaging: Enabling Innovation in Personalized Medicine, Device Development, and System Components
20 February 2024 • 5:30 PM - 7:00 PM PST | Town & Country A

Join this technical event on 3D printing and imaging and hear how it is enabling innovation in personalized medicine, device development, and system components. This special session consists of four presentations followed by a panel discussion.

12925-801
20 February 2024 • 5:30 PM - 5:32 PM PST | Town & Country A
12925-168
Author(s): Jonathan M. Morris, Mayo Clinic (United States)
20 February 2024 • 5:32 PM - 5:49 PM PST | Town & Country A
Over the last 17 years, Mayo Clinic has become a world leader in a field now known as point-of-care manufacturing. Using additive manufacturing, we focus on five distinct areas: first, creating diagnostic anatomic models for each surgical subspecialty from diagnostic imaging; second, manufacturing custom patient-specific sterilizable osteotomy cutting guides for ENT, OMFS, orthopedics, and orthopedic oncology; third, building simulators and phantoms using a combination of special effects and 3D printing; fourth, using 3D printers to create custom phantoms, phantom holders, and other custom medical devices such as pediatric airway devices, proton beam appliances, and custom jigs and fixtures for the department and hospital; and finally, transferring the digital twins into virtual and augmented reality environments for preoperative surgical planning and immersive educational tools. Mayo Clinic has scaled this endeavor to all three of its main campuses, including Jacksonville, FL, and Scottsdale, AZ, to complete the enterprise approach. In doing so, we have been able to advance patient care locally as well as assist in building the national IT, regulatory, billing, RSNA 3D SIG, and quality control infrastructure needed to assure scaling across this and other countries.
12925-169
Author(s): Alex Grenning, The Jacobs Institute, Inc. (United States)
20 February 2024 • 5:49 PM - 6:06 PM PST | Town & Country A
Engineers often design products to work within available test fixtures. Test fixtures define the goal posts for device evaluation. It is important for test fixtures to accurately represent the critical conditions of operation and be supported with justification for regulatory review. This presentation explores the role of 3D printing and model design workflows in producing anatomically relevant test fixtures that can be used to guide, and more importantly accelerate, the device development process. The Jacobs Institute is a one-of-a-kind, not-for-profit vascular medical technology innovation center. The Jacobs Institute's mission is to accelerate the development of next-generation technologies in vascular medicine through collisions of physicians, engineers, entrepreneurs, and industry.
12925-171
Author(s): Devarsh Vyas, Benjamin Johnson, 3D Systems Corp. (United States)
20 February 2024 • 6:06 PM - 6:23 PM PST | Town & Country A
AM is already a widely adopted manufacturing process used to produce millions of medical devices and healthcare products every year. Common uses for AM include the printing of patient-specific surgical implants and instruments derived from imaging data and the manufacturing of metal implants and instruments with features that are impossible to fabricate using traditional subtractive manufacturing. In addition to reducing costs, patient-specific solutions, such as customized surgical plans and personalized implants, aim to improve surgical outcomes for patients and give surgeons more options and more flexibility in the OR. With advancements in technology, implants are 3D printed in various materials and at various manufacturing sites, including at the point of care. 3D Systems collaborates with medical device manufacturers and health systems to develop personalized health solutions and is the leader in the design, manufacturing, and regulatory approval of 3D printed patient-specific implants in various materials and technologies.
12925-170
Author(s): David W. Holdsworth, Western Univ. (Canada)
20 February 2024 • 6:23 PM - 6:40 PM PST | Town & Country A
Additive manufacturing has not realized its full potential due to a number of factors. The regulatory environment for medical devices is geared towards conventional manufacturing techniques, making it challenging to certify 3D-printed devices. Additive manufacturing may still not be competitive when scaled up for industrial production, and the need for post-processing may negate some of the benefits. The promises and the challenges of additive manufacturing will be explored in the context of medical imaging device design.
12925-802
20 February 2024 • 6:40 PM - 7:00 PM PST | Town & Country A
Establishing Ground Truth in Radiology and Pathology
20 February 2024 • 5:30 PM - 7:00 PM PST | Palm 4

Establishing ground truth is one of the hardest parts of an imaging experiment. In this workshop we'll talk to pathologists, radiologists, an imaging scientist (who evaluates imaging technology without ground truth), and an FDA staff scientist (who creates his own ground truth) to determine how best to deal with this difficult problem.

Moderator:
Ronald Summers, National Institutes of Health (United States)

Panelists:
Richard Levenson, Univ. of California, Davis (United States)
Steven Horii, Univ. of Pennsylvania (United States)
Abhinav Kumar Jha, Washington Univ., St. Louis (United States)
Miguel Lago, U.S. Food and Drug Administration (United States)

Wednesday Morning Keynotes
21 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Claudia R. Mello-Thoms, Univ. Iowa Carver College of Medicine (United States), Hiroyuki Yoshida, Massachusetts General Hospital (United States), Shandong Wu, Univ. of Pittsburgh (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Robert F. Wagner Award finalists announcements for conferences 12929 and 12931

12929-405
Author(s): Robert M. Nishikawa, Univ. of Pittsburgh (United States)
21 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Image perception, observer performance, and technology assessment have driven many of the advances in breast imaging. Technology assessment metrics were used to develop mammography systems, first with screen-film mammography and then with digital mammography and digital breast tomosynthesis. To optimize these systems clinically, it became necessary to determine what type of information a radiologist needed to make a correct diagnosis. Image perception studies helped define what spatial frequencies were necessary to detect breast cancers and how different sources of noise affected detectability. Finally, observer performance studies were used to show that advances in the imaging system led to better detection and diagnoses by radiologists. In parallel with these developments, these three concepts were used to develop computer-aided diagnosis systems. In this talk, I will highlight how image perception, observer performance, and technology assessment were leveraged to produce technologies that allow radiologists to be highly effective in detecting breast cancer.
12931-407
Author(s): Gordon J. Harris, Massachusetts General Hospital (United States)
21 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
Within academia, there are challenges to building and sustaining software platforms and translating them into widely available tools with national or global use. Hurdles include identifying significant needs, acquiring funding, implementing commercial-grade development processes and user experience design, and choosing a sustainable financial model and licensing plan. In addition, moving beyond the academic sphere into the commercial realm requires an investment in business processes and skills, including branding, marketing, sales, operations, regulatory/compliance, legal, and fundraising expertise. Experiences with licensing from academia are shared, illustrated by two examples. First, a clinical trials imaging informatics platform will be discussed, developed initially to manage all clinical trials imaging assessments within a comprehensive cancer center and now licensed commercially for use in over 4,200 active clinical trials at 18 cancer centers, including 12 NCI-designated sites. Second, a web-based medical imaging framework will be covered, an open-source software platform that has become the standard for over a thousand academic and industry software projects.
Session 6: AI/ML for Data Analytics
21 February 2024 • 10:30 AM - 12:30 PM PST | Palm 8
Session Chairs: Anh H. Le, Roswell Park Comprehensive Cancer Ctr. (United States), Brent J. Liu, The Univ. of Southern California (United States)
12931-26
Author(s): Anastasiia Rozhyna, Manfredo Atzori, Henning Müller, HES-SO Valais-Wallis (Switzerland)
21 February 2024 • 10:30 AM - 10:50 AM PST | Palm 8
Diet, lifestyle, and an aging population have led to a rise in diseases, some of which manifest in the eyes and can be analyzed by relatively simple means such as optical coherence tomography (OCT) scans. This article presents a comparative study examining transfer learning methods for classifying retinal OCT scans. The study focuses on the classification of several retinal alterations: age-related macular degeneration (AMD), choroidal neovascularization (CNV), diabetic macular edema (DME), and normal cases. The approach was evaluated on a large dataset of labeled OCT scans. The results indicate that the proposed transfer learning approach is a powerful tool for multi-class classification of retinal OCT scans.
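As an illustration of the transfer-learning setup such a study typically uses, the sketch below fine-tunes the head of an ImageNet-pretrained ResNet-50 for the four OCT classes; the dataset path, backbone choice, and hyperparameters are assumptions for illustration, not details from the paper.

```python
# Minimal transfer-learning sketch for 4-class retinal OCT classification.
# Assumes OCT scans are arranged in class subfolders (AMD/CNV/DME/NORMAL);
# path, backbone, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # OCT scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("oct_scans/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace only the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():                   # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)  # AMD, CNV, DME, normal
model = model.to(device)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```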
12931-27
Author(s): Sidharth Sengupta, Ana A. Araujo, Maisie L. Shindo, Brian J. Park, Oregon Health & Science Univ. (United States)
21 February 2024 • 10:50 AM - 11:10 AM PST | Palm 8
Radiofrequency ablation (RFA) with continuous ultrasonography (US) monitoring is a non-surgical alternative to traditional thyroid surgery for treating benign symptomatic thyroid nodules. Monitoring nodules over time through US imaging is used to determine procedural success, primarily indicated by measured volume reduction. These images also capture other rich clinical characteristics that we believe can be systematically interrogated across patients to better understand and stratify nodule response to RFA. We performed radiomic texture analysis on 56 preoperative and postoperative US thyroid nodule images from patients treated with RFA, generating 767 radiomic feature (RF) measurements. Using dimensionality reduction and clustering of thyroid nodules by their US image texture features, we discovered distinct populations of nodules, suggesting that these methods, combined with radiomic texture analysis, form a useful system for stratifying thyroid nodules. Additionally, individual texture features were found to differ between nodules with successful and unsuccessful outcomes, further supporting radiomic features as potential biomarkers for RFA-treated thyroid nodules.
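A minimal sketch of the dimensionality-reduction-and-clustering step described above, assuming a precomputed nodule-by-feature radiomics matrix (here random placeholder data) and standard PCA/k-means from scikit-learn; the component and cluster counts are illustrative assumptions.

```python
# Reduce a (nodules x features) radiomics matrix with PCA, then cluster.
# The matrix below is synthetic; the study extracted 767 features from
# 56 thyroid-nodule ultrasound images.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 767))            # placeholder radiomic feature matrix

X_std = StandardScaler().fit_transform(X)  # z-score each feature
X_low = PCA(n_components=2).fit_transform(X_std)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_low)
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} nodules")
```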
12931-28
Author(s): Mingzhe Hu, Xiaofeng Yang, Emory Univ. (United States)
21 February 2024 • 11:10 AM - 11:30 AM PST | Palm 8
This study presents a lightweight pipeline for skin lesion detection, addressing the challenges posed by imbalanced class distribution and the subtle or atypical appearance of some lesions. The pipeline is built around a lightweight model that leverages ghosted features and the DFC attention mechanism to reduce computational complexity while maintaining high performance. The model was trained on the HAM10000 dataset. It also incorporates a knowledge-based loss weighting technique, which assigns different weights to the loss function at the class level and the instance level, helping the model focus on minority classes and challenging samples. The model achieved an accuracy of 92.4%, a precision of 84.2%, and a recall of 86.9%, with particularly strong performance in identifying benign keratosis-like lesions and nevi. Despite this performance, the model's computational cost is considerably lower than that of several less accurate models, making it well suited to applications where both accuracy and efficiency are essential.
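The two-level loss weighting can be sketched as follows, combining inverse-frequency class weights with a focal-style instance factor. This is a generic reconstruction of the idea, not the authors' exact formulation; the class counts are the published HAM10000 class sizes.

```python
# Two-level loss weighting: per-class weights counteract class imbalance,
# and a focal-style factor up-weights hard instances.
import torch
import torch.nn.functional as F

def weighted_loss(logits, targets, class_counts, gamma=2.0):
    # Class-level: inverse-frequency weights (rare classes count more).
    w = class_counts.sum() / (len(class_counts) * class_counts.float())
    ce = F.cross_entropy(logits, targets, weight=w, reduction="none")
    # Instance-level: down-weight easy samples, focus on hard ones.
    p_true = torch.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    return ((1.0 - p_true) ** gamma * ce).mean()

logits = torch.randn(8, 7)                     # 7 HAM10000 classes
targets = torch.randint(0, 7, (8,))
counts = torch.tensor([6705, 1113, 1099, 514, 327, 142, 115])  # HAM10000 sizes
print(weighted_loss(logits, targets, counts))
```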
12931-29
Author(s): Sabyasachi Samantaray, Kartik S. Nair, Indian Institute of Technology Bombay (India); Matheus de Freitas Oliveira Baffa, Univ. de São Paulo (Brazil); Thomas M. Deserno, Peter L. Reichertz Institut für Medizinische Informatik (Germany)
21 February 2024 • 11:30 AM - 11:50 AM PST | Palm 8
This research investigates the use of lateral and oblique infrared thermal images, employing advanced machine learning techniques, to detect breast cancer. By exploring angular views, we aim to improve breast cancer screening, providing a radiation-free and cost-effective alternative to traditional mammography. The study shows competitive diagnostic capability, achieving 97.74% accuracy and an AUC of 99.17%.
12931-30
Author(s): Kaito Koshino, Dai Nishioka, Yoshiki Kawata, Tokushima Univ. (Japan); Yuuki Kobari, Tokyo Women's Medical Univ. Hospital (Japan); Atsushi Ikeda, Univ. of Tsukuba (Japan); Noboru Niki, Tokushima Univ. (Japan)
21 February 2024 • 11:50 AM - 12:10 PM PST | Palm 8
Contrast-enhanced multi-slice 3D CT images enable highly accurate analysis, diagnosis, and treatment of the kidney, providing detailed information about blood vessels, organs, and lesions. Abdominal multiphase contrast-enhanced CT images are used to analyze renal tumors: clear cell renal cell carcinoma (ccRCC), chromophobe renal cell carcinoma (chRCC), papillary renal cell carcinoma (pRCC), angiomyolipoma, and oncocytoma. By analyzing the multiphase changes, features of the kidney tumors are extracted and the tumors are classified with high accuracy.
12931-31
Author(s): Shrey Sukhadia, Dartmouth-Hitchcock Medical Ctr. (United States); Abdibaset A. Bare, Thayer School of Engineering, Dartmouth College (United States); Marthony L. Robins, Dartmouth-Hitchcock Medical Ctr. (United States)
21 February 2024 • 12:10 PM - 12:30 PM PST | Palm 8
Radiomics, a burgeoning field within medical imaging, has gained momentum for its potential to provide nuanced insights into lesion characteristics. This study introduces a pioneering approach to benchmarking radiomic features, delving into their correlations with ground truth measurements and subsequent clustering patterns. By analyzing 59 simulated lesions extracted from the QIDW Liver II Hybrid Dataset, this research meticulously extracts and evaluates a vast array of radiomic attributes using advanced software platforms. A total of 2060 features are correlated with ground truth volume and contrast measurements, revealing intricate relationships. Novel co-clustering patterns emerge, underscoring the versatility and complexity of radiomic features. These findings not only contribute to lesion characterization precision but also advance our understanding of radiomic intricacies. The presented research holds potential to refine radiomics-driven medical insights, paving the way for more informed clinical decision-making and improved patient care.
Session 7: AI/ML for Precision Medicine
21 February 2024 • 1:40 PM - 3:20 PM PST | Palm 8
Session Chairs: Hiroyuki Yoshida, Massachusetts General Hospital (United States), Brent J. Liu, The Univ. of Southern California (United States)
12931-32
Author(s): Bradley Lowekamp, Andrei Gabrielian, Darrell Hurt, Alex Rosenthal, Ziv R. Yaniv, National Institute of Allergy and Infectious Diseases (United States)
21 February 2024 • 1:40 PM - 2:00 PM PST | Palm 8
The NIAID TB Portals program is an international consortium with a primary focus on patient-centric data collection and analysis for drug-resistant TB. The data include images, their associated radiological findings, clinical records, and socioeconomic information. This work describes a chest X-ray-based image retrieval system that enables precision medicine. An input image is used to retrieve similar images and the associated patient-specific information, thus facilitating inspection of outcomes and treatment regimens from comparable patients. Image similarity is defined using clinically relevant biomarkers: age, gender, body mass index (BMI), and the percentage of lung affected per sextant. The biomarkers are predicted using variants of the DenseNet169 convolutional neural network. All models were evaluated and found to be sufficiently accurate for the task. The system is currently available at https://rap.tbportals.niaid.nih.gov/find_similar_cxr
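A hedged sketch of the retrieval step: once the biomarkers are predicted from an input X-ray, similar cases can be found by nearest-neighbor search in the standardized biomarker space. The data below are synthetic placeholders, and the Euclidean metric and neighbor count are assumptions, not details of the deployed system.

```python
# Nearest-neighbor case retrieval in a clinical-biomarker space:
# age, sex, BMI, and percent of lung affected in each of six sextants.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 500
database = np.column_stack([
    rng.uniform(18, 80, n),        # age
    rng.integers(0, 2, n),         # sex (0/1)
    rng.uniform(15, 35, n),        # BMI
    rng.uniform(0, 100, (n, 6)),   # % lung affected per sextant
])

scaler = StandardScaler().fit(database)
index = NearestNeighbors(n_neighbors=5).fit(scaler.transform(database))

# Biomarkers predicted from the query X-ray (placeholder values).
query = np.array([[45, 1, 22.5, 10, 0, 5, 20, 15, 0]])
dist, idx = index.kneighbors(scaler.transform(query))
print("most similar cases:", idx[0])
```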
12931-33
Author(s): Nishta Letchumanan, The Univ. of Tokyo (Japan); Shouhei Hanaoka, Tomomi Takenaga, Yusuke Suzuki, The Univ. of Tokyo Hospital (Japan); Takahiro Nakao, The Univ. of Tokyo (Japan); Yukihiro Nomura, Ctr. for Frontier Medical Engineering, Chiba Univ. (Japan); Takeharu Yoshikawa, The Univ. of Tokyo (Japan); Osamu Abe, The Univ. of Tokyo Hospital (Japan)
21 February 2024 • 2:00 PM - 2:20 PM PST | Palm 8
Type 2 Diabetes Mellitus (T2DM) is a common lifestyle-related disease, and predicting its onset up to five years in advance could help patients alter their lifestyle as a preventive measure. Mammography is a common health screening examination for women and can easily be obtained for patients within the risk age range for both breast cancer and diabetes; radiomics is an effective research tool for maximizing the information extracted from clinical images. We investigate the feasibility of using radiomic features from mammography images, together with HbA1c results from annual blood tests, to predict future diabetes risk.
12931-34
Author(s): Matheus de Freitas Oliveira Baffa, Univ. de São Paulo (Brazil); Nadine S. Schaadt, Friedrich Feuerhake, Medizinische Hochschule Hannover (Germany); Thomas M. Deserno, Peter L. Reichertz Institut für Medizinische Informatik (Germany)
21 February 2024 • 2:20 PM - 2:40 PM PST | Palm 8
In this research, we employ Self-Organizing Maps and advanced deep feature extraction methodologies to analyze Immunohistochemistry (IHC) stained samples of lung cancer. Utilizing the specificity of IHC images with deep unsupervised machine learning, we aim to detect detailed tissue types and cellular interactions. Results indicate the effectiveness of our approach in enhancing histopathological evaluations, suggesting a significant advancement for lung cancer diagnostics and subsequent studies.
12931-35
Author(s): Rento Nii, Yoshiki Kawata, Tokushima Univ. (Japan); Yosinori Ohtsuka, Hokkaido Chuo Rosai Hospital (Japan); Takumi Kishimoto, Okayama Rosai Hospital (Japan); Kazuto Ashizawa, Department of Clinical Oncology, Unit of Translational Medicine (Japan); Noboru Niki, Medical Science Institute, Inc. (Japan)
21 February 2024 • 2:40 PM - 3:00 PM PST | Palm 8
Pneumoconiosis is an occupational respiratory disease caused by inhaling dust into the lungs. In Japan, 240,000 people undergo pneumoconiosis screening every year. X-rays are used to classify the severity of pneumoconiosis, and it is important to distinguish between stage 0/1 and stage 1/0, which determines eligibility for recognition as an occupational injury. CT images are expected to provide more accurate diagnosis because, unlike X-rays, findings can be confirmed in three dimensions. We extract micro-nodules from 3D CT images for each severity of pneumoconiosis and analyze and evaluate the number, size, position, and CT values of the micro-nodules in each lung lobe.
12931-36
Author(s): Yubing Tong, Jayaram K. Udupa, Univ. of Pennsylvania (United States); Joseph McDonough, The Children's Hospital of Philadelphia (United States); Caiyun Wu, Yusuf Akhtar, Lipeng Xie, Mostafa Alnoury, Mahdie Hosseini, Leihui Tong, Univ. of Pennsylvania (United States); Samantha Gogel, David Biko, Oscar H. Mayer, Jason Anari, The Children's Hospital of Philadelphia (United States); Drew Torigian, Univ. of Pennsylvania (United States); Patrick Cahill, The Children's Hospital of Philadelphia (United States)
21 February 2024 • 3:00 PM - 3:20 PM PST | Palm 8
We introduce a large open-source normative database from our ongoing virtual growing child (VGC) project, including measurements of volumes, architecture, and dynamics in healthy children (6-18 years) via dynamic MRI (dMRI). The database provides four categories of regional respiratory measurement parameters: morphological, architectural, dynamic, and developmental. With 3,820 3D segmentations (~100,000 segmented 2D slices), it is to our knowledge the largest dMRI dataset of healthy children. It can serve as a reference standard to quantify regional respiratory abnormalities on dMRI in young patients with various respiratory conditions and to facilitate treatment planning and response assessment.
Session 8: Multimodal and Hybrid Data/Systems
21 February 2024 • 3:50 PM - 5:30 PM PST | Palm 8
Session Chairs: Shandong Wu, Univ. of Pittsburgh (United States), Anh H. Le, Roswell Park Comprehensive Cancer Ctr. (United States)
12931-37
Author(s): Maxence Wynen, Univ. Catholique de Louvain (Belgium); Pedro M. Gordaliza, Ctr. for Biomedical Imaging, Univ. de Lausanne (Switzerland), Lausanne Univ. Hospital (Switzerland); Anna Stölting, Pietro Maggi, Univ. Catholique de Louvain (Belgium); Meritxell Bach Cuadra, Ctr. for Biomedical Imaging, Univ. de Lausanne (Switzerland), Lausanne Univ. Hospital (Switzerland); Benoit Macq, Univ. Catholique de Louvain (Belgium)
21 February 2024 • 3:50 PM - 4:10 PM PST | Palm 8
Magnetic Resonance Imaging (MRI) plays a pivotal role in diagnosing and predicting the course of Multiple Sclerosis (MS). A distinctive biomarker, Paramagnetic Rim Lesions (PRL), offers promise but poses challenges in manual assessment. To address this, we introduce a direct PRL segmentation approach and extensively evaluate various methods, with a focus on preprocessing and input modalities. Our study emphasizes instance segmentation metrics tailored for sparse lesions. Single-modal inputs show limitations, except for FLAIR and Magnitude, which exhibit potential in PRL detection. Integrating Phase and/or MPRAGE with FLAIR enhances detection capacity. Notably, applying white matter masks yields mixed results, while lesion masks improve overall performance. Despite the complexities of PRL segmentation, our optimal model, FLAIR+Phase, attains an F1 score of 0.443, a Dice coefficient per true positive of 0.68, and a misleadingly low overall Dice score of 0.191 on the test set, highlighting the intricate nature of the PRL segmentation task. Our work pioneers an automated approach to PRL analysis, offering valuable insights and paving the way for future advancements in this critical field.
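The instance-level metrics mentioned above (detection F1 and Dice per true positive) can be computed along the following lines; the overlap-based matching rule is a simplified assumption, not necessarily the study's exact protocol.

```python
# Instance-level detection F1 and Dice per true positive for sparse lesions:
# label connected components, count a ground-truth lesion as detected if any
# predicted component overlaps it, and report Dice only over detected lesions.
import numpy as np
from scipy import ndimage

def instance_metrics(pred, gt):
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    tp, dice_tp, matched = 0, [], set()
    for g in range(1, n_gt + 1):
        g_mask = gt_lab == g
        overlap = np.unique(pred_lab[g_mask])
        overlap = overlap[overlap > 0]
        if overlap.size:
            tp += 1
            p_mask = np.isin(pred_lab, overlap)
            dice_tp.append(2 * (g_mask & p_mask).sum()
                           / (g_mask.sum() + p_mask.sum()))
            matched.update(overlap.tolist())
    fp, fn = n_pred - len(matched), n_gt - tp
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return f1, float(np.mean(dice_tp)) if dice_tp else 0.0

pred = np.zeros((64, 64), bool); pred[5:9, 5:9] = True; pred[40:44, 40:41] = True
gt = np.zeros((64, 64), bool); gt[6:10, 6:10] = True; gt[20:24, 20:24] = True
print(instance_metrics(pred, gt))   # (0.5, 0.5625)
```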
12931-38
Author(s): Willi Schüler, Lisa-Marie Bente, Thomas M. Deserno, Tim Kacprowski, Technische Univ. Braunschweig (Germany)
21 February 2024 • 4:10 PM - 4:30 PM PST | Palm 8
Virtual reality (VR) enables new perspectives and approaches for interaction with a computer-generated environment and, hence, for data science and a variety of applications. We investigate the potential of VR in combination with a textile sensor shirt for real-time monitoring of an ECG signal. To that end, we wirelessly record and analyze the ECG of a subject in real time. The ECG itself is visualized in VR as a graph, and the inferred heart beat is visualized on a three-dimensional heart model. The combination of smart wearables and VR demonstrates how immersive analytics facilitates real-time monitoring of the heart. Eventually, similar approaches can open new possibilities for training medical personnel as well as educating a broader interested audience.
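A minimal sketch of the heart-beat inference step, assuming R-peak detection on a sampled ECG window with SciPy; the sampling rate and the synthetic signal are placeholders, not the shirt's actual output.

```python
# Derive beats per minute from R peaks detected in an ECG window.
import numpy as np
from scipy.signal import find_peaks

fs = 250                             # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[::fs] = 1.0                      # synthetic R peak once per second (60 bpm)

peaks, _ = find_peaks(ecg, height=0.5, distance=0.4 * fs)
rr = np.diff(peaks) / fs             # R-R intervals in seconds
print(f"heart rate: {60 / rr.mean():.1f} bpm")
```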
12931-39
Author(s): Zhou Yu, Ming Fan, Yuanling Chen, Xinquan Xiao, Xinxin Pan, Lihua Li, Intelligent Biomedicine, Hangzhou Dianzi Univ. (China)
21 February 2024 • 4:30 PM - 4:50 PM PST | Palm 8
We propose a histopathological image information-guided transformer model based on DCE-MRI for predicting the response to neoadjuvant chemotherapy in breast cancer. A modality information transfer module was designed to generate histopathological image features from DCE-MRI features, enabling the trained deep neural network to make predictions using the imaging data alone. The prediction performance of our histology-guided model is significantly improved compared with methods using only histopathological image data or DCE-MRI data without histological information. The results show that our proposed model is promising for the treatment management of breast cancer.
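Conceptually, the modality information transfer module might look like the sketch below: a small network trained to map DCE-MRI features into the histopathology feature space so that inference needs imaging alone. The layer sizes and the MSE objective are illustrative assumptions, not the authors' exact design.

```python
# Feature-space translation: learn MRI -> histopathology feature mapping
# from paired training data; at test time only the MRI branch is needed.
import torch
import torch.nn as nn

class ModalityTransfer(nn.Module):
    def __init__(self, mri_dim=256, histo_dim=128):
        super().__init__()
        self.map = nn.Sequential(
            nn.Linear(mri_dim, 256), nn.ReLU(),
            nn.Linear(256, histo_dim),
        )

    def forward(self, mri_feat):
        return self.map(mri_feat)

transfer = ModalityTransfer()
mri_feat = torch.randn(16, 256)    # features from the DCE-MRI branch
histo_feat = torch.randn(16, 128)  # paired histopathology features (train only)

loss = nn.functional.mse_loss(transfer(mri_feat), histo_feat)
loss.backward()                    # train generated features to mimic histology
```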
12931-40
Author(s): Anshu Goyal, Jingqi Hu, Joseph Liu, Harper E. Stewart, Casey Wiens, Jill L. McNitt-Gray, Brent J. Liu, The Univ. of Southern California (United States)
21 February 2024 • 4:50 PM - 5:10 PM PST | Palm 8
Integration of sports medicine and human performance research has the potential to aid people of various abilities. Previous work resulted in the integrated biomechanics informatics system (IBIS), which can store, view, and retrieve data and can be expanded to include different sports medicine research applications. One such application is a data processing tool that creates force vector overlays for decision support. Its efficacy was tested by having users create force vector overlays with the IBIS application and comparing the process with the traditional workflow. The new IBIS-based workflow was found to be quicker and easier to use than the traditional workflow while generating identical results. The deployment of this data processing tool for decision support shows IBIS's capacity to be extended with further tools, such as automatic foot contact detection.
12931-41
Author(s): Juan Carlos Prieto, The Univ. of North Carolina at Chapel Hill (United States); Felicia Miranda, Marcela Gurgel, Luc Anchling, Nathan Hutin, Selene Barone, Najla Al Turkestani, Aron Aliaga, Univ. of Michigan (United States); Marilia Yatabe, The Univ. of North Carolina at Chapel Hill (United States); Jonas Bianchi, Univ. of the Pacific (United States); Lucia Cevidanes, Univ. of Michigan (United States)
21 February 2024 • 5:10 PM - 5:30 PM PST | Palm 8
ShapeAXI is an advanced framework for shape analysis that uses a multi-view method to analyze 3D objects with 2D convolutional neural networks (CNNs). It includes an automatic N-fold cross-validation process and produces explainability heat-maps for enhanced interpretability. ShapeAXI's versatility is highlighted in two classification experiments: the first classifies condyles as healthy or degenerative, and the second, more complex experiment classifies cleft patients' shapes from CBCT scans into four severity levels. This innovation aligns with current medical research and creates new possibilities for specialized cleft patient analysis. The insights from ShapeAXI's explainability images contribute to understanding in condyle assessment and cleft severity classification. As a flexible and interpretive tool, ShapeAXI sets a new standard in 3D object interpretation and holds the potential for significant impact across research and practical applications.
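The multi-view mechanism can be illustrated schematically: render the 3D shape from several viewpoints, encode each view with a shared 2D CNN, and pool across views before classification. The backbone, view count, and pooling choice below are assumptions for illustration, not ShapeAXI's actual implementation; rendering is stubbed out with random images.

```python
# Multi-view 3D shape classification with a shared 2D CNN backbone.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewClassifier(nn.Module):
    def __init__(self, n_views=12, n_classes=4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep 512-d features per view
        self.backbone, self.n_views = backbone, n_views
        self.head = nn.Linear(512, n_classes)  # e.g. four cleft severity levels

    def forward(self, views):                  # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        return self.head(feats.mean(dim=1))    # average-pool across views

model = MultiViewClassifier()
views = torch.randn(2, 12, 3, 224, 224)        # stand-in for rendered views
print(model(views).shape)                      # -> torch.Size([2, 4])
```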
Posters - Wednesday
21 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A

Conference attendees are invited to attend the SPIE Medical Imaging poster session on Wednesday evening. Come view the posters, enjoy light refreshments, ask questions, and network with colleagues in your field. Authors of poster papers will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges.

Poster Presenters:
Poster Setup Period: 5:00 PM Tuesday – 5:00 PM Wednesday

  • In order to be considered for a poster award, it is recommended to have your poster set up by 12:00 PM Wednesday. Judging may begin after this time. Posters must remain on display until the end of the Wednesday evening poster session, but may be left hanging until 1:00 PM Thursday. After 1:00 PM, any posters left hanging will be discarded.
View poster presentation guidelines and set-up instructions at
spie.org/MI/Poster-Presentation-Guidelines

12931-42
Author(s): Manish Sharma, Imaging Endpoints (India); Samira Farough, Andre Burkett, Imaging Endpoints, LLC (United States); Jerome Prasanth, Imaging Endpoints (India); Nabil El-Shafeey, Dominic Zygadlo, Chera Dunn, Ron Korn, Imaging Endpoints, LLC (United States)
On demand | Presented live 21 February 2024
This study explores the application of Large Language Models (LLMs) such as OpenAI's GPT-4 and ChatGPT in enhancing Blinded Independent Central Review (BICR) processes in oncology clinical trials that employ the RECIST criteria. The study used an AI-based chatbot trained on clinical trial documents, aiming to assist readers with study-related queries, thereby streamlining the assessment process. Prospective evaluation demonstrates the chatbot's ability to provide immediate, contextually accurate responses, reducing response times and preventing delays in decision-making. This innovative approach showcases the potential of LLMs to improve efficiency and decision quality in clinical research, offering a promising avenue for future advancements in the field.
12931-43
Author(s): Yuanyuan Ge, Ming Fan, Xian Li, Yueyue Liu, Lihua Li, Intelligent Biomedicine, Hangzhou Dianzi Univ. (China)
On demand | Presented live 21 February 2024
In this study, we developed a contrastive learning-based deep model to synthesize prognostic immune cell signatures from DCE-MRI image features. To evaluate breast cancer prognosis, the risk score of each sample was calculated by multivariable Cox regression, and patients were stratified according to the predicted risk scores. The generated immune cell risk scores achieved R-squared values of 0.48 and 0.43 in the validation and test sets, respectively. Significant differences in prognosis were observed after grouping the patients by risk score, with p-values of 0.033 and 0.011 in the validation and test sets, respectively. The results indicate that the synthesized immune cell signature is promising for prognostic analysis in breast cancer.
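A sketch of the prognosis-evaluation step described here, assuming the lifelines library: fit a Cox model, split patients at the median predicted risk, and compare the groups with a log-rank test. All data below are synthetic placeholders.

```python
# Cox risk scores and log-rank stratification with lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "signature": rng.normal(size=200),    # e.g. a synthesized immune signature
    "time": rng.exponential(60, 200),     # follow-up time in months
    "event": rng.integers(0, 2, 200),     # 1 = event observed
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)     # per-patient risk score
high = risk > risk.median()               # median split into risk groups

res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3f}")
```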
12931-44
Author(s): Arata Segawa, Mitsutaka Nemoto, Kindai Univ. (Japan); Hayato Kaida, Kindai Univ. (Japan), Kindai Univ. Hospital (Japan); Yuichi Kimura, Takashi Nagaoka, Katsuhiro Mikami, Kindai Univ. (Japan); Takahiro Yamada, Kohei Hanaoka, Kindai Univ. Hospital (Japan); Tatsuya Tsuchitani, Kazuhiro Kitajima, Hyogo College of Medicine (Japan); Kazunari Ishii, Kindai Univ. (Japan), Kindai Univ. Hospital (Japan)
On demand | Presented live 21 February 2024
We propose an unsupervised method to detect lung lesions on FDG-PET/CT images based on deep anomaly detection with 2.5-dimensional image processing. The method transforms various CT slice images into normal FDG-PET slice images free of lesion-like SUV patterns, using multiple 2D U-Nets trained only on normal FDG-PET/CT images without lung lesions. Lesions are then enhanced and detected by subtraction analysis between an input FDG-PET image and the transformed normal FDG-PET images. Evaluation on clinical FDG-PET/CT images with lung lesions showed acceptable performance, with 82.9% sensitivity and five false positives per image.
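The subtraction analysis reduces to something like the following: the model's reconstructed "normal" PET volume is subtracted from the measured one, and positive residuals above a threshold become lesion candidates. The volumes, SUV threshold, and minimum component size here are placeholders, not the paper's parameters.

```python
# Anomaly detection by subtraction: measured PET minus predicted-normal PET.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pet_input = rng.normal(1.0, 0.2, size=(64, 128, 128))   # measured SUV volume
pet_normal = rng.normal(1.0, 0.2, size=(64, 128, 128))  # U-Net "normal" PET

residual = pet_input - pet_normal      # lesions appear as positive residuals
candidates = residual > 0.8            # SUV-difference threshold (assumed)

labels, n = ndimage.label(candidates)
sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 5]  # drop tiny components
print(f"{len(keep)} lesion candidates")
```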
12931-45
Author(s): Zhouping Wei, Yoganand Balagurunathan, Moffitt Cancer Ctr. (United States)
21 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
The purpose of this study is to investigate methods for deriving time-dependent features from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to identify clinically significant prostate cancer. Clinically, rapid early intensity enhancement followed by relatively rapid washout has been attributed to cancer tissue characteristics, most often described heuristically. We investigate the feasibility of applying image transformation followed by quantitative, time-dependent DCE image characterization to derive cancer-specific discriminative features.
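Simple time-dependent enhancement features of the kind alluded to (wash-in slope, washout rate) can be computed from a voxel's time-intensity curve as below; the curve and feature definitions are illustrative, not the study's actual feature set.

```python
# Wash-in and washout slopes from a DCE time-intensity curve.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 180, 240], float)       # s post-contrast
s = np.array([100, 180, 260, 250, 235, 215, 200], float)   # signal intensity

baseline = s[0]
peak = int(np.argmax(s))
wash_in = (s[peak] - baseline) / t[peak]            # rise rate to peak
washout = (s[-1] - s[peak]) / (t[-1] - t[peak])     # negative = washout

print(f"wash-in slope: {wash_in:.2f}/s, washout slope: {washout:.2f}/s")
```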
12931-46
Author(s): Rie Tachibana, National Institute of Technology, Oshima College (Japan), Massachusetts General Hospital, Harvard Medical School (United States); Janne J. Näppi, Massachusetts General Hospital, Harvard Medical School (United States); Masaki Okamoto, Boston Medical Sciences Co., Ltd. (Japan); Hiroyuki Yoshida, Massachusetts General Hospital, Harvard Medical School (United States)
On demand | Presented live 21 February 2024
We developed a novel 3D transformer-based UNet method for performing electronic cleansing (EC) in CT colonography (CTC). The method is designed to map an uncleansed CTC image volume directly into the corresponding virtually cleansed CTC image volume. In the method, the layers of a 3D transformer-based encoder are connected via skip connections to the decoder layers of a 3D UNet to enhance the ability of the UNet to use long-distance image information for resolving EC image artifacts. The EC method was trained by use of the CTC image volumes of an anthropomorphic phantom that was filled partially with a mixture of foodstuff and an iodinated contrast agent. The CTC image volume of the corresponding empty phantom was used as the reference standard. The quality of the EC images was tested visually with six clinical CTC test cases and quantitatively based on a phantom test set of 100 unseen samples. The image quality of EC was compared with that of a conventional 3D UNet-based EC method. Our preliminary results indicate that the 3D transformer-based UNet EC method is a potentially effective approach for optimizing the performance of EC in CTC.
12931-47
Author(s): Janne J. Näppi, Massachusetts General Hospital (United States), Harvard Medical School (United States); Toru Hironaka, Massachusetts General Hospital (United States); Dufan Wu, Rajiv Gupta, Massachusetts General Hospital (United States), Harvard Medical School (United States); Rie Tachibana, Massachusetts General Hospital (United States), Harvard Medical School (United States), National Institute of Technology, Oshima College (Japan); Katsuyuki Taguchi, Johns Hopkins Univ. (United States); Masaki Okamoto, Boston Medical Sciences Co., Ltd. (Japan); Hiroyuki Yoshida, Massachusetts General Hospital (United States), Harvard Medical School (United States)
On demand | Presented live 21 February 2024
We compared the polyp segmentation performance of a traditional 3D U-Net, a transformer-based 3D U-Net, and a diffusion-based 3D U-Net on a photon-counting CT colonography dataset. All three segmentation networks yielded satisfactory average segmentation accuracy, but their complementary performance indicates that an ensemble approach might work better than any single network. These preliminary results are useful for the development of an explainable deep-learning tool for estimating polyp size for the diagnosis and management of patients in colorectal screening.
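The ensemble suggested by the complementary performance could be as simple as averaging the per-voxel probability maps of the three networks and thresholding the mean, as sketched below with placeholder arrays.

```python
# Soft-voting ensemble of three 3D segmentation networks.
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 64, 64)
p_unet = rng.random(shape)           # traditional 3D U-Net probabilities
p_transformer = rng.random(shape)    # transformer-based 3D U-Net probabilities
p_diffusion = rng.random(shape)      # diffusion-based 3D U-Net probabilities

p_ensemble = (p_unet + p_transformer + p_diffusion) / 3.0
mask = p_ensemble > 0.5              # final polyp segmentation
print(f"segmented voxels: {mask.sum()}")
```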
12931-48
Author(s): Manahil Shaikh, Bisma Khalid, Javaria Latif, Uzair Iqbal, Labiba Fahad, National Univ. of Computer and Emerging Sciences (Pakistan)
21 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Chronic kidney disease (CKD) refers to any long-term condition that deteriorates the kidney's ability to filter body waste. The disease can be treated if diagnosed in time. For image segmentation of CKD, the varied shapes and sizes of tumors, the positions of stones, and the wide range of image intensities across different forms of CKD increase the computational cost of models and make it challenging to accurately segment the disease and the kidney from an image. Keeping this problem domain in view, this paper proposes a lightweight UNet-based architecture for segmentation of chronic kidney disease, using machine learning models to trace the infected areas and create a base for kidney image analysis. The paper compares two architectural changes to the UNet model intended to decrease training time and training epochs, and applies feature upscaling in pre-processing to assess the impact on computational cost. The technique also offers promising opportunities, as it could be adapted into a real-time device and play an important role in patient health care and diagnosis.
12931-49
Author(s): Changhee Han, Callisto Inc. (Japan); Kyohei Shibano, The Univ. of Tokyo (Japan); Wataru Ozaki, Keishiro Osaki, Callisto Inc. (Japan); Takafumi Haraguchi, St. Marianna Univ. School of Medicine (Japan); Daisuke Hirahara, Harada Academy (Japan); Shumon Kimura, Yasuyuki Kobayashi, St. Marianna Univ. School of Medicine (Japan); Gento Mogi, The Univ. of Tokyo (Japan)
21 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Deep learning is advancing medical imaging research and development (R&D), leading to the frequent clinical use of Artificial Intelligence/Machine Learning (AI/ML)-based medical devices. However, advancing AI R&D faces two challenges: 1) significant data imbalance, with most data from Europe and America and under 10% from Asia despite its 60% share of the global population; and 2) the hefty time and investment needed to curate proprietary datasets for commercial use. In response, we established the first commercial medical imaging platform, encompassing 1) data collection, 2) data selection, 3) annotation, and 4) pre-processing. Moreover, we focus on harnessing under-represented data from Japan and broader Asia. We are preparing and providing ready-to-use datasets for medical AI R&D by offering them to companies and by using them as additional training data to develop tailored AI solutions. We also aim to integrate blockchain technology for data security and plan to synthesize rare disease data via generative AI.
12931-50
Author(s): Trenton D. Campos, Gregory Datto, Andy Liu, Carlos Mendez-Cruz, Krunal Patel, Chet Friday, Rashad Madi, Michael Hast, Yubing Tong, Chamith S. Rajapakse, Univ. of Pennsylvania (United States)
On demand | Presented live 21 February 2024
The proposed method models hard and soft tissue using DICOM files generated from computed tomography (CT) scans. This involves creating hard-tissue STL models directly from the DICOM files and using our reverse imaging protocol to reverse-engineer DICOM files into STL models of soft tissues. These STL files are then bioprinted using novel polycaprolactone (PCL) and hydroxyapatite (HA) printing techniques that circumvent the need for solvents. Because of the scaffolds' biocompatible and osteogenic nature, they can be employed as image-based, implantable scaffolds for musculoskeletal applications. Testing was conducted to measure the stiffness of various PCL:HA compositions and infill geometries and to identify the appropriate composition for each application.
12931-51
Author(s): Sreeja Malladi, Sanket Purohit, Advait Brahme, Julia A. Scott, Santa Clara Univ. (United States)
On demand | Presented live 21 February 2024
Image-guided radiotherapy planning relies on accurate segmentation to delineate target tumor anatomy. In head and neck tumors, the primary and secondary tumors are difficult to distinguish from surrounding soft tissues on CT and may be enhanced by PET. Manual segmentation is not feasible for CT/PET, though deep learning models may solve this segmentation challenge. In this work, we benchmark two convolutional neural network approaches, Squeeze-and-Excitation (SE) U-Net and SegResNet, on the multi-center HECKTOR 2022 PET/CT dataset for multi-class tumor delineation. Through comparative analysis of model accuracy and efficiency, we assessed the advantages of the dynamic feature re-calibration provided by SE blocks versus the hierarchical feature extraction of SegResNet. Our findings show that SegResNet outperformed the SE U-Net and is comparable to many models of the HECKTOR 2022 grand challenge. The architectural analysis informs design guidelines for translating these segmentation models into reliable tools that enhance the standardization and efficiency of image-guided radiotherapy planning.
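For reference, the squeeze-and-excitation mechanism behind the SE U-Net's dynamic feature re-calibration is compact enough to sketch directly; the 3D formulation and reduction ratio below are illustrative choices, not the benchmarked model's exact configuration.

```python
# Minimal 3D squeeze-and-excitation block: global-average-pool each channel,
# pass through a bottleneck MLP, and rescale channels by the learned gates.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (B, C, D, H, W)
        gates = self.fc(x.mean(dim=(2, 3, 4)))    # squeeze: per-channel stats
        return x * gates[:, :, None, None, None]  # excite: rescale channels

x = torch.randn(2, 16, 8, 32, 32)                 # e.g. a PET/CT feature map
print(SEBlock3D(16)(x).shape)                     # shape is preserved
```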
12931-52
Author(s): Tamerlan Mustafaev, Robert Nishikawa, Juhun Lee, Univ. of Pittsburgh (United States)
On demand | Presented live 21 February 2024
It is well-known that GAN-generated images contain artifacts, yet there are few studies exploring the impact of various types, amounts, and sizes of these artifacts on medical dataset augmentation, particularly in mammographic images. The scarcity of high-quality, multicenter mammographic data necessitates generative algorithms for creating controlled datasets tailored to specific requirements. However, with these artificially generated datasets, there is a need to address the different types of artifacts that may potentially influence our target needs. This study evaluates classification and segmentation models for identifying and segmenting these artifacts in such datasets and proposes potential applications for these models in future research.
Thursday Morning Keynotes
22 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Rebecca Fahrig, Siemens Healthineers (Germany), John M. Sabol, Konica Minolta Healthcare Americas, Inc. (United States), Ke Li, Univ. of Wisconsin School of Medicine and Public Health (United States), Olivier Colliot, Ctr. National de la Recherche Scientifique (France), Jhimli Mitra, GE Research (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Award announcements

  • Robert F. Wagner Award finalists for conferences 12925 and 12926
  • Physics of Medical Imaging Best Student Paper Award
  • Image Processing Best Paper Award
12925-401
Author(s): David W. Holdsworth, Western Univ. (Canada)
22 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Additive manufacturing (i.e. 3D printing) offers transformative potential in the development of biomedical devices and medical imaging systems, but at the same time presents challenges that continue to limit widespread adoption. Within medical imaging, 3D printing has numerous applications including device design, radiographic collimation, anthropomorphic phantoms, and surgical visualization. Continuous technological development has resulted in improved plastic materials as well as high-throughput fabrication in medical-grade metal alloys. Nonetheless, additive manufacturing has not realized its full potential, due to a number of factors. The regulatory environment for medical devices is geared towards conventional manufacturing techniques, making it challenging to certify 3D-printed devices. Additive manufacturing may still not be competitive when scaled up for industrial production, and the need for post-processing may negate some of the benefits. In this talk, we will describe the current state of 3D printing in medical imaging and explore future potential, including links to 3D design and finite-element modeling.
12926-402
Author(s): Shuo Li, Case Western Reserve Univ. (United States)
22 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
Foundation models rapidly emerge as a transformative force in medical imaging, leveraging extensive datasets and sophisticated pre-trained models to decode and interpret complex medical images. Our presentation will begin with an in-depth exploration of the essential concepts that underpin these models, with a special emphasis on the synergy between vision-language models and medical imaging. We aim to elucidate how the integration of prompts, language, and vision catalyzes a groundbreaking shift in the foundation of artificial intelligence. We will then analyze how these four critical elements - prompts, language, vision, and foundation models - will collaboratively shape state-of-the-art AI solutions in medical imaging. Our objective is to ignite a vibrant dialogue about leveraging the collective strength of these components at the SPIE Medical Imaging Conference.
Conference Chair
Hiroyuki Yoshida, Massachusetts General Hospital (United States), Harvard Medical School (United States)
Conference Chair
Shandong Wu, Univ. of Pittsburgh (United States)
Program Committee
McMaster Univ. (Canada)
Program Committee
Cleveland Clinic (United States)
Program Committee
The Univ. of Pennsylvania Health System (United States)
Program Committee
Technische Univ. Braunschweig (Germany), Hannover Medical School (Germany)
Program Committee
Michigan Medicine (United States)
Program Committee
The Univ. of Pennsylvania Health System (United States)
Program Committee
Hong Kong Sanatorium and Hospital (Hong Kong, China)
Program Committee
Roswell Park Comprehensive Cancer Ctr. (United States)
Program Committee
Computer Assisted Radiology and Surgery (Germany)
Program Committee
The Univ. of Southern California (United States)
Program Committee
Oregon Health & Science Univ. (United States)
Program Committee
Indiana Univ. School of Medicine (United States)
Program Committee
Univ. of Maryland Medical Ctr. (United States)
Program Committee
Univ. of California, San Francisco (United States)
Additional Information
For information on application for the Robert F. Wagner All-Conference Best Student Paper Award, view the SPIE Medical Imaging Awards page