Join the Machine Learning in Medical Imaging Consortium (MaLMIC) for an opportunity to network
Med-ImageNet: Creation of a Compendium of AI-Ready Medical Imaging Data for Deep Learning Analysis
Friday, July 21, 2023
3:00 to 4:30 p.m. ET
Co-Chairs: Benjamin Haibe-Kains and Amber Simpson
Artificial Intelligence (AI) applied to medical images has the potential to transform patient care, especially in radiation oncology, where radiological imaging is ubiquitous and treatment planning is time-consuming.
AI models are data-hungry, but medical data are often insufficient for many applications. Building foundation models and fine-tuning them for specific medical tasks with only a few data points represents a new avenue of research with high translational potential. However, there is a lack of large compendia of radiological data that are sufficiently curated for building these foundation models.
In this session, the presenters will describe current work on implementing Med-ImageNet, a compendium of radiological images that have been standardised for developing deep learning models, such as radiomics predictors or the MedSAM foundation model.
Interested in joining? Please contact us.
Sejin Kim, Graduate Student
PhD Candidate, University of Toronto
Talk title: Med-ImageNet – a public repository of 100,000+ 3D medical images
Talk summary: The computational power of deep learning has proven difficult to harness for 3D medical imaging analysis. While deep learning has brought tremendous advances to segmentation tasks, it remains unclear whether it can deliver meaningful performance on classification tasks such as survival or adverse-event prediction, where models often overfit and learn non-robust features. Transfer learning from ImageNet has shown promising results, but pre-training deep learning models on natural images is suboptimal for medical imaging applications. Med-ImageNet aims to address these issues by creating a unique compendium of radiological images that are "AI-ready," facilitating the development of foundation AI models for medical imaging applications. However, the source datasets are heterogeneous: they use different file formats and metadata nomenclatures, which prevents their joint analysis. Because these datasets are acquired in clinical workflows, they are not easy to prepare for machine learning projects. Sejin will present the robust data-curation platform the team has developed to address this challenge.
Jun Ma, PhD
Post-doctoral Fellow, University of Toronto
Talk title: Segment Anything in Medical Images
Talk summary: Medical imaging plays an indispensable role in clinical practice. Accurate and efficient medical image segmentation provides a means of delineating regions of interest and quantifying various clinical metrics. However, building customized segmentation models for each medical imaging task can be a daunting and time-consuming process, limiting widespread adoption in clinical practice. In this talk, Jun will introduce MedSAM, a segmentation foundation model that enables universal segmentation across a wide range of medical imaging tasks and modalities. MedSAM achieved remarkable improvements across 30 segmentation tasks, surpassing the existing segmentation foundation model by a large margin. MedSAM also demonstrated zero-shot and few-shot capabilities, segmenting unseen tumor types and adapting to new imaging modalities with minimal effort. These results validate the versatility of MedSAM compared to existing customized segmentation models, underscoring its potential to transform medical image segmentation and enhance clinical practice.