MaLMIC Virtual Open Forum on Validating AI in Healthcare: Generalizability and Explainability – Forum, March 13, 2025


Join the Machine Learning in Medical Imaging Consortium (MaLMIC) for an opportunity to network on machine learning in medical imaging.

MaLMIC Virtual Open Forum on Validating AI in Healthcare: Generalizability and Explainability

Join us for a session on advancing trustworthy AI in healthcare, featuring two complementary talks addressing robustness, validation, and clinical translation. PhD Candidate Abhishek Moturu will present LiLAW, a lightweight meta-learning framework that dynamically reweights training samples to improve robustness to label noise, stabilize fine-tuning, support safer use of synthetic data, and reduce fairness disparities in high-stakes settings such as medical imaging. Dr. Benjamin Haibe-Kains will examine the challenges limiting radiomics in oncology and introduce READII-2-ROQC, a reproducible, open-source processing and quality control pipeline designed to rigorously assess radiomic features and strengthen their path toward reliable clinical adoption.

March 13th, 2026 
12:00 to 1:00 p.m. Eastern

Interested in joining? Please contact us.


Benjamin Haibe-Kains, PhD
Senior Scientist and Professor, Princess Margaret Cancer Centre and University of Toronto

Dr. Benjamin Haibe-Kains is Executive AI Scientific Director at the University Health Network, Senior Scientist at Princess Margaret Cancer Centre, and Professor of Medical Biophysics at the University of Toronto. He holds the Canada Research Chair in Computational Pharmacogenomics and leads major data science initiatives, including the Cancer Digital Intelligence Program and the AI Hub at UHN. His research applies AI and machine learning to integrate large-scale chemical, radiological, and genomic data to develop predictive models for drug development and cancer treatment, advancing precision medicine and improving patient outcomes.

Talk Title: Unmasking radiomic signatures: novel biomarkers or volume in disguise?

Talk Description: Radiomics has expanded rapidly in oncology, producing signatures that aim to predict cancer outcomes and survival, yet it has not achieved clinical translation. Commonly cited limitations include inconsistent and complex methodology, insufficient data availability for external validation, and a lack of open-source processing tools that handle common radiological image formats. A smaller body of studies, however, raises another major concern: whether radiomic features with high predictive value are merely surrogates for tumour volume. In this talk, we'll showcase READII-2-ROQC, a reproducible, automated, open-source processing and quality control pipeline that assesses radiomic features through negative control image generation.
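The negative-control idea can be illustrated with a short sketch (an illustrative assumption, not the READII-2-ROQC implementation): randomly permuting voxel intensities inside the region of interest destroys spatial texture while preserving the region's volume and intensity histogram, so a "texture" feature that remains predictive on such controls is likely acting as a volume surrogate. The function name and synthetic arrays below are hypothetical.

```python
import numpy as np

def shuffled_negative_control(image, mask, seed=None):
    """Build a negative-control image by permuting voxel intensities
    inside the region of interest (ROI). Spatial texture is destroyed,
    but the ROI volume and intensity histogram are preserved."""
    rng = np.random.default_rng(seed)
    control = image.copy()
    roi_values = image[mask]               # voxels inside the ROI
    control[mask] = rng.permutation(roi_values)
    return control

# Tiny synthetic example: a 4x4x4 "scan" with a 2x2x2 cubic ROI.
image = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
mask = np.zeros_like(image, dtype=bool)
mask[1:3, 1:3, 1:3] = True

control = shuffled_negative_control(image, mask, seed=0)
```

A volume-based feature (voxel count) computed on `control` is identical to the original, whereas texture features that depend on voxel arrangement are scrambled, which is exactly the contrast a negative control exploits.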

Abhishek Moturu, PhD Candidate
University of Toronto

Abhishek Moturu is a PhD candidate in Computer Science at Trinity College, University of Toronto, where his research focuses on training robust and fair machine learning models in the presence of noisy, mislabelled, synthetic, or otherwise challenging data. His work spans pediatric cancer detection at SickKids, facial pain detection in dementia at UHN, AI in dermatology, and medical vision-language models. He is Education Trainee Co-Lead at T-CAIREM, advancing AI literacy for medical trainees across Canada, and currently serves as CTO of CoordCare, an agentic AI startup automating healthcare administrative tasks.

Talk Title: Learning What Matters When Data Is Noisy

Talk Description: Deep neural networks are typically trained as if every data point deserves equal trust, yet real-world datasets are noisy, heterogeneous, imbalanced, and increasingly augmented with synthetic samples. In this talk, I will introduce LiLAW, a lightweight meta-learning framework that learns just three global parameters to dynamically reweight samples based on evolving difficulty, without additional models, pruning, or heavy computational overhead. We will see how this simple idea consistently improves robustness to label noise, stabilizes fine-tuning, enables safer integration of synthetic data, and even reduces fairness disparities across subgroups. Together, these results offer a principled and practical way to rethink how we train models in imperfect data regimes and high-stakes settings such as medical imaging.
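The reweighting idea can be sketched generically (this is not the LiLAW implementation; the easy/moderate/hard binning rule by batch loss statistics and the softmax parameterization are illustrative assumptions): each sample's loss determines a difficulty bin, and three learnable global parameters set the per-bin weights.

```python
import numpy as np

def difficulty_weights(losses, theta):
    """Weight each sample by its difficulty bin using three global
    parameters. Samples are binned into easy / moderate / hard by
    their loss relative to the batch mean +/- one standard deviation;
    a softmax over theta keeps all three bin weights positive."""
    mu, sigma = losses.mean(), losses.std()
    bins = np.digitize(losses, [mu - sigma, mu + sigma])  # 0=easy, 1=moderate, 2=hard
    w = np.exp(theta - theta.max())
    w = w / w.sum()                      # softmax over the three parameters
    return w[bins]

# Toy batch: four moderate losses plus one suspiciously hard sample,
# e.g. a mislabelled image whose loss stays high.
losses = np.array([0.20, 0.30, 0.25, 0.28, 3.00])
theta = np.array([0.0, 1.0, -2.0])       # hypothetical learned values
weights = difficulty_weights(losses, theta)
weighted_loss = (weights * losses).sum() / weights.sum()
```

In a meta-learning setup, `theta` would itself be updated from a small clean validation set rather than fixed by hand; with the values shown, the likely-mislabelled hard sample is down-weighted and the weighted loss falls below the plain batch mean.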
