Creating actionable insights in human health.
Designing Learning Methods for Health that are Robust, Private, and Fair
We work on robust machine learning models that can efficiently and accurately model events from healthcare data, and we investigate best practices for multi-source integration and for learning domain-appropriate representations.
When personalization harms: Reconsidering the use of group attributes in prediction. V Suriyakumar, M Ghassemi, B Ustun. ICML 2023.
Change is Hard: A Closer Look at Subpopulation Shift. Y Yang, H Zhang, D Katabi, M Ghassemi. ICML 2023.
Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning. N Dullerud, K Roth, K Hamidieh, N Papernot, M Ghassemi. ICLR 2022.
Learning Optimal Predictive Checklists. H Zhang, Q Morris, B Ustun, M Ghassemi. NeurIPS 2021.
Simultaneous Similarity-based Self-Distillation for Deep Metric Learning. K Roth, T Milbich, B Ommer, JP Cohen, M Ghassemi. ICML 2021.
Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings. VM Suriyakumar, N Papernot, A Goldenberg, M Ghassemi. FAccT 2021.
SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness. N Ng, K Cho, M Ghassemi. EMNLP 2020.
Auditing Bias and Improving Ethics in Health with ML
The labels we obtain from health research and health practice are based on decisions made by humans as part of a larger system. Auditing and improving model fairness, and understanding the trade-offs that other constructs such as privacy may dictate, are important parts of responsible machine learning in health.
In medicine, how do we machine learn anything real? M Ghassemi, EO Nsoesie. Patterns. 2022.
AI recognition of patient race in medical imaging: a modelling study. JW Gichoya, et al. The Lancet Digital Health. 2022.
Write It Like You See It: Detectable Differences in Clinical Notes By Race Lead To Differential Model Recommendations. H Adam, MY Yang, K Cato, I Baldini, C Senteio, LA Celi, J Zeng, M Singh, M Ghassemi. AIES 2022.
The false hope of current approaches to explainable artificial intelligence in health care. M Ghassemi, L Oakden-Rayner, AL Beam. The Lancet Digital Health. 2021.
Ethical machine learning in healthcare. IY Chen, E Pierson, S Rose, S Joshi, K Ferryman, M Ghassemi. Annual Review of Biomedical Data Science. 2021.
Challenges to the reproducibility of machine learning models in health care. AL Beam, AK Manrai, M Ghassemi. Journal of the American Medical Association. 2020.
Addressing Challenges of Designing and Evaluating Systems
A perfect model will fail if it is not used appropriately or does not conform well to the environment in which it operates. We work to define how models can interact with expert and non-expert users so that overall health practice and knowledge are actually improved.
Mitigating the impact of biased artificial intelligence in emergency decision-making. H Adam, A Balagopalan, E Alsentzer, F Christia, M Ghassemi. Communications Medicine. 2022.
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. A Balagopalan, H Zhang, K Hamidieh, T Hartvigsen, F Rudzicz, M Ghassemi. FAccT 2022.
Get To The Point! Problem-Based Curated Data Views To Augment Care For Critically Ill Patients. M Zhang, D Ehrmann, M Mazwi, D Eytan, M Ghassemi, F Chevalier. CHI 2022.
Medical Dead-ends and Learning to Identify High-risk States and Treatments. M Fatemi, TW Killian, J Subramanian, M Ghassemi. NeurIPS 2021.
Do as AI say: susceptibility in deployment of clinical decision-aids. S Gaube, H Suresh, M Raue, A Merritt, SJ Berkowitz, E Lermer, M Ghassemi. npj Digital Medicine. 2021.