Roller R, Hahn M, Ravichandran AM, Osmanodja B, Oetke F, Sassi Z, Burchardt A, Netter K, Budde K, Herrmann A, Strapatsas T, Dabrock P, Moeller S (2025)
One Size Fits None: Rethinking Fairness in Medical AI
Publication Language: English
Publication Status: Accepted
Publication Type: Unpublished / Preprint
Future Publication Type: Article in Edited Volumes
Publication year: 2025
DOI: 10.48550/ARXIV.2506.14400
Machine learning (ML) models are increasingly used to support clinical decision-making. However, real-world medical datasets are often noisy, incomplete, and imbalanced, leading to performance disparities across patient subgroups. These differences raise fairness concerns, particularly when they reinforce existing disadvantages for marginalized groups. In this work, we analyze several medical prediction tasks and demonstrate how model performance varies with patient characteristics. While ML models may demonstrate good overall performance, we argue that subgroup-level evaluation is essential before integrating them into clinical workflows. By conducting a performance analysis at the subgroup level, differences can be clearly identified, allowing, on the one hand, for performance disparities to be considered in clinical practice, and on the other hand, for these insights to inform the responsible development of more effective models. In this way, our work contributes to a practical discussion around the subgroup-sensitive development and deployment of medical ML models and the interconnectedness of fairness and transparency.
APA:
Roller, R., Hahn, M., Ravichandran, A. M., Osmanodja, B., Oetke, F., Sassi, Z., ... Moeller, S. (2025). One Size Fits None: Rethinking Fairness in Medical AI. (Unpublished, Accepted).
MLA:
Roller, Roland, et al. One Size Fits None: Rethinking Fairness in Medical AI. Unpublished, Accepted. 2025.