Vishnu Lokhande PhD Final Oral Defense
Event Details
Title: Handling Correlations and Nuisance Variables in Deep Learning Models Robustly
Abstract: Over the past decade, we have witnessed significant advances in the capabilities of deep neural network models in vision and machine learning. However, issues related to bias, discrimination, and fairness in general have received a great deal of negative attention (e.g., mistakes in surveillance and animal-human confusion in vision models). But bias in AI models goes beyond compliance with anti-discrimination legislation, ensuring dataset balance, or making model behavior more predictable on minority groups. While impartiality towards sensitive attributes like gender, race, and age is very relevant, the general study of bias allows a much better understanding of how nuisance or co-existing/spurious attributes influence the behavior of models, how to mitigate such influence, and consequently how to build robust and more inclusive models. If fairness is incorporated as a first-order constraint in the model development lifecycle, are the models more interpretable and consistent with human perception? How does controlling nuisance variables enable dataset pooling in multi-site international studies to understand early signs of disease? What are the computational/statistical challenges of learning representations that are fairness-aware? In this talk, I will cover a range of results that shed light on these questions and outline an exciting research agenda with applications spanning industry deployment of foundational models, healthcare, and social science research.