Beyond Bias: Algorithmic Unfairness, Infrastructure, and Genealogies of Data
Alex Hanna, Ph.D., Senior Research Scientist, Ethical AI at Google
Abstract
Problems of algorithmic bias are often framed in terms of a lack of representative data or of formal fairness optimization constraints to be applied to automated decision-making systems. However, these discussions sidestep deeper issues with the data used in AI, including problematic categorizations and the extractive logics of crowdwork and data mining. In this talk, Dr. Hanna makes two interventions: first, reframing data as a form of infrastructure and, as such, implicating politics and power in the construction of datasets; and second, discussing the development of a research program around the genealogy of datasets used in machine learning and AI systems. These genealogies should be attentive to the constellation of organizations and stakeholders involved in a dataset's creation; the intent, values, and assumptions of its authors and curators; and its adoption by subsequent researchers.
Bio
Alex Hanna is a sociologist and senior research scientist on the Ethical AI team at Google. Previously, she was an Assistant Professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto. Her research centers on the origins of the training data that form the informational infrastructure of AI systems and algorithmic fairness frameworks, and on the ways these datasets exacerbate racial, gender, and class inequality.
Talk presented by the Information School / School of Computer, Data & Information Sciences and the Sociology Department in the College of Letters & Science at UW-Madison.
Red Talks - https://www.cs.wisc.edu/cdis-red-talks/