ML+X Forum: Bias and Fairness

Machine Learning Community Forum

Event Details

Date
Tuesday, December 6, 2022
Time
12-1 p.m.
Location
Orchard View Room, Third Floor, Discovery Building
Also offered online
Description

Bias is a somewhat overloaded term in machine learning that often needs to be defined within a specific context. Join the ML Community on Tuesday, Dec. 6, 12-1 p.m., to learn about some of the different forms of bias (e.g., data bias vs. model bias) that can influence a model's decision-making process (for better or for worse), along with tips on how to ensure that your model behaves fairly.

Presenter Lineup
1. Bias in machine learning: the good, the bad, and the ugly — Chris Endemann
2. Locating and analyzing the texture/shape bias in machine and human vision — Kesong Cao
3. What's fair? — Mariah Knowles

Register for the social (morning after ML+X): Want to discuss ML projects and connect with the presenters after this event? Come to the ML Community's monthly social, ML+Coffee. Learn more and register.

Join the ML Community Google group: The ML Community uses a Google group to send reminders about its upcoming events. If you aren't already a member of the Google group, you can use this link to join. Note that you must be signed into a Google account to join the group. If you have any trouble joining, please email facilitator@datascience.wisc.edu.

Cost
Free
