(Data) Shift Happens, and How Should We Handle It?
You are cordially invited to the weekly CS Machine Learning Lunch Meetings. This is a chance to get to know machine learning professors and talk with your fellow researchers. This week's meeting will be on Tuesday, Feb. 7, 12:15-1:15pm in CS 1240. Professor Sharon Li will tell us about out-of-distribution issues; see the abstract below. If you would like to be informed of future CS Machine Learning Lunch Meetings, please sign up for our mailing list at https://lists.cs.wisc.edu/mailman/listinfo/mllm -- please use your cs or wisc email. After you enter your email, the system will send you a confirmation email. Only after you respond to that email will you be on the mailing list.
When deploying machine learning models in the open and non-stationary world, their reliability is often challenged by the presence of out-of-distribution (OOD) samples. Since data shifts are prevalent in the real world, identifying OOD inputs has become an important problem in machine learning. In this talk, I will discuss challenges, research progress, and opportunities in OOD detection. The work is motivated by the insufficiency of existing learning objectives such as empirical risk minimization (ERM), which focus on minimizing error only on the in-distribution (ID) data but do not explicitly account for the uncertainty that arises outside the ID data. To mitigate this fundamental limitation, I will introduce a new algorithmic framework that jointly optimizes for both accurate classification of ID samples and reliable detection of OOD data. The learning framework integrates distributional uncertainty as a first-class construct in the learning process, thus enabling both accuracy and safety guarantees.
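To make the "jointly optimizes" idea concrete, here is a minimal sketch of one common way such a joint objective can be written: a standard cross-entropy term on ID data plus an energy-based regularizer that separates ID and OOD samples by their free energy. This is an illustrative assumption, not necessarily the exact objective of the talk; the margins `m_in`, `m_out` and weight `lam` are hypothetical hyperparameters.

```python
import numpy as np

def logsumexp(logits, axis=-1):
    """Numerically stable log-sum-exp over the class dimension."""
    m = logits.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def joint_loss(id_logits, id_labels, ood_logits, m_in=-5.0, m_out=-1.0, lam=0.1):
    """Cross-entropy on ID data plus an energy-margin OOD regularizer (sketch)."""
    # Standard ERM term: cross-entropy on in-distribution samples.
    lse = logsumexp(id_logits)
    ce = np.mean(lse - id_logits[np.arange(len(id_labels)), id_labels])
    # Free energy E(x) = -logsumexp(logits): lower for ID, higher for OOD.
    e_id = -logsumexp(id_logits)
    e_ood = -logsumexp(ood_logits)
    # Squared hinges push ID energies below m_in and OOD energies above m_out,
    # so a simple energy threshold can flag OOD inputs at test time.
    reg = np.mean(np.maximum(0.0, e_id - m_in) ** 2) + \
          np.mean(np.maximum(0.0, m_out - e_ood) ** 2)
    return ce + lam * reg
```

Training with this objective lets a single model both classify ID inputs and expose an energy score for OOD detection, rather than bolting a detector onto an ERM-trained classifier after the fact.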