Talk: Self-Supervised Learning beyond Vision and Language
Maja Rudolph: Senior Research Scientist, Bosch Center for Artificial Intelligence
LIVE STREAM: https://uwmadison.zoom.us/j/99096503506?pwd=NGRqbTgvQ1prc1hpZFVBZHowRFFkZz09
Abstract: Self-supervised learning (SSL) has emerged as a powerful paradigm for machine learning, especially for drawing insights from unlabeled data. The key idea is to introduce auxiliary prediction tasks and to train a deep model to solve them. If the tasks are designed well, the trained model will be useful for a number of purposes, such as anomaly detection, feature extraction, and forecasting. Unfortunately, most successful approaches to SSL rely on domain-specific inductive biases and are therefore limited to individual use cases. In this talk, I present advanced self-supervised learning losses that facilitate domain-general self-supervised learning beyond images and text. Exponential family embeddings, for example, generalize word embeddings to provide insight into a wide range of applications: they are a useful tool for studying zebrafish brains in neuroscience, shopping behavior in economics, or language evolution in computational social science. Similarly, neural transformation learning (NTL) is a new general-purpose tool for self-supervised anomaly detection. While related methods in computer vision typically require handcrafted image transformations such as rotations, blurring, or flipping, NTL automatically learns suitable transformations from the data and generalizes self-supervised anomaly detection to almost any data type.
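To make the NTL idea concrete, below is a minimal NumPy sketch of an NTL-style contrastive objective for a single sample: each learned transformation should produce a view that is similar to the original sample's embedding but dissimilar to the other transformed views. Random linear maps stand in for the learned neural transformations, and the encoder is the identity; in the actual method both are neural networks trained jointly by minimizing this loss. All names here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def ntl_loss(x, transforms, encoder, tau=0.1):
    """Contrastive loss for one sample x (illustrative sketch).

    Each transformation T_k yields a view whose embedding should be
    close to the embedding of x (positive pair) and far from the
    embeddings of the other transformed views (negatives).
    """
    z = encoder(x)
    views = [encoder(T @ x) for T in transforms]
    loss = 0.0
    for k, zk in enumerate(views):
        pos = np.exp(cosine(zk, z) / tau)
        neg = sum(np.exp(cosine(zk, zl) / tau)
                  for l, zl in enumerate(views) if l != k)
        loss += -np.log(pos / (pos + neg))
    return loss / len(views)

# Toy setup: 8-dimensional data, 4 random linear "transformations".
d, K = 8, 4
transforms = [rng.normal(size=(d, d)) for _ in range(K)]
x = rng.normal(size=d)
score = ntl_loss(x, transforms, encoder=lambda v: v)
print(score)
```

Because the loss is computed per sample, it doubles as an anomaly score at test time: samples on which the trained transformations solve the auxiliary task poorly receive a high loss and are flagged as anomalous. Nothing in the sketch assumes image-structured input, which is what lets this style of objective extend beyond vision.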
Bio: Maja Rudolph is a Senior Research Scientist at the Bosch Center for Artificial Intelligence. Her research is focused on deep probabilistic modeling and self-supervised learning. As of 2022, Maja is also Technical Lead of the Bosch Center for AI. In this role, she is responsible for AI excellence in the work of over 200 associates in 6 different countries. Maja holds a Ph.D. in computer science from Columbia and a B.S. in mathematics from MIT.