Machine Learning Lunch Meeting

What Kinds of Functions do Neural Networks Learn? Theory and Practical Applications

Event Details

Thursday, April 4, 2024
1 p.m.

Everyone is invited to the weekly machine learning lunch meetings, where faculty members from Computer Science, Statistics, ECE, and other departments discuss their latest groundbreaking research in machine learning. This is an opportunity to network with faculty and fellow researchers while learning about the cutting-edge research being conducted at our university.

Speaker: Rob Nowak (ECE)

Abstract: This talk presents a theory that characterizes the types of functions neural networks learn from data, detailing the properties of these functions, solution uniqueness, and bounds on network widths. It highlights the impact of skip connections, low-rank weight matrices, and sparsity in learned representations. Additionally, the framework provides new insights into multi-task learning, underscoring the regularization benefits of incorporating multiple tasks and leading to novel methods for network compression. The theory also has implications for improving implicit neural representations, in which multi-layer neural networks are used to represent a continuous signal, image, or 3D scene. This exploration connects theoretical insights with practical advancements, offering a new view of neural network capabilities and future research directions.
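To make the implicit-neural-representation idea mentioned in the abstract concrete, here is a minimal toy sketch (an illustration only, not the speaker's method): a one-hidden-layer network is fit to samples of a 1-D signal, sin(2πx), and can then be evaluated at any coordinate. For simplicity the hidden layer uses fixed random weights and only the output layer is fit, by least squares, rather than training the whole network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a continuous 1-D "signal" at 200 coordinates in [0, 1].
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel()

# One hidden layer of 64 tanh units with fixed random weights; the network
# maps a coordinate to a signal value, which is the INR setup in miniature.
W1 = rng.normal(0.0, 10.0, (1, 64))
b1 = rng.uniform(-np.pi, np.pi, 64)
H = np.tanh(x @ W1 + b1)

# Fit only the output-layer weights by linear least squares.
w2, *_ = np.linalg.lstsq(H, y, rcond=None)

# Reconstruction error on the training coordinates.
mse = float(np.mean((H @ w2 - y) ** 2))

# The representation is continuous: query the network at an arbitrary
# coordinate that was never sampled.
def f(t):
    return float(np.tanh(np.array([[t]]) @ W1 + b1) @ w2)

print(f"MSE: {mse:.2e}, f(0.37) vs sin: {f(0.37):.3f} / {np.sin(2*np.pi*0.37):.3f}")
```

The point of the example is that the signal is stored in the network's weights and can be queried at any real-valued coordinate, not just the sample grid; the talk's theory concerns what functions such trained networks actually represent.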

This talk is based on joint work with Julia Nakhleh, Rahul Parhi, Joe Shenouda, and Liu Yang.