Distinguished Lecture: Calibrated Language Models Must Hallucinate

Santosh Vempala: Frederick Storey II Professor, School of Computer Science, Georgia Tech

Event Details

Date
Monday, September 23, 2024
Time
4-5 p.m.
Location
Description

Live stream: https://uwmadison.zoom.us/j/93271287631?pwd=cLJKw6jSNw4OeKPN57awFOveAPsfll.1

ABSTRACT: Recent language models generate false but plausible-sounding text with surprising frequency. Such “hallucinations” are an increasingly serious problem. Here we present an inherent statistical lower bound on the rate at which pretrained language models hallucinate certain types of statements. The lower bound is independent of the transformer architecture and the quality of data. For “arbitrary” facts whose veracity cannot be determined from the training data, hallucinations must occur for language models that are “calibrated”, an appropriate and desirable property for generative language models. Specifically, if the maximum probability of any fact is bounded, we show that the probability of generating a hallucination is close to the fraction of facts that occur exactly once in the training data (a “Good-Turing” estimate), even assuming ideal training conditions. One conclusion is that models pretrained to be good predictors (i.e., calibrated) may require post-training to mitigate hallucinations on arbitrary facts. However, our analysis also suggests that there is no statistical reason that pretraining will lead to hallucination on facts that tend to appear more than once in the training data or on systematic facts (like arithmetic calculations); different architectures and learning algorithms may mitigate these latter types of hallucinations.
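As a rough illustration (not the formal statement from the talk), the “Good-Turing” quantity mentioned in the abstract is simply the fraction of observed facts that appear exactly once in the training sample. The sketch below computes it for a toy corpus; the fact strings and function name are made up for the example.

```python
from collections import Counter

def good_turing_singleton_fraction(observed_facts):
    """Fraction of fact mentions whose fact occurs exactly once in the sample.

    This is the classical Good-Turing estimate of the unseen-fact mass; per the
    abstract, a calibrated model's hallucination rate on "arbitrary" facts is
    close to this quantity.
    """
    counts = Counter(observed_facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observed_facts)

# Toy corpus of hypothetical atomic facts drawn from training data.
facts = [
    "alice-born-1970", "bob-born-1985", "alice-born-1970",
    "carol-phd-2001", "dave-award-2010", "erin-born-1990",
]
print(good_turing_singleton_fraction(facts))  # 4 singletons / 6 mentions ~= 0.67
```

On this toy sample the estimate is about 0.67, suggesting (under the talk's idealized assumptions) that a calibrated model trained on such data would hallucinate on roughly that fraction of arbitrary-fact queries.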

This is joint work with Adam Kalai of OpenAI.

BIO: Santosh Vempala is the Frederick Storey II Professor in the School of Computer Science at Georgia Tech, with courtesy appointments in Mathematics and Industrial and Systems Engineering (ISyE). He served as the founding director of the Algorithms and Randomness Center (2006-2011), initiated the Computing-for-Good program, and is currently the director of GT's interdisciplinary ACO PhD program. His research interests are broadly in the theory of algorithms, with emphasis on tools for high-dimensional sampling, learning, and optimization. He graduated from CMU in 1997, advised by Avrim Blum, and was on the MIT Mathematics faculty until 2007. He gets rather excited when a phenomenon that appears complex from one perspective turns out to be simple from another. In recent years, he has been trying to understand the limits of sampling and optimization algorithms, robustness in learning, and building a computational theory of the brain.

Cost
Free

Tags