
Do LLMs reason as we do? A synthetic study of transformers' learning dynamics for compositions

Professor Yiqiao Zhong (Statistics) at Machine Learning Lunch Meetings

Event Details

Date
Tuesday, February 17, 2026
Time
12:15-1:15 p.m.
Location
7th Floor Seminar Room, Morgridge Hall
Description

Large language models (LLMs) have demonstrated remarkable reasoning capabilities, yet the mechanisms underlying these behaviors remain poorly understood: how do models build compositions, and what constitutes faithful chain-of-thought (CoT) reasoning? In this talk, I present a unified investigation into the learning dynamics and emergent reasoning behaviors of transformer-based autoregressive models through controlled synthetic experiments.

First, I introduce "shattered compositionality": instead of learning skills in a human-like sequential order, models often acquire subskills in parallel or in reverse, leading to mixing errors and brittle generalization under distribution shift. This behavior reflects correlational pattern matching rather than causal, procedural composition.

Second, I show that autoregressive training on CoT traces yields faithful causal reasoning only when training noise is below a critical threshold. As noise increases, models undergo phase transitions between distinct reasoning modes, including an intermediate regime that encodes uncertainty and suggests the emergence of implicit self-verification. Together, these results reveal how training data quality shapes emergent reasoning behavior in LLMs, offering a unified explanation for brittleness and unfaithful reasoning.

(This talk is part of the weekly Machine Learning Lunch Meetings (MLLM), held every Tuesday from 12:15 to 1:15 p.m. Professors from Computer Sciences, Statistics, ECE, the iSchool, and other departments will discuss their latest research in machine learning, covering both theory and applications. This is a great opportunity to network with faculty and fellow researchers, learn about cutting-edge research at our university, and foster new collaborations. For the talk schedule, please visit https://sites.google.com/view/wiscmllm/home. To receive future weekly talk announcements, please subscribe to our UW Google Group at https://groups.google.com/u/1/a/g-groups.wisc.edu/g/mllm.)


Cost
Free
Accessibility

We value inclusion and access for all participants and are pleased to provide reasonable accommodations for this event. Please call 608-334-7269 or email jerryzhu@cs.wisc.edu to make a disability-related accommodation request. Reasonable effort will be made to support your request.
