Probability Seminar

Christopher Earls (Cornell)
LLMs Learn Physical Rules Governing Dynamical Systems: A Geometric Investigation of Emergent Algorithms

Monday, April 27, 2026 - 4:00pm
Malott 406

Large Language Models (LLMs) can exhibit surprising emergent (and undesigned) abilities, such as in-context learning. We explore the ability of unprompted LLMs to autoregressively continue numerical time-series data that emanate from well-understood physical processes whose governing transition probabilities are known to us. We find that LLMs are able to discern the underlying probabilities in context (on the fly), and that their ground-truth error decays polynomially as context length grows.

This leads us to wonder how LLMs reason in context about stochastic time series more generally. Using tools from information geometry, we uncover, quantify, and describe some interesting LLM behaviors that arise in such in-context learning. Switching from LLMs to small transformer-based toy models, we can observe these behaviors in their nascent form.
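As a rough illustration of the setup described above (a sketch, not the speaker's method): the in-context estimation of known transition probabilities can be mimicked with a toy two-state Markov chain, where the empirical estimate's error shrinks as the observed "context" (path length) grows. The chain and its transition matrix here are hypothetical.

```python
import random

# Toy sketch (not the talk's method): estimate the transition matrix of a
# known two-state Markov chain from an observed path, and watch the
# estimation error shrink as the context (path length) grows.

P_TRUE = [[0.9, 0.1], [0.3, 0.7]]  # hypothetical ground-truth transition probabilities

def simulate(P, n, seed):
    """Sample a length-n state path from the Markov chain with transition matrix P."""
    rng = random.Random(seed)
    state, path = 0, [0]
    for _ in range(n - 1):
        state = 0 if rng.random() < P[state][0] else 1
        path.append(state)
    return path

def estimate(path):
    """Empirical transition probabilities from observed consecutive-pair counts."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def max_error(P, Q):
    """Largest absolute entrywise difference between two transition matrices."""
    return max(abs(P[i][j] - Q[i][j]) for i in range(2) for j in range(2))

if __name__ == "__main__":
    # Error versus context length: longer observed paths give better estimates.
    for n in (100, 1_000, 10_000):
        err = max_error(P_TRUE, estimate(simulate(P_TRUE, n, seed=0)))
        print(n, round(err, 4))
```

An LLM reading a serialized version of such a path faces the analogous task of inferring these probabilities on the fly; the talk's claim is that its error decays polynomially in context length.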