Title: All Roads Lead to Likelihood: The Value of Reinforcement Learning in Fine-Tuning

Abstract: From a first-principles perspective, it may seem odd that the strongest results in foundation model fine-tuning (FT) are achieved via a relatively complex, two-stage training procedure. Specifically, one first trains a reward model (RM) on some dataset (e.g. human preferences) before using it to provide online feedback as part of a downstream reinforcement learning (RL) procedure, rather than directly optimizing the policy parameters on the dataset via offline maximum likelihood estimation. In fact, from an information-theoretic perspective, we can only lose information by passing through a reward model (a data-processing argument) and cannot create any new information via on-policy sampling. To explain this discrepancy, we scrutinize several hypotheses on the value of RL in FT through both theoretical and empirical lenses. Of the hypotheses considered, we find the most support for the following explanation: on problems with a generation-verification gap, the ease of learning the relatively simple RM (verifier) from the preference data, combined with the ability of the downstream RL procedure to then restrict its search space to the subset of policies (generators) that are optimal for such simple verifiers, is what leads to the superior performance of online FT.

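The following is a minimal, self-contained sketch (not from the talk) of the two pipelines the abstract contrasts, in a toy bandit setting over K candidate responses: offline maximum likelihood on the preference data versus the two-stage route of fitting a Bradley-Terry reward model and then running REINFORCE against it. All names and modeling choices here (numpy, the Bradley-Terry preference model, the softmax policy) are illustrative assumptions; the sketch is only meant to make the two-stage structure concrete and does not exhibit the generation-verification gap the abstract discusses.

# Toy sketch contrasting offline MLE with the two-stage RM + RL pipeline.
# Setting (assumed, not from the talk): K candidate responses, a softmax
# policy over them, and preference pairs drawn from a Bradley-Terry model.
import numpy as np

rng = np.random.default_rng(0)
K = 8                                   # number of candidate responses
true_reward = rng.normal(size=K)        # hidden ground-truth quality

def sample_prefs(n=500):
    # Preference dataset: pairs (winner, loser) sampled via Bradley-Terry.
    pairs = []
    for _ in range(n):
        i, j = rng.choice(K, size=2, replace=False)
        p_i = 1.0 / (1.0 + np.exp(-(true_reward[i] - true_reward[j])))
        pairs.append((i, j) if rng.random() < p_i else (j, i))
    return pairs

prefs = sample_prefs()

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Option A: offline MLE -- directly raise the likelihood of preferred responses.
theta_mle = np.zeros(K)
for _ in range(2000):
    pi = softmax(theta_mle)
    grad = np.zeros(K)
    for (w, _) in prefs:
        grad[w] += 1.0
        grad -= pi                      # gradient of log pi(w) w.r.t. theta
    theta_mle += 0.01 * grad / len(prefs)

# Option B, stage 1: fit a reward model with the Bradley-Terry logistic loss.
r_hat = np.zeros(K)
for _ in range(2000):
    grad = np.zeros(K)
    for (w, l) in prefs:
        p = 1.0 / (1.0 + np.exp(-(r_hat[w] - r_hat[l])))
        grad[w] += (1.0 - p)
        grad[l] -= (1.0 - p)
    r_hat += 0.01 * grad / len(prefs)

# Option B, stage 2: online RL (REINFORCE) against the learned reward model.
theta_rl = np.zeros(K)
for _ in range(2000):
    pi = softmax(theta_rl)
    a = rng.choice(K, p=pi)             # on-policy sample
    baseline = pi @ r_hat               # variance-reduction baseline
    grad_logpi = -pi
    grad_logpi[a] += 1.0
    theta_rl += 0.05 * (r_hat[a] - baseline) * grad_logpi

best = int(np.argmax(true_reward))
print("best response:", best)
print("MLE policy prob on best:", softmax(theta_mle)[best])
print("RM+RL policy prob on best:", softmax(theta_rl)[best])

In this toy setting both routes consume exactly the same offline preference data; the two-stage route differs only in that it compresses the data into a reward model and then searches over policies with on-policy samples, which is the structural difference the talk's hypotheses aim to explain.
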
Bio: Gokul Swamy is a 5th-year PhD student in the Robotics Institute at Carnegie Mellon University, where he works with Drew Bagnell and Steven Wu. His research centers on learning to make decisions when objectives are hard to specify (e.g. imitation learning, reinforcement learning from human feedback) and builds on techniques from RL and game theory. He has spent summers at Google Research, MSR, NVIDIA, Aurora, and SpaceX and holds M.S. / B.S. degrees from UC Berkeley.