Title: Reasoning about Large Language Models

Abstract: Today, many expect AI to tackle complex problems by performing reasoning—commonly interpreted as large language models generating sequences of tokens that resemble chains of thought. Historically, however, reasoning in AI meant something quite different: executing symbolic algorithms that performed logical or probabilistic deduction to derive definite answers to questions about knowledge. In this talk, I show that these old-fashioned ideas remain highly relevant to reasoning with large language models today. In particular, I will demonstrate that integrating symbolic reasoning algorithms directly into the architecture of language models enables state-of-the-art capabilities in controllable text generation and alignment.

Bio: Guy Van den Broeck is a Professor and Samueli Fellow in UCLA's Computer Science Department, where he directs the StarAI Lab. His research lies at the intersection of machine learning, knowledge representation, and reasoning. His contributions have earned awards at leading conferences, including AAAI, UAI, KR, and OOPSLA. Guy is the recipient of an NSF CAREER Award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.