Realizable Learning is All You Need (via Zoom)
Abstract: The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it's surprising that we still lack a unifying theory explaining these results.
In this talk, we'll introduce exactly such a framework: a simple, model-independent, black-box reduction between agnostic and realizable learnability that explains their equivalence across a wide range of classic models. Further, we'll discuss how this reduction extends our understanding to settings that are traditionally considered difficult to handle, such as learning with arbitrary distributional assumptions or more general loss functions. Finally, we'll discuss some exciting open problems. Based on joint work with Max Hopkins, Daniel Kane, and Shachar Lovett.
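To give a flavor of what a black-box agnostic-to-realizable reduction can look like, here is a minimal sketch for a finite hypothesis class: relabel the unlabeled points with every hypothesis in the class (each relabeled sample is realizable by construction), feed each to the realizable learner, and select the best resulting candidate by empirical risk. This is an illustration in the spirit of the talk, not the speakers' exact construction; all names (`realizable_to_agnostic`, `realizable_learner`) are hypothetical.

```python
def realizable_to_agnostic(realizable_learner, hypotheses, sample):
    """Illustrative agnostic-to-realizable reduction for a finite
    hypothesis class (a sketch, not the construction from the talk).

    sample: list of (x, y) pairs with y in {0, 1}.
    realizable_learner: maps a consistently-labeled sample to a hypothesis.
    hypotheses: a finite iterable of functions x -> {0, 1}.
    """
    xs = [x for x, _ in sample]
    candidates = []
    for h in hypotheses:
        # Relabeling by h makes the sample realizable, so the
        # realizable learner applies as a black box.
        relabeled = [(x, h(x)) for x in xs]
        candidates.append(realizable_learner(relabeled))
    # Select the candidate with lowest empirical error on the true
    # labels (ERM over the small candidate set; a full analysis would
    # use a fresh holdout sample here).
    def emp_err(g):
        return sum(g(x) != y for x, y in sample) / len(sample)
    return min(candidates, key=emp_err)
```

For an infinite class one would enumerate only the distinct behaviors of the class on the sample, but the finite case already shows the black-box structure: the reduction never inspects how the realizable learner works.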
Bio: Gaurav is a fifth-year PhD student in the theory group at UCSD, advised by Sanjoy Dasgupta and Shachar Lovett. He has broad interests in problems related to learning and has recently worked on reinforcement learning theory and learning theory. He has spent some fun summers at Microsoft Research, the Institute for Advanced Study, and the Simons Institute.