Abstract:
Large-scale information retrieval is the problem of finding relevant documents in a large corpus, while extreme multi-label classification is the problem of predicting labels from an enormous label set. At first glance, the problems appear quite different: one is a large-scale search problem and the other a large-scale supervised learning problem. However, modern versions of these problems reveal similarities, as implicit supervision is often available for retrieval problems, while extreme multi-label classification problems need an efficient search strategy due to the extremely large label set. Yet current approaches to these problems are diametrically opposed. Modern retrieval problems are often solved by the two-tower approach, in which both query and document embeddings are parameterized by an encoder model and learned from the supervision, after which an appropriate index structure is built that permits efficient approximate nearest neighbor search. On the other hand, recent methods for extreme classification first form a semantic index structure and then learn classifiers to efficiently navigate this structure. In this talk, I will review extreme classification approaches, present recent work that jointly learns the semantic index and the search parameters, and discuss how ideas from both large-scale retrieval and extreme classification can be unified to build general end-to-end search algorithms in large output spaces.
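To make the two halves of this picture concrete, here is a minimal, illustrative sketch (not the speaker's method): a toy two-tower retriever whose index is a one-level semantic clustering of the document embeddings, so a query is first routed to the most promising cluster and only that cluster is scanned, rather than the whole corpus. All names (`ToyEncoder`, `ClusteredIndex`) are hypothetical, and the encoders are fixed random linear maps standing in for trained neural towers.

```python
# A toy two-tower retriever with a one-level semantic (clustered) index.
# Sketch only: encoders are random linear maps, not trained models.
import numpy as np

rng = np.random.default_rng(0)

class ToyEncoder:
    """Stand-in for a trained query/document tower: a fixed linear map."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(size=(in_dim, out_dim)) / np.sqrt(in_dim)

    def __call__(self, x):
        z = x @ self.W
        return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit-normalize

class ClusteredIndex:
    """One-level semantic index: Lloyd-style centroids over document embeddings.
    Search scores the centroids first, then scans only the winning cluster."""
    def __init__(self, doc_emb, n_clusters=8, n_iters=10):
        n = doc_emb.shape[0]
        self.centroids = doc_emb[rng.choice(n, n_clusters, replace=False)].copy()
        for _ in range(n_iters):  # plain k-means iterations on inner product
            assign = (doc_emb @ self.centroids.T).argmax(axis=1)
            for c in range(n_clusters):
                members = doc_emb[assign == c]
                if len(members):
                    self.centroids[c] = members.mean(axis=0)
        self.assign = (doc_emb @ self.centroids.T).argmax(axis=1)
        self.doc_emb = doc_emb

    def search(self, q_emb, top_k=3):
        c = int((q_emb @ self.centroids.T).argmax())  # route query to one cluster
        ids = np.flatnonzero(self.assign == c)        # candidate documents only
        scores = self.doc_emb[ids] @ q_emb
        order = np.argsort(-scores)[:top_k]
        return ids[order], scores[order]

# Toy corpus: 1000 documents with 64 raw features, embedded into 16 dimensions.
docs = rng.normal(size=(1000, 64))
doc_tower, query_tower = ToyEncoder(64, 16), ToyEncoder(64, 16)
index = ClusteredIndex(doc_tower(docs))

query = rng.normal(size=64)
ids, scores = index.search(query_tower(query))
print("top documents:", ids, "scores:", np.round(scores, 3))
```

The extreme-classification methods described in the abstract follow the same shape with labels in place of documents, except that routing is done by learned classifiers at each node of the semantic index; the work presented in the talk learns the index structure and those search parameters jointly rather than fixing the clustering first, as this sketch does.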
Bio:
Inderjit Dhillon is the Gottesman Family Centennial Professor of Computer Science and Mathematics at UT Austin, where he is also the Director of the ICES Center for Big Data Analytics. Currently he is on leave from UT Austin and is a Distinguished Scientist at Google. Prior to that, he was Vice President and Distinguished Scientist at Amazon, where he headed the Amazon Research Lab in Berkeley, California, and he and his team developed and deployed state-of-the-art machine learning methods for Amazon Search. His main research interests are in machine learning, big data, deep learning, network analysis, linear algebra, and optimization. He received his B.Tech. degree from IIT Bombay and his Ph.D. from UC Berkeley. Inderjit has received several awards, including the ICES Distinguished Research Award, the SIAM Outstanding Paper Prize, the Moncrief Grand Challenge Award, the SIAM Linear Algebra Prize, the University Research Excellence Award, and the NSF CAREER Award. He has published over 200 journal and conference papers and has served on the editorial boards of the Journal of Machine Learning Research, the IEEE Transactions on Pattern Analysis and Machine Intelligence, Foundations and Trends in Machine Learning, and the SIAM Journal on Matrix Analysis and Applications. Inderjit is an ACM Fellow, an IEEE Fellow, a SIAM Fellow, an AAAS Fellow, and an AAAI Fellow.