Date Posted: 1/16/2019

"Multi-armed bandit problems explore the general theme of how to get computers to run experiments and learn about the world through that experimentation the way people do,” notes Robert Kleinberg, Associate Professor, Computer Science. With limited resources—time, money, etc.—how do we achieve optimal outcomes? “A navigation app has an algorithm that conducts a sequence of experiments that try out each road segment as frequently as needed in order to estimate how good each segment is,” Kleinberg says. But what if the app sends you down a slower path? A reply is found in considering game theoretic challenges, including one called "multi-armed bandit" problems.

This line of work has led Kleinberg—and his collaborators Peter I. Frazier, Operations Research and Information Engineering; Jon Kleinberg, Computer Science/Information Science; and David Kempe at the University of Southern California—to explore the optimal way to engage in what they term incentivized exploration. The results could prove useful for driving apps, online shopping, and (in separate research with Nicole Immorlica, Senior Researcher at Microsoft) algorithmic selection of music.

Read more at Cornell Research: "What's Behind Your Navigation App" by Jackie Swift.