Overcoming Uncertainty of Interactions in Static and Dynamic Multi-Agent Settings (via Zoom)

Abstract: Most modern socio-technical systems involve multiple agents that interact with each other and need to make strategic decisions in uncertain settings (such as energy markets, autonomous vehicles, recommendation systems, and financial trading). The presence of a very large number of agents and the uncertainty related to their interactions introduce a number of challenges for learning and control. In this talk, I will discuss how to overcome some of these challenges in both static and dynamic settings.

First, I will consider static games where a very large number of agents interact in heterogeneous ways. In this case, the main challenge for a central planner that wishes to control the system is that the underlying network of interactions is typically unknown, as collecting exact network data might be either too costly or impossible due to privacy concerns. Moreover, methods for designing optimal interventions that rely on exact network data typically do not scale well with the population size. To address these issues, I will present a framework in which the social planner designs interventions based on probabilistic rather than exact information about agents' interactions. I will then introduce the tool of “graphon games” as a way to formally describe strategic interactions in this setting, and I will illustrate how this tool can be exploited to design interventions that are asymptotically optimal in the population size and can be easily computed without requiring exact network data.
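
To make the scalability point concrete, the sketch below computes the equilibrium of a linear-quadratic network game when the probabilistic network model is a stochastic block model (a step-function graphon): the computation runs at the level of communities, so its cost is independent of the population size and needs only block connectivity values, not the realized network. This is a minimal illustrative sketch, not the method from the talk; the function name, the parameter alpha, and the two-community example are assumptions.

    import numpy as np

    def graphon_lq_equilibrium(W, theta, alpha):
        """Illustrative equilibrium of a linear-quadratic game on a
        K-community step-function graphon.

        W:     K x K matrix of graphon (block connectivity) values
        theta: length-K vector of standalone marginal payoffs per community
        alpha: strength of network effects (need |alpha| * rho(W) < 1)
        """
        K = len(theta)
        # Solve a K-dimensional system instead of an n-dimensional one:
        # at equilibrium, every agent in community k plays xbar[k].
        xbar = np.linalg.solve(np.eye(K) - alpha * W, theta)
        return xbar

    # Two communities: dense within-community links, sparse across.
    W = np.array([[0.8, 0.1],
                  [0.1, 0.6]])
    theta = np.array([1.0, 2.0])
    print(graphon_lq_equilibrium(W, theta, alpha=0.3))

An intervention that shifts the standalone payoffs theta can then be optimized at the same community level, which is why the resulting computations do not grow with the number of agents.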

Second, I will consider a dynamic multi-agent setting where payoffs depend not only on agents' actions but also on an underlying time-varying state. The main challenge for learning in this case is that agents' actions modify the state evolution and thus influence not only current payoffs but also continuation payoffs, effectively coupling the games played at each stage. To overcome this issue, I will present a learning dynamics that extends fictitious play in three important aspects. First, it applies to dynamic stochastic games. Second, it simultaneously enables agents to learn about their own continuation payoffs and about other players' strategies. Third, and most importantly, it incorporates two-timescale learning, whereby agents update their beliefs about others' actions more frequently than their beliefs about their continuation payoffs. This approach attenuates the coupling among different states and enables decentralized learning in zero-sum stochastic games, with no need for coordination in the learning behavior of different agents.
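
The two-timescale idea can be sketched for a two-player zero-sum stochastic game: each player best-responds to an empirical belief about the opponent, beliefs are updated with a fast step size, and the continuation-value estimates with a slower one. The sketch below is a rough illustration under assumed primitives (a payoff tensor R and transition kernel P); the names and step-size schedules are illustrative choices, not the exact algorithm from the talk.

    import numpy as np

    def two_timescale_fp(R, P, gamma, T,
                         alpha=lambda t: 1.0 / (t + 1) ** 0.6,  # fast: beliefs
                         beta=lambda t: 1.0 / (t + 1)):         # slow: values
        """Two-timescale fictitious play sketch for a zero-sum stochastic game.

        R: payoffs, shape (nS, nA, nB); player 1 maximizes, player 2 minimizes
        P: transitions, shape (nS, nA, nB, nS)
        """
        nS, nA, nB = R.shape
        pi1 = np.ones((nS, nA)) / nA   # player 2's belief about player 1
        pi2 = np.ones((nS, nB)) / nB   # player 1's belief about player 2
        v = np.zeros(nS)               # continuation-value estimates
        for t in range(T):
            for s in range(nS):
                # Auxiliary stage game: current payoff plus discounted continuation value.
                Q = R[s] + gamma * P[s] @ v        # shape (nA, nB)
                a = np.argmax(Q @ pi2[s])          # player 1 best-responds to belief
                b = np.argmin(pi1[s] @ Q)          # player 2 best-responds to belief
                # Fast timescale: move beliefs toward the observed best responses.
                pi1[s] += alpha(t) * (np.eye(nA)[a] - pi1[s])
                pi2[s] += alpha(t) * (np.eye(nB)[b] - pi2[s])
                # Slow timescale: move the value estimate toward the stage value
                # under current beliefs; this attenuates coupling across states.
                v[s] += beta(t) * (pi1[s] @ Q @ pi2[s] - v[s])
        return pi1, pi2, v

    # Tiny random example: 2 states, 2 actions per player.
    rng = np.random.default_rng(0)
    R = rng.uniform(-1, 1, size=(2, 2, 2))
    P = rng.dirichlet(np.ones(2), size=(2, 2, 2))  # P[s, a, b] is a distribution over s'
    pi1, pi2, v = two_timescale_fp(R, P, gamma=0.9, T=5000)
    print(v)

The key design choice is that beta(t)/alpha(t) tends to zero, so from the perspective of the fast belief dynamics the continuation values, and hence each stage game, look approximately static.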

Bio: Francesca Parise joined the School of Electrical and Computer Engineering at Cornell University as an assistant professor in July 2020. Before then, she was a postdoctoral researcher at the Laboratory for Information and Decision Systems at MIT. She defended her PhD at the Automatic Control Laboratory, ETH Zurich, Switzerland, in 2016, and received the B.Sc. and M.Sc. degrees in Information and Automation Engineering from the University of Padova, Italy, in 2010 and 2012, respectively, where she simultaneously attended the Galilean School of Excellence. Francesca’s research focuses on the identification, analysis, and control of multi-agent systems, with applications to transportation, social, and economic networks, and systems biology.

Francesca was recognized as an EECS Rising Star in 2017 and is the recipient of the Guglielmo Marin Award from the “Istituto Veneto di Scienze, Lettere ed Arti”, the SNSF Early Postdoc Fellowship, the SNSF Advanced Postdoc Fellowship, and the ETH Medal for her doctoral work.