Date Posted: 12/02/2021

Weighing the likelihood of an outcome, or the nature of risk, is decidedly difficult — especially when those decisions involve complex systems. How should a policymaker compare the health risks of concentrating pollution in a local environment versus diffusing the pollution more globally? How should societies allocate scarce resources like hunting tags — should benefits be distributed by uniform lotteries, or guaranteed to some subset of people? These and other fraught queries are the concern of a group of Cornell Ann S. Bowers College of Computing and Information Science researchers who have directed their attention to algorithmic decision-making in the context of allocating harms and benefits — and now their findings have achieved professional acclaim.

The paper, “On Modeling Human Perceptions of Allocation Policies with Uncertain Outcomes,” which won the Exemplary Applied Modeling Track award at the 22nd Association for Computing Machinery Conference on Economics and Computation (ACM EC '21), combines mathematical and qualitative analysis to study situations in which society allocates harms or benefits that are uncertain in nature, and, in turn, proposes explanations for societal preferences that standard models of cost-benefit analysis struggle to account for.

The paper is coauthored by Hoda Heidari, a former postdoctoral associate in the Department of Computer Science (CS) and the AI, Policy, and Practice Initiative, and now an assistant professor at Carnegie Mellon University with joint appointments in the Machine Learning Department and the Institute for Software Research; Solon Barocas, adjunct assistant professor in the Department of Information Science (IS) and principal researcher at Microsoft Research; Jon Kleinberg, Tisch University Professor in the CS and IS departments; and Karen Levy, assistant professor in IS and associated faculty at Cornell Law School.

In the wake of their recent honor, the research team discussed its findings:

What problem prompted this line of investigation?

This project originally grew out of a general interest in leveraging insights from the behavioral sciences to develop a more nuanced understanding of how people are likely to perceive the fairness of algorithmic decision-making. In particular, we wondered how probability weighting — a well-established behavioral bias in which people systematically overweight low-probability events and underweight high-probability ones — might affect the way people perceive the fairness of algorithms that allocate probabilistic harms and benefits. One common example that has motivated related work on this topic — outside algorithmic decision-making — is the allocation of the uncertain risk of harm posed by pollutants, where people seem to have very different reactions to different allocation policies, even when the total amount of harm under consideration is the same.
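
For readers less familiar with probability weighting, one commonly used parametric form of the weighting function, due to Tversky and Kahneman's later work on cumulative prospect theory (the paper itself may use a different functional form or parameters), is

\[
w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \qquad 0 < \gamma < 1,
\]

where experimental fits typically place γ around 0.6 to 0.7. For such γ, w(p) > p for small p (rare events are overweighted) and w(p) < p for moderate-to-large p (likely events are underweighted).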

To make this more concrete, consider three policies for allocating the risk of harm posed by pollutants:

  • a policy that concentrates the risk on a small number of people, so that those people are very likely to experience the harm (e.g., disease or loss of life induced by the pollutants) while everyone else bears no risk at all;
  • a policy that distributes the risk across an intermediate number of people, so that the risk posed to any one person is noticeably lower, even though many people still face no risk; and
  • a policy that spreads the risk evenly across the entire population, so that the risk to each person is very small, but everyone bears some risk.

These policies seem to engender very different reactions even when they result in the same total number of fatalities or diseases, and even when the people placed at risk are not meaningfully different from those who are not. In our work, we show that the preferred policy in a number of real-world scenarios along these lines is actually the allocation policy you would arrive at if you take probability weighting into account — that is, it is the policy that minimizes the total amount of perceived harm.
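
To see how such a comparison plays out numerically, here is a minimal sketch in Python (not the model from the paper; the population sizes, risk levels, and the γ = 0.69 parameter are made-up illustrations) that totals perceived harm under a Tversky-Kahneman-style weighting function for three policies with the same expected harm:

```python
# Illustrative sketch only: total *perceived* harm under probability weighting
# for three policies that all produce the same expected harm (10 cases).
def w(p, gamma=0.69):
    """Tversky-Kahneman-style probability weighting function (illustrative parameters)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# (number of people exposed, individual probability of harm) -- hypothetical numbers
policies = {
    "concentrated (20 people at p = 0.50)":  (20, 0.50),
    "intermediate (100 people at p = 0.10)": (100, 0.10),
    "diffuse (1,000 people at p = 0.01)":    (1000, 0.01),
}

for name, (n, p) in policies.items():
    expected_harm = n * p         # actual expected number of people harmed
    perceived_harm = n * w(p)     # harm as misperceived through probability weighting
    print(f"{name}: expected = {expected_harm:.1f}, perceived = {perceived_harm:.1f}")
```

Under this toy parameterization every policy produces an expected ten cases of harm, but the concentrated policy yields the smallest total perceived harm (about 9) and the diffuse policy the largest (about 40), which is the direction of the effect described above.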

What methodology did you employ to address it?

As mentioned above, we developed a model based on the concept of probability weighting from behavioral economics. The analysis of this model led to new insights about the fundamental differences between perceptions of policies that allocate harms and policies that allocate benefits. In particular, we found that when it comes to the probabilistic allocation of benefits, probability-weighting agents prefer uniform lotteries, but in the allocation of harms or burdens, they actually prefer concentrating the probability of harm on a smaller subset of individuals.
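
As a rough numerical illustration of that asymmetry (again a sketch with made-up numbers, echoing the lottery question posed at the start, rather than the paper's analysis), compare guaranteeing a benefit to a few people with running a uniform lottery over everyone:

```python
# Illustrative sketch only: the same style of weighting function applied to a benefit,
# e.g., 10 awards among 1,000 applicants (all numbers are hypothetical).
def w(p, gamma=0.61):
    """Tversky-Kahneman-style weighting function; gamma = 0.61 is a common fit for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

guaranteed_subset = 10 * w(1.0)      # give the benefit outright to 10 people
uniform_lottery = 1000 * w(0.01)     # give everyone a 1% chance instead
print(f"perceived benefit, guaranteed subset: {guaranteed_subset:.1f}")  # ~10
print(f"perceived benefit, uniform lottery:   {uniform_lottery:.1f}")    # ~55
```

Because small probabilities are overweighted, spreading a benefit thinly across many people raises the total perceived benefit, while the same arithmetic applied to a risk of harm (as in the pollution sketch above) means that spreading the risk thinly raises the total perceived harm; hence the preference for uniform lotteries when allocating benefits but for concentration when allocating harms.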

The pollution example is quite salient, though somewhat depersonalized because we’re dealing with pollutants (albeit with potentially great harm to humans). Can you share another example from your research that situates the human at the center of probability weighting?

Consider the military draft. In designing policies for drafting people into the military, the United States government has considered a number of different ways to randomize the selection of inductees. Discussions of revisions to the draft framed uncertainty itself as a cost borne by members of the population. As the U.S. Selective Service System notes, prior to the introduction of a structured process for randomization, men knew only that they were eligible to be drafted from the time they turned 18 until they reached age 26; “[this] lack of a system resulted in uncertainty for the potential draftees during the entire time they were within the draft-eligible age group. All throughout a young man’s early 20s he did not know if he would be drafted.” The systems that were subsequently introduced specified priority groups according to age, which had the effect of deliberately producing non-uniform probabilities of being drafted in any given year; under these systems, some people were selected with higher-than-average probability and others with lower-than-average probability. This non-uniform distribution of burden is precisely what our theory predicts.

Any reflections on the possible present and future implications of your findings? 

While our work focuses on only one consideration related to policies that allocate harms and benefits in society, it underscores the importance of understanding people’s perceptions as a prerequisite to designing acceptable procedures (and, when appropriate, algorithms) for governing such allocations.

Can you share citations for some background resources that informed “On Modeling Human Perceptions”? For instance, what prior research — by you and others — underwrites this project?

  1. “Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making,” H. Heidari, C. Ferrari, K. P. Gummadi, A. Krause. Conference on Neural Information Processing Systems (NeurIPS), 2018.
  2. “A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual and Group Unfairness via Inequality Indices,” T. Speicher, H. Heidari, N. Grgic-Hlaca, K. P. Gummadi, A. Singla, A. Weller, M. B. Zafar. International Conference on Knowledge Discovery and Data Mining (KDD), 2018.
  3. “Changes from Vietnam to Now,” U.S. Selective Service System, n.d.
  4. “Prospect Theory: An Analysis of Decision Under Risk,” D. Kahneman and A. Tversky. In Handbook of the Fundamentals of Financial Decision Making, Part I, World Scientific, 2013, 99–127.

For related news coverage, see also: 

  • Hoda Heidari and Jon Kleinberg Receive a Best Paper Award at the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT)
  • Karen Levy: Studying How Tech Can Be Used to Track Our Daily Lives