This is one of the most interesting papers I've read in a while, and it appeared in the *Journal of Economic Surveys*, not a journal we in operations research and management science typically read. Plenty of concepts in this paper are relevant to robust optimization, and while many of my robust optimization colleagues will have heard of some of them, it still feels that our two communities of management scientists and economists should talk to each other more often. I really enjoyed reading this paper and recommend that everyone read it too. Anyway, here is a summary.

The authors use the terms 'uncertainty' and 'ambiguity' interchangeably to represent non-probabilized uncertainty, as opposed to risk, which represents 'probabilized' uncertainty (you can be sure that sentence caught my interest, and I was still on the first page). The authors explain: "We will thus concentrate on situations in which there is too little information to pin down easily probabilistic beliefs (as opposed to risky situations, in which objects of choice - lotteries - are already formulated in terms of probability distributions)." One thing to remember if you haven't looked at the economics literature in a while is that economists view everything random in terms of lotteries when they can, and imagine that the decision maker chooses between different lotteries to express his attitude toward risk. The lotteries typically have only two possible outcomes, and only one in the riskless case.

The authors readily point out that they had to leave out some recent developments in the field, especially the concept of "unforeseen contingencies and more generally the issue of subjective state space", because it is too early to distinguish the main contributions from the rest. But their paper was first published about ten years ago, so I was curious to see where those fields stand in 2020. Articles about unforeseen contingencies include "Unforeseen Contingencies and Incomplete Contracts" by Eric Maskin and economics superstar Jean Tirole in the *Review of Economic Studies* 66(1): 83-114, 1999, who argue that what matters is that "agents can probabilistically forecast their possible future *payoffs* (even if other aspects of the state of the nature cannot be forecast). In other words, all that is required for optimality is that agents be able to perform dynamic programming, an assumption always invoked by the incomplete contract literature". The ability to perform dynamic programming is certainly a valuable skill for anyone to have in life. In "Static Choice in the Presence of Unforeseen Contingencies," in the book "Economic Analysis of Markets and Games", David M. Kreps presents a way to "model conceptually the idea that certain contingencies may be unforeseen", which gives rise to "an implicit representation theorem that is remarkably similar to the Savage model" - the 1954 model at the core of choice under uncertainty.

Savage developed subjective expected utility theory and characterized individual choice in the presence of uncertainty as expected-utility-maximizing behavior. The most important axiom of his theory is known as the "sure-thing principle". It says that "when comparing two decisions, it is not necessary to consider states of nature in which these decisions yield the same outcome." Another key axiom is that the likelihood ranking of events does not depend on the consequences (such as receiving a car rather than receiving 100 euros). Savage's theory resembles von Neumann and Morgenstern expected utility under risk, meaning that "decision under uncertainty can in some sense be reduced to decision under risk, with one important caveat: beliefs are here a purely subjective construct."

Etner et al then discuss models generalizing Savage expected utility using non-Bayesian decision criteria. The first model is the **Wald maxmin criterion**, which evaluates decisions by looking exclusively at the worst possible consequence. It was generalized by Arrow and Hurwicz in 1972, who consider a convex combination of both the worst and the best outcomes (this is called the **Arrow and Hurwicz alpha-maxmin model**). The weight on the worst outcome, denoted alpha, measures the decision maker's pessimism. The Arrow and Hurwicz model has been shown to be the only criterion able to model choice under complete ignorance, i.e., when the decision maker has no means to assess whether one event is more likely than another. Etner et al then present two "first-generation" models that use capacities to represent non-additive beliefs in uncertain situations: **Choquet expected utility** and **cumulative prospect theory**. In Choquet expected utility, beliefs are characterized not by a subjective probability but by a capacity, that is, an increasing set function that need not be additive. Actions are then evaluated using Choquet integrals rather than standard Lebesgue integrals, because of the non-additivity of the capacity. For decisions with a finite set of outcomes, "a decision maker evaluates a decision by considering first the lowest outcome and then adding to this lowest outcome the successive possible increments, weighted by his personal estimation of the occurrence of these increments." Cumulative prospect theory, developed by Tversky and Kahneman in 1992 as a refinement of their 1979 prospect theory, is related to the Choquet expected utility model but adds a reference point and an asymmetry in the treatment of gains and losses. It introduces two different capacities, one for gains and one for losses.
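These two evaluation rules are easy to sketch in code. The snippet below is my own illustration, not from the paper: `hurwicz` implements the Arrow and Hurwicz alpha-maxmin combination of worst and best outcomes, and `choquet` implements the finite-outcome Choquet integral exactly as in the quoted description (lowest outcome plus capacity-weighted increments). The capacity `v` and the numerical values are assumptions chosen for illustration.

```python
def hurwicz(outcomes, alpha):
    """Arrow-Hurwicz alpha-maxmin: convex combination of the worst and best
    outcomes, where alpha (the weight on the worst) measures pessimism."""
    return alpha * min(outcomes) + (1 - alpha) * max(outcomes)

def choquet(act, v):
    """Choquet integral of a finite act {state: utility} with respect to a
    capacity v mapping frozensets of states to [0, 1], with v of the full
    state space equal to 1. Start from the lowest outcome, then add each
    successive increment weighted by the capacity of the event where it accrues."""
    states = sorted(act, key=act.get)   # states ordered by increasing utility
    total = act[states[0]]              # lowest outcome, weighted by v(S) = 1
    for i in range(1, len(states)):
        increment = act[states[i]] - act[states[i - 1]]
        total += increment * v(frozenset(states[i:]))  # event: "at least this much"
    return total

# Two-state example with the insurance numbers used later in the post
# (w = 3/2, d = 1/2, linear utility): not insuring is {Loss: 1, NoLoss: 3/2}.
f = {"Loss": 1.0, "NoLoss": 1.5}
v = {frozenset(): 0.0, frozenset({"NoLoss"}): 0.5,
     frozenset({"Loss"}): 0.3, frozenset({"Loss", "NoLoss"}): 1.0}.get
print(choquet(f, v))                   # 1.0 + 0.5 * (1.5 - 1.0) = 1.25
print(hurwicz(f.values(), alpha=1.0))  # pure Wald maxmin: 1.0
```

Note that the capacity here is deliberately non-additive (0.5 + 0.3 < 1), which is exactly what the Choquet integral is built to handle; with an additive v the computation collapses to ordinary expected utility.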

The next generation of models of decision making under uncertainty "rests on the idea that when information is scarce, it is too demanding to ask for precise subjective beliefs (a probability distribution) but maybe asking only for 'imprecise' subjective beliefs (i.e., a set of probability distributions) is more appropriate." This is known as having multiple priors. It is still possible to compute expected utilities, but now there is one value per prior. One way to compare two decisions is to say that an act f is preferred to an act g if all the expected utilities of f with respect to the distributions in the set of priors are higher than those of g. As a result, not all acts can be compared (f can be better than g for some priors and g better than f for others). This is an example of **incomplete preferences**. It is also possible to evaluate the multiple priors by considering their worst case, which gives rise to the **maxmin expected utility** of Gilboa and Schmeidler. Under an assumption of uncertainty aversion, "the Choquet expected utility model is a particular case of the maxmin expected utility model" and the set of priors over which the decision maker takes the minimum is the core of a convex capacity. Maxmin expected utility can also be generalized using a confidence function, with a threshold level below which priors are not taken into account in the evaluation.
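Here is a minimal sketch (my own, with made-up numbers) of the multiple-priors machinery: one expected utility per prior, the unanimity rule that makes the preference incomplete, and the Gilboa-Schmeidler maxmin evaluation that restores completeness by focusing on the worst prior.

```python
def expected_utilities(act_utils, priors):
    """One expected utility per prior; act_utils[i] is the utility in state i,
    and each prior assigns a probability to each state."""
    return [sum(p * u for p, u in zip(prior, act_utils)) for prior in priors]

def unanimously_preferred(f, g, priors):
    """f is preferred to g only if it does at least as well under every prior;
    when the priors disagree, the two acts are simply incomparable."""
    return all(ef >= eg for ef, eg in
               zip(expected_utilities(f, priors), expected_utilities(g, priors)))

def maxmin_eu(act_utils, priors):
    """Gilboa-Schmeidler maxmin expected utility: the worst expected utility
    over the whole set of priors."""
    return min(expected_utilities(act_utils, priors))

# Two priors over {Loss, No Loss} and two acts (utilities per state):
priors = [[1/3, 2/3], [1/2, 1/2]]
f = [1.0, 1.5]   # no insurance: better when the loss is unlikely
g = [1.3, 1.3]   # full insurance: a constant act
print(unanimously_preferred(f, g, priors), unanimously_preferred(g, f, priors))
print(maxmin_eu(f, priors), maxmin_eu(g, priors))   # maxmin picks g
```

Neither act unanimously dominates the other (f wins under the 1/3 prior, g under the 1/2 prior), so the unanimity ranking leaves them incomparable, while maxmin breaks the tie in favor of the constant act.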

A natural follow-up question is how to determine the set of priors in the maxmin expected utility model. That, and the rest of the paper, will be the subject of my next post!

One of the most useful features of the paper is a toy insurance example that runs throughout the paper to illustrate the various concepts.

- An individual with initial wealth w faces a risk of loss d. The state space is {Loss, No Loss}. An act specifies the amount of money the decision maker has in each state of nature. Not buying insurance is represented by f = (w-d, w). Buying full coverage at premium pi yields g = (w-pi, w-pi). Buying partial coverage at premium pi' leads to h = (w-d+I-pi', w-pi'), with I the indemnity paid in case of damage.
- A decision maker following Savage's axioms would have a subjective probability of loss p and a utility function u, and would compute p*u(w-d)+(1-p)*u(w) for act f, u(w-pi) for act g, and p*u(w-d+I-pi')+(1-p)*u(w-pi') for act h.
- If the DM's preferences are represented by the Choquet expected utility model, with v the capacity, the decisions are evaluated as: for act f, u(w-d)+v(NoLoss)*[u(w)-u(w-d)]; for act g, u(w-pi); and for act h, u(w-d+I-pi')+v(NoLoss)*[u(w-pi')-u(w-d+I-pi')].
- Incomplete preferences can be illustrated assuming that p (the probability of loss) is either 1/3 or 1/2 and taking w=3/2, d=1/2 and u(x)=x. If 4/3>3/2-pi >5/4, "it is sometimes better to get full insurance and sometimes better not to have any insurance" depending on the prior used.
- If the individual's subjective set of beliefs is such that p is in the range [p',p''] and he decides using a maxmin expected utility model, then he evaluates f as p''u(w-d) + (1-p'')u(w), g as u(w-pi) and h as p''u(w-d+I-pi')+(1-p'')u(w-pi'). Note that the individual "evaluates acts only according to the worst possible prior in the set, that is, the highest loss probability p''."
- Starting with the sixth part of the example the math starts being too cumbersome to retype so you will have to look up the rest in the paper itself!
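The incomplete-preferences bullet is easy to check numerically. In the quick script below (my own; pi = 0.2 is an assumed premium chosen so that 5/4 < 3/2 - pi < 4/3, as the bullet requires), the recommendation flips between the two priors:

```python
w, d, pi = 1.5, 0.5, 0.2   # wealth, loss, premium (assumed); u(x) = x
u = lambda x: x            # linear utility, as in the bullet above

for p in (1/3, 1/2):       # the two candidate priors for the loss probability
    eu_f = p * u(w - d) + (1 - p) * u(w)   # act f: no insurance
    eu_g = u(w - pi)                       # act g: full insurance, riskless
    print(f"p={p:.3f}: EU(f)={eu_f:.4f}, EU(g)={eu_g:.4f} -> "
          f"{'insure' if eu_g > eu_f else 'do not insure'}")
```

Under p = 1/3 not insuring wins (EU(f) = 4/3 > 1.3), under p = 1/2 full insurance wins (EU(g) = 1.3 > 5/4), so the two acts are incomparable under the unanimity rule, exactly as the bullet states.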
