Here is the follow-up to my previous post on decision making under ambiguity. I stopped halfway through the section on non-Bayesian criteria with non-additive beliefs because the post had become humongous. Now the authors discuss how to determine the set of priors in the maxmin expected utility model. One formulation is called epsilon-contamination. "In this formulation, the decision maker has a probabilistic benchmark in the sense that he believes the probability distribution on the state space is a given distribution, say p. But he's not totally confident about this. The way to model this is to say that the set of priors he has in mind will be a combination between the 'probabilistic benchmark' on one hand and 'anything can happen' on the other hand."
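Concretely, if I read it right, the epsilon-contaminated set of priors contains every mixture (1 - epsilon) * p + epsilon * q, where q ranges over all distributions on the states, and the maxmin criterion then evaluates an act by its worst expected utility over that set. The worst q simply dumps all its mass on the act's worst state, which gives a closed form. Here is a minimal Python sketch; the states, payoffs and epsilon are numbers I made up for illustration.

```python
import numpy as np

def maxmin_eu_eps_contamination(utilities, p, eps):
    """Maxmin expected utility when the set of priors is
    {(1 - eps) * p + eps * q : q any distribution on the states}.
    The worst q puts all its mass on the worst state, hence the closed form."""
    u = np.asarray(utilities, dtype=float)
    p = np.asarray(p, dtype=float)
    return (1 - eps) * np.dot(p, u) + eps * u.min()

# Toy example (my numbers, not the paper's): three states, a benchmark
# belief p, and 20% doubt about the benchmark.
u = [10.0, 4.0, 0.0]   # utility of the act in each state
p = [0.5, 0.3, 0.2]    # probabilistic benchmark
print(maxmin_eu_eps_contamination(u, p, eps=0.2))  # 4.96, versus E_p[u] = 6.2
```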
Hansen and Sargent (2001, 2008) also tackle model uncertainty using ideas inspired by robust control (something dear to my heart, since I studied control theory in college before moving on to operations research). In this robust preferences approach, agents rank acts according to a criterion built around an estimated distribution p, with alternative distributions penalized by their relative entropy (Kullback-Leibler divergence) from p, capturing the possibility that p is not the right distribution. Hansen and Sargent's model can be interpreted in terms of variational preferences.
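As I understand it, the "multiplier" form of these robust preferences evaluates an act by the worst penalized expected utility, min over q of E_q[u] + theta * KL(q || p), where theta measures how much the agent trusts p; this has a convenient closed form. A small sketch, again with toy numbers of my own:

```python
import numpy as np

def robust_value(utilities, p, theta):
    """Multiplier-style robust criterion:
        min over q of  E_q[u] + theta * KL(q || p),
    whose closed form is  -theta * log E_p[exp(-u / theta)].
    Small theta = little trust in p (strong robustness concern);
    theta -> infinity recovers plain expected utility under p."""
    u = np.asarray(utilities, dtype=float)
    p = np.asarray(p, dtype=float)
    return -theta * np.log(np.dot(p, np.exp(-u / theta)))

# Toy numbers of my own: the robust value sits below the expected utility
# under the estimated distribution p (6.2) and approaches it as theta grows.
u = [10.0, 4.0, 0.0]
p = [0.5, 0.3, 0.2]
for theta in (1.0, 5.0, 100.0):
    print(theta, robust_value(u, p, theta))  # roughly 1.6, 4.6, 6.1
```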
Ellsberg's paradox, which basically says that people prefer situations where they have a known probability of winning to situations where the probability is unknown ("the devil they know"), can be explained using an extension of Rank Dependent Utility to compound lotteries, or two-layer expected utility. In this model, the decision maker not only has a set of priors in mind but also a prior over that set, called a second-order belief, over which we take an expectation of distorted expected utilities.
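To make this concrete, here is my own toy rendition of a two-layer evaluation for the classic two-urn Ellsberg setup: a rank-dependent (distorted) value is computed for the bet under each possible composition of the unknown urn, and those values are then aggregated, again with a distortion, under a made-up second-order belief. With a convex weighting function this values the unknown urn below the known one; whether a two-layer model produces Ellsberg-type behavior in general depends on the weighting function, so treat this only as an illustration.

```python
import numpy as np

def rdu(values, probs, w):
    """Rank-dependent (distorted) expectation of a discrete lottery:
    sort outcomes from worst to best and weight each by the increment
    of w applied to its decumulative probability."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    tail = np.append(np.cumsum(p[::-1])[::-1], 0.0)  # P(outcome >= v[i]), then 0
    return float(np.dot(w(tail[:-1]) - w(tail[1:]), v))

w = lambda x: x ** 2  # convex weighting function: a pessimistic distortion

# Bet paying 1 util if a red ball is drawn, 0 otherwise.
# Known urn: 50% red for sure, so the two-layer value is just the inner RDU.
known = rdu([0.0, 1.0], [0.5, 0.5], w)  # = w(0.5) = 0.25

# Unknown urn: toy second-order belief putting weight 1/3 on the urn being
# 0%, 50% or 100% red. Inner layer: RDU of the bet under each composition;
# outer layer: RDU of those values under the second-order belief.
inner = [rdu([0.0, 1.0], [1 - q, q], w) for q in (0.0, 0.5, 1.0)]
unknown = rdu(inner, [1 / 3, 1 / 3, 1 / 3], w)

print(known, unknown)  # 0.25 vs about 0.19: the known urn is preferred
```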
Jaffray (1989) "generalizes the standard expected utility under risk to a framework in which the probabilities of the different states of nature are imperfectly known," belonging to given intervals. The authors explain that "to any set of probability distributions, it is possible to associate its lower envelope, associating to each event its lower probability compatible with the set of distributions." This lower envelope is a capacity (an increasing, not necessarily additive set function). Then the value associated with a decision uses the minimal and maximal outcomes on events, and the Möbius transform of the capacity makes an appearance, measuring the ambiguity of an event, i.e., the gap between the lower probability of this event and the sum of the lower probabilities of its sub-events. Oh, and to add to the fun, there is a pessimism-optimism index.
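Here is a rough sketch of those ingredients: the Möbius transform of a capacity, and a Hurwicz-style valuation that mixes the worst and best outcome on each event, weighted by its Möbius mass. For simplicity I use a constant pessimism-optimism index alpha (in the paper, if I recall correctly, the index may depend on the event's worst and best outcomes), and I reuse the epsilon-contamination numbers from the first sketch, whose lower envelope is a particularly simple capacity.

```python
from itertools import combinations

def subsets(states):
    """All non-empty subsets of a collection of states, as frozensets."""
    return [frozenset(c) for r in range(1, len(states) + 1)
            for c in combinations(states, r)]

def mobius(capacity, states):
    """Mobius transform of a capacity v:  m(A) = sum over B subset of A
    of (-1)^(|A| - |B|) * v(B).  For a non-singleton event, m(A) is the gap
    between v(A) and what its sub-events already account for."""
    return {A: sum((-1) ** (len(A) - len(B)) * capacity.get(B, 0.0)
                   for B in subsets(A))
            for A in subsets(states)}

def jaffray_like_value(utilities, capacity, states, alpha):
    """Hurwicz-style criterion in the spirit of Jaffray (1989), with a
    constant pessimism-optimism index alpha: each event A is valued by
    alpha * worst outcome + (1 - alpha) * best outcome, weighted by m(A)."""
    m = mobius(capacity, states)
    return sum(m[A] * (alpha * min(utilities[s] for s in A)
                       + (1 - alpha) * max(utilities[s] for s in A))
               for A in subsets(states))

# Toy capacity: the lower envelope of the epsilon-contamination set above
# (my numbers), i.e. v(A) = (1 - eps) * p(A) for A != S and v(S) = 1.
states = ('s1', 's2', 's3')
p = {'s1': 0.5, 's2': 0.3, 's3': 0.2}
eps = 0.2
capacity = {A: (1 - eps) * sum(p[s] for s in A) for A in subsets(states)}
capacity[frozenset(states)] = 1.0

u = {'s1': 10.0, 's2': 4.0, 's3': 0.0}
print(jaffray_like_value(u, capacity, states, alpha=1.0))  # 4.96: pure pessimism,
                                                           # matches the maxmin value above
print(jaffray_like_value(u, capacity, states, alpha=0.5))  # 5.96: some optimism on the
                                                           # ambiguous event raises the value
```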
The paper continues with definitions of ambiguity aversion and the complexities that arise from introducing dynamics. Many researchers have considered updating rules for maxmin expected utility preferences. There is also a section documenting the experimental evidence on decision makers' attitudes toward ambiguity. Finally, the authors end with references to applications of these decision criteria in finance and auctions.
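For what it's worth, one updating rule that shows up a lot in that literature is prior-by-prior (full Bayesian) updating: condition every prior in the set on the observed event and recompute the worst-case expected utility. A small sketch with my own numbers (not from the paper):

```python
import numpy as np

def bayes_update(prior, event):
    """Condition a discrete prior on an event given as a boolean mask over states."""
    posterior = np.asarray(prior, dtype=float) * np.asarray(event, dtype=bool)
    return posterior / posterior.sum()

def maxmin_eu(utilities, priors):
    """Worst-case expected utility over a finite set of priors."""
    u = np.asarray(utilities, dtype=float)
    return min(float(np.dot(np.asarray(q, dtype=float), u)) for q in priors)

# Toy set of priors over three states and an act's utilities (my numbers).
priors = [[0.5, 0.3, 0.2], [0.3, 0.3, 0.4], [0.6, 0.2, 0.2]]
u = [10.0, 4.0, 0.0]
print(maxmin_eu(u, priors))  # ex-ante worst case: 4.2

# Prior-by-prior updating after learning that the third state did not occur:
event = [True, True, False]
updated = [bayes_update(q, event) for q in priors]
print(maxmin_eu(u, updated))  # worst case after updating: 7.0
```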
It was a very worthwhile read and I highly recommend it!