
Decision-making under uncertainty using robust optimization

Decision-making in most applications involves dealing with uncertainty, from random stock returns in finance to random demand in inventory management to random arrival and service times in a service center. This leads to the following two questions:

  1. How should this randomness be modelled?
  2. How should system performance be evaluated, and ultimately optimized?

When assessing possible answers, it is fair to ask what makes a decision-making methodology more appealing than another. Intuitively, an “ideal” approach exhibits strengths in both dimensions outlined above, in the sense that (1) it allows for a modelling of randomness that does not require many additional assumptions, especially in settings where the validity of such assumptions is hard to check in practice, and (2) it allows for efficient optimization procedures for a wide range of problem instances, which in these days of “big data” most likely includes instances of very large scale. In other words, an “ideal” approach should build directly upon the information at hand – specifically, historical observations of the randomness – without forcing this information into rigid constructs that are difficult to validate even with the benefit of hindsight, and while preserving computational tractability to remain of practical relevance for the decision maker.

Interestingly, the method of choice to model uncertainty has long scored poorly on not just one but both criteria above. Indeed, probability theory assigns likelihoods to events that, when defined as a function of several sources of uncertainty, may occur in many different ways (think for instance of the sum of ten random variables taking a specific value, and all the possible combinations of those ten random variables that achieve the desired value as their sum). To assign a probability to this complex event, it is then necessary to resort to an advanced mathematical technique called convolution, made even more complex when the random variables are continuous – and this is just to compute a probability, not to optimize it. Additional hurdles include the fact that the manager observes realizations of the random variables but never the probability distributions themselves, and that such distributions are difficult to estimate accurately in many practical applications.
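As a toy illustration of that convolution step (the numbers below are my own, not from the post), here is a minimal Python sketch that computes the distribution of a sum of independent discrete random variables:

    # Toy sketch: the probability mass function of X + Y, for independent
    # discrete X and Y, is the convolution of their individual mass functions.
    import numpy as np

    p_x = np.array([0.2, 0.5, 0.3])  # P(X = 0), P(X = 1), P(X = 2), chosen arbitrarily
    p_y = np.array([0.1, 0.6, 0.3])  # P(Y = 0), P(Y = 1), P(Y = 2)

    p_sum = np.convolve(p_x, p_y)    # distribution of X + Y over the values 0, ..., 4
    print(p_sum, p_sum.sum())        # the probabilities still sum to 1

    # For a sum of ten such variables, the convolution has to be repeated nine
    # times and the support grows at every step, which is the point made above.
    p_total = p_x
    for _ in range(9):
        p_total = np.convolve(p_total, p_x)

Even this simple exercise only yields a distribution; optimizing a decision against it is yet another layer of work.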

So why have probabilities become the standard tool to model uncertainty? A possible answer is that they were developed to describe random events in the realm of pure mathematics and have since been used beyond their intended purpose to guide business managers in their optimization attempts. This might have happened because academicians and practitioners alike lacked other tools to describe uncertainty in a satisfactory manner, or because the systems under early consideration remained simple enough to fit easily within the probabilistic framework. But as the size of the problems considered by today’s managers increases and business environments are defined more and more by fast-changing market conditions, it seems particularly urgent to bring forward a viable alternative to probabilistic decision-making that exhibits the attributes outlined above. This is precisely what researchers in a field called “robust optimization” have done. (Disclosure: robust optimization is also one of my key research areas.)

By now robust optimization has been a hot topic in operations research for almost two decades, and I won’t go over in this post its distinguished contributions to fields as varied as portfolio management, logistics and shortest-path problems, among many others – for that purpose, the interested reader is referred to one of the several overview papers available on the Internet.

For today’s post I will focus on an aspect of robust optimization that is currently gaining much traction: the use of robust optimization as a “one-stop” decision-making methodology that (a) builds directly upon the data at hand, (b) remains tractable, intuitive and insightful in a wide range of settings where probabilistic models become intractable, and (c) connects back to the world of probability by taking the limit laws of probability theory, rather than its axioms, as the building blocks of the modelling framework.

While (a) and (b) have been standard arguments in favor of robust optimization for at least a decade, the connection with the world of probabilities provided in (c) is a novel development that should appeal to many decision makers. The resulting framework is at the core of a paper by Chaithanya Bandi and Dimitris Bertsimas – my former dissertation adviser – published last summer in Mathematical Programming.

They suggest modelling sources of uncertainty not as random variables with probability distributions but as parameters belonging to uncertainty sets that are consistent with the limit laws of probability (which bound the deviation of a sum of independent and identically distributed random variables from its mean, using their standard deviation and the square root of the number of random variables). Other asymptotic laws can be incorporated into the uncertainty set in specific circumstances.
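Concretely, for n independent and identically distributed sources of uncertainty z_1, ..., z_n with mean mu and standard deviation sigma, a central-limit-theorem-inspired uncertainty set takes the form below (the notation is mine, but the structure is exactly the bound described above, with a parameter Gamma controlling how conservative the decision maker wants to be):

    U = { (z_1, ..., z_n) :  | z_1 + ... + z_n - n*mu |  <=  Gamma * sigma * sqrt(n) }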

The objective of the problem then becomes to optimize the worst-case value of some criterion, instead of its expected value as would have been the case in stochastic optimization.
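In symbols, writing c(x, z) for the cost of decision x when the uncertain parameters take the value z (generic notation of my own), the two paradigms solve:

    Stochastic optimization:   minimize over x    E[ c(x, z) ]
    Robust optimization:       minimize over x    max over z in U of  c(x, z)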

Bandi and Bertsimas discuss:

  • Using historical data and the central limit theorem to build uncertainty sets (see the sketch after this list),
  • Modelling correlation information,
  • Using stable laws to construct uncertainty sets for heavy-tailed distributions with infinite variance,
  • Incorporating distributional information through “typical sets”, a concept first introduced in information theory. A typical set has the properties that (i) its probability is nearly 1 and (ii) all of its elements are nearly equiprobable; Bandi and Bertsimas recast typical sets as uncertainty sets, with examples drawn from common distributions.
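As a rough sketch of the first bullet (the function name, the demand example and the choice Gamma = 2 are all my own illustrations, not the paper's calibration), the sample mean and standard deviation of historical observations can be plugged directly into the set displayed earlier:

    # Hedged sketch: build a CLT-style uncertainty set for the sum of the next
    # n_future observations from historical data. Gamma = 2 is an arbitrary
    # "budget of uncertainty"; the paper discusses how to choose this parameter.
    import numpy as np

    def clt_uncertainty_bounds(history, n_future, gamma=2.0):
        """Return bounds [lo, hi] on the sum of the next n_future observations."""
        mu = history.mean()
        sigma = history.std(ddof=1)
        center = n_future * mu
        radius = gamma * sigma * np.sqrt(n_future)
        return center - radius, center + radius

    # Example: bound total demand over the next 30 periods from 200 past observations.
    rng = np.random.default_rng(0)
    past_demand = rng.poisson(lam=50, size=200).astype(float)
    lo, hi = clt_uncertainty_bounds(past_demand, n_future=30)
    print(f"Total demand assumed to lie in [{lo:.1f}, {hi:.1f}]")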

The rest of the paper shows the results obtained by applying this framework to three applications.

Performance analysis of queueing networks. (Some of the work described in this section is joint work between Bandi, Bertsimas and Nataly Youssef – download their paper here.) The authors introduce the concept of a robust queue, in which the arrival and service processes are modelled by uncertainty sets instead of being assigned probability distributions, and derive formulas for the worst-case waiting time of the n-th customer in the queue. They connect their results with those obtained in traditional queueing theory and derive results for systems with heavy-tailed behavior, such as Internet traffic, that do not appear to have been previously available.
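To give a flavor of this style of analysis (the notation and rough constants below are mine, not necessarily the paper's): unfolding the classical Lindley recursion expresses the waiting time of the n-th customer as a maximum, over the number j of preceding customers considered, of the sum of the last j service times minus the sum of the last j interarrival times. If the uncertainty sets cap those partial sums at their means plus or minus a term proportional to sqrt(j), as in the set displayed earlier, the worst case reduces to a one-dimensional problem:

    worst-case waiting time  <=  max over 1 <= j <= n of  [ (Gamma_a*sigma_a + Gamma_s*sigma_s) * sqrt(j)  -  j * (1/lambda - 1/mu) ]

where 1/lambda and 1/mu are the mean interarrival and service times. Maximizing an expression of the form a*sqrt(j) - b*j over j yields a^2/(4*b), so the bound comes in closed form and depends only on means, standard deviations and the budget parameters Gamma_a and Gamma_s; the paper derives the precise expressions as well as their heavy-tailed analogues.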

Further, they analyze single-class queueing networks using their framework and present computational results comparing their approach, dubbed the Robust Queueing Network Analyzer (RQNA), with simulation and with the Queueing Network Analyzer (QNA) developed in traditional queueing theory. They observe that RQNA’s estimates are often significantly closer to the simulated values than QNA’s.

Optimal mechanism design for multi-item auctions. (Full paper.) In this problem, an auctioneer is interested in selling multiple items to multiple buyers with private valuations for the items. The auctioneer’s goal in the robust optimization approach is to maximize the worst-case revenue, with his beliefs about the buyers’ valuations modelled through uncertainty sets. His decisions are the allocation of items and the payment rules, which should satisfy the following properties (stated more formally right after the list):

  • Individual Rationality: Bidders do not derive negative utility by participating in the auction, assuming truthful bidding, i.e., bidding their true valuation of an item.
  • Budget Feasibility: Each buyer is charged within his budget constraints.
  • Incentive Compatibility: The total utility of Buyer i under truthful bidding is at least as large as the total utility that Buyer i derives by submitting any other bid vector.
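In symbols (standard mechanism-design notation, not a quote from the paper), with v_i the true valuation vector of Buyer i, b_i an arbitrary alternative bid vector, u_i the buyer's resulting utility, p_i the payment charged and B_i the budget, the three conditions read:

    Individual Rationality:    u_i(bid = v_i)  >=  0                for every buyer i
    Budget Feasibility:        p_i  <=  B_i                         for every buyer i
    Incentive Compatibility:   u_i(bid = v_i)  >=  u_i(bid = b_i)   for every buyer i and every b_i,
                                                                    holding the other buyers' bids fixed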

Bandi and Bertsimas provide a robust optimization mechanism (ROM) that solves the problem and consists of (1) an algorithm to compute the worst-case revenue before a bid vector is realized, and (2) an algorithm to compute allocations and payments afterward, which also uses the worst-case revenue as input.

In addition, they investigate the special case where the buyers do not have any budget constraints, and compare their mechanism in that setting with the classical Myerson auction. They argue that their method exhibits stronger robustness properties when the distribution or the standard deviation of the valuations is misspecified.

Pricing multi-dimensional options. In this finance problem, Bandi and Bertsimas (along with their collaborator Si Chen) propose to model the underlying price dynamics with uncertainty sets and then apply robust optimization rather than dynamic programming to solve the pricing problem. They illustrate their approach in the context of European call options and reformulate the pricing problem as a linear optimization problem.

The approach has the flexibility to incorporate transaction costs and liquidity effects. It also captures a phenomenon known in finance as the implied volatility smile, which can be explained in the context of robust optimization using risk aversion arguments. Bandi and Bertsimas further give examples of the accuracy of their method relative to observed market prices. You can download that paper here.
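As a toy illustration of the general idea of replacing a stochastic price model by an uncertainty set (this is my own simplification, not the authors' formulation): if the cumulative log-return over the option's life is constrained to a CLT-style interval, the worst- and best-case payoffs of a European call are attained at the endpoints of that interval, since the payoff is monotone in the terminal price.

    # Hedged toy sketch: range of European call payoffs when the cumulative
    # log-return over n_days lies in a CLT-style uncertainty set.
    # All parameter values (mu, sigma, gamma, strike) are arbitrary illustrations.
    import numpy as np

    def call_payoff_range(s0, strike, mu, sigma, n_days, gamma=2.0):
        """Payoff bounds over { cumulative log-return r : |r - n*mu| <= gamma*sigma*sqrt(n) }."""
        center = n_days * mu
        radius = gamma * sigma * np.sqrt(n_days)
        s_low = s0 * np.exp(center - radius)
        s_high = s0 * np.exp(center + radius)
        # The call payoff max(S_T - K, 0) is nondecreasing in the terminal price S_T,
        # so its extremes over the set are attained at the extreme terminal prices.
        return max(s_low - strike, 0.0), max(s_high - strike, 0.0)

    worst, best = call_payoff_range(s0=100.0, strike=105.0, mu=0.0003, sigma=0.01, n_days=60)
    print(f"Payoff range over the uncertainty set: [{worst:.2f}, {best:.2f}]")

The paper's linear optimization formulation is considerably richer than this endpoint argument, which is meant only to convey the flavor of pricing against an uncertainty set rather than a stochastic model.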

The authors’ central argument is that “modelling stochastic phenomena with probability theory is a choice” and that “given the computational difficulties in high dimensions, we feel we should consider alternative, computationally tractable approaches in high dimensions.” They provide compelling evidence that robust optimization is well-suited for that purpose.
