Research

Robust risk adjustment in health insurance

This is a short paper my doctoral student Tengjiao Xiao and I recently completed. 

Here is the abstract: "Risk adjustment is used to calibrate payments to health plans based on the relative health status of insured populations and helps keep the health insurance market competitive. Current risk adjustment models use parameter estimates obtained via regression and are thus subject to estimation error. This paper discusses the impact of parameter uncertainty on risk scoring, and presents an approach to create robust risk scores to incorporate ambiguity and uncertainty in the risk adjustment model. This approach is highly tractable since it involves solving a series of linear programming problems."

The paper also contains, in the section where we motivate the need for robustness, a graph of the ranking changes that occur when proxy rather than actual Value-Based Purchasing factors are used to award bonuses or penalties to the roughly 3,000 hospitals considered. A negative ranking change indicates a loss in rank and a positive one a gain. What is interesting about this graph is that the swings can be enormous: some hospitals that stood to receive very high bonuses under the proxy factors (and the amounts of money at stake are real, since every hospital contributes 1% to fund the scheme) found themselves at the very bottom of the ranking under the actual factors, and vice-versa. To the best of our knowledge, this has received little if any attention in the media.

The core of the short paper is to show how robust risk scores can be computed by solving a series of linear programming problems, with the aim of minimizing the worst-case regret between the risk scores actually used to implement transfer payments between health payers and the true scores, which we do not know. We show on a simple test case with 10 insurers that the change in payments can be substantial.
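To give a flavor of the computations involved (this is a toy sketch of minimax-regret scoring under interval uncertainty on the regression coefficients, not the formulation in the paper, and all the numbers are made up), the worst-case and best-case scores for a given enrollee profile can each be obtained from a small linear program:

```python
# Toy sketch of minimax-regret risk scoring under interval uncertainty on the
# regression coefficients. This is NOT the paper's model; the feature vector
# and coefficient ranges below are made-up numbers for illustration only.
import numpy as np
from scipy.optimize import linprog

x = np.array([1.0, 0.0, 1.0, 2.0])            # enrollee risk-adjuster indicators (hypothetical)
beta_lo = np.array([0.20, 0.50, 0.10, 0.05])  # lower bounds on coefficients
beta_hi = np.array([0.40, 0.90, 0.30, 0.15])  # upper bounds on coefficients
box = list(zip(beta_lo, beta_hi))

# LP 1: smallest plausible true score, min over beta of x'beta on the box
score_lo = linprog(c=x, bounds=box, method="highs").fun
# LP 2: largest plausible true score, max of x'beta (solved as a minimization)
score_hi = -linprog(c=-x, bounds=box, method="highs").fun

# The score minimizing the worst-case regret max(r - score_lo, score_hi - r)
# is simply the midpoint of the two LP values in this stripped-down setting.
robust_score = 0.5 * (score_lo + score_hi)
print(score_lo, score_hi, robust_score)
```

The paper's formulation is of course richer, since it handles the uncertainty sets, the insurers and the resulting transfer payments jointly, but the appeal is the same: every subproblem remains a linear program.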

Comments welcome!


Optimal facility in-network selection for healthcare payers under reference pricing

My working paper with Victoire Denoyel and Laurent Alfandari of ESSEC Business School is online! Read it here.

Abstract: "Healthcare payers are exploring cost-containing policies to steer patients, through qualified information and financial incentives, towards providers offering the best value proposition. With Reference Pricing (RP), a payer or insurer determines a maximum amount paid for a procedure, and patients who select a provider charging more pay the difference. In a Tiered Network (TN), providers are stratified according to a set of criteria (such as quality, cost and sometimes location) and patients pay a different out-of-pocket price depending on the tier of their chosen provider. Motivated by a recent CalPERS program, we design two original MIP optimization models for payers that combine both RP and TN, filling the gap of quantitative research on these novel payment policies. Carefully designed constraints provide the decision maker with levers for a trade-off between cost reduction and patients' satisfaction. Numerical experiments provide valuable insights in that respect, displaying also how the tiers are scattered on a cost/quality plane. We argue that this system has strong potential in terms of costs reduction for public or private payers, quality increase for patients and visibility for high-value providers."

To the best of our knowledge, our paper presents the first quantitative model for designing tiered networks for a healthcare procedure under reference pricing, a type of payment policy that the CalPERS pilot study will certainly help popularize.

Our paper is here. (Comments welcome!) We're now working on extensions that we hope to share and discuss soon.

If you're interested in learning more about reference pricing for healthcare procedures, I recommend the work of James C. Robinson of UC Berkeley, who led the analysis of the CalPERS pilot, for instance "Payers test reference pricing and Centers of Excellence to steer patients to low-price and high-quality providers" (Health Affairs, subscription required).

 


To New-Course or Not To New-Course? #NSFGrantProposals

I was chatting with a friend of mine via Skype the other day, and she mentioned that she was preparing an NSF CAREER proposal. One thing we talked about was the broader impact requirement, and in particular the fact that just about everybody seems to say they're going to create a new course.

In her previous try (and she doesn't apply to my directorate, so don't try to figure out who it is), she'd written she'd incorporate the results of her research into an existing course, and a reviewer had apparently taken issue with the fact that she wouldn't create a new course.

And we were wondering (i) how many researchers who get those awards and have said in their proposal that they were going to create new courses actually do so (she had someone in mind...), and (ii) whether it really helps the wide dissemination of the research to create a new course. Doesn't it make more sense to incorporate results into an existing course with already established enrollment, which will reach more students and is more likely to be offered in the long term?

I wonder how many new courses based on the NSF-funded research of one faculty member have consistently high enrollment. Will the students of other advisers really care about taking that new course based on research they haven't had a hand in shaping, when they hopefully find their own research more interesting and more valuable? (It's going to be a long six years for them otherwise.) If only the PI's own students care to take the course, then there is no point in pretending the work is being disseminated any more widely than through regular research meetings. [PI=Principal Investigator]

I'm not saying that creating new courses is always a bad thing. I'm saying, however, that creating new courses should not be the automatic answer to the NSF's Broader Impacts requirement, and a case should be made that a new course will attract students beyond the instructor's immediate research group in a sustainable manner.

If it doesn't, then the researcher's real tool for broader impact is sending his or her doctoral students into the workforce after graduation and letting them shine (which is an excellent method, as a matter of fact; it also happens to be my method of choice, although I do like to blog a lot).

This entire discussion also assumes that incorporating research into doctoral-level teaching materials, whether through new courses or existing ones, is the best way to foster wide dissemination. I would also love to see novel research results trickle down to Master's-level courses and perhaps senior electives, although of course they could not make up the whole course.

Staying a bit longer with the idea of doctoral-level teaching as broader impact: implicit in it is the "push" approach to dissemination, in which students equipped with new tools push the knowledge into the real world once they graduate. But perhaps its cousin, "pull", should be preferred.

In the "pull" model, industry practitioners are made aware of the new tools through other means than the knowledge of a new hire (and surely the NSF expects more creative means than publishing papers in academic journals), and then insist that their employees implement these tools to gain an advantage over their competitors. After all, a new hire may have great, novel research tools at her disposal but they will only have an impact if her boss cares to have her use them.

I could write about this all day, but going back to the more manageable issue of achieving a broader impact through teaching: do you think creating a new course is best (if you were asked to evaluate standard NSF grant proposals of 3-year duration or longer) or do you favor incorporating results into an existing course?


Decision-making under uncertainty using robust optimization

Decision-making in most applications involves dealing with uncertainty, from random stock returns in finance to random demand in inventory management to random arrival and service times in a service center. This leads to the following two questions:

  1. how should this randomness be modelled?
  2. how should system performance be evaluated, and ultimately optimized? 

When assessing possible answers, it is fair to ask what makes a decision-making methodology more appealing than another. Intuitively, an “ideal” approach exhibits strengths in both dimensions outlined above, in the sense that (1) it allows for a modelling of randomness that does not require many additional assumptions, especially in settings where the validity of such assumptions is hard to check in practice, and (2) it allows for efficient optimization procedures for a wide range of problem instances, which in these days of “big data” most likely includes instances of very large scale. In other words, an “ideal” approach should build directly upon the information at hand – specifically, historical observations of the randomness – without forcing this information into rigid constructs that are difficult to validate even with the benefit of hindsight, and while preserving computational tractability to remain of practical relevance for the decision maker.

Interestingly, the method of choice to model uncertainty has long scored poorly on not just one but both criteria above. Indeed, probability theory assigns likelihoods to events that, when defined as a function of several sources of uncertainty, may occur in many different ways (think for instance of the sum of ten random variables taking a specific value, and all the possible combinations of those ten random variables that achieve the desired value as their sum). To assign a probability to this complex event, it is then necessary to resort to an advanced mathematical technique called convolution, made even more complex when random variables are continuous – and this is just to compute a probability, not optimize it. Additional hurdles include the fact that the manager observes realizations of the random variables but never the probability distributions themselves, and that such distributions are difficult to estimate accurately in many practical applications.
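As a small, concrete illustration of the convolution burden (my own example, not drawn from any particular application): computing the exact distribution of a sum of just ten i.i.d. discrete random variables already requires chaining ten convolutions, and there is no comparably simple recipe for most continuous distributions.

```python
# Illustration of the convolution point above: exact distribution of the sum of
# ten i.i.d. discrete random variables, each uniform on {0, 1, ..., 5}.
import numpy as np

single = np.full(6, 1.0 / 6.0)   # P(X = 0), ..., P(X = 5)

dist_of_sum = np.array([1.0])    # distribution of the empty sum (a point mass at 0)
for _ in range(10):
    dist_of_sum = np.convolve(dist_of_sum, single)  # one convolution per added variable

# dist_of_sum[k] now equals P(X_1 + ... + X_10 = k) for k = 0, ..., 50.
print(len(dist_of_sum))        # 51 support points
print(dist_of_sum.sum())       # sanity check: probabilities sum to 1
print(dist_of_sum[25])         # probability that the sum equals 25
```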

So why have probabilities become the standard tool to model uncertainty? A possible answer is that they were developed to describe random events in the realm of pure mathematics and have since been used beyond their intended purpose to guide business managers in their optimization attempts. This might have happened because academicians and practitioners alike lacked other tools to describe uncertainty in a satisfactory manner, or because the systems under early consideration remained simple enough to fit easily within the probabilistic framework. But as the size of the problems considered by today’s managers increases and today’s business environments are defined more and more by fast-changing market conditions, it seems particularly urgent to bring forward a viable alternative to probabilistic decision-making that exhibits the attributes outlined above. This is precisely what researchers in a field called “robust optimization” have done. (Disclosure: robust optimization is also one of my key research areas.)

By now robust optimization has been a hot topic in operations research for almost two decades, and I won’t go over its distinguished contributions to fields as varied as portfolio management, logistics and shortest path problems, among many others, in this post – for that purpose, the interested reader is referred to one of the several overview papers available on the Internet.

For today’s post I will focus on a specific aspect of robust optimization that is currently gaining much traction, specifically, the use of robust optimization as a “one-stop” decision-making methodology that (a) builds directly upon the data at hand, (b) remains tractable, intuitive and insightful in a wide range of settings where probabilistic models become intractable and (c) leads to the axioms of probability theory as a consequence of the modelling framework.

While (a) and (b) have been standard arguments in favor of robust optimization for at least a decade, the connection with the world of probabilities provided in (c) is a novel development that should appeal to many decision makers. The resulting framework is at the core of a paper by Chaithanya Bandi and Dimitris Bertsimas – my former dissertation adviser – published last summer in Mathematical Programming.

They suggest modelling sources of uncertainty not as random variables with probability distributions but as parameters belonging to uncertainty sets that are consistent with the limit laws of probability (which bound the deviation of a sum of independent and identically distributed random variables from their mean, using their standard deviation and the square root of the number of random variables). Other asymptotic laws can be incorporated into the uncertainty set in specific circumstances.
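In symbols, and in my shorthand rather than the paper's exact notation, a central-limit-theorem-style uncertainty set for i.i.d. sources of uncertainty $z_1,\dots,z_n$ with mean $\mu$ and standard deviation $\sigma$ looks like

$$\mathcal{U} = \left\{ (z_1,\dots,z_n) \;:\; \left| \sum_{i=1}^{n} z_i - n\,\mu \right| \le \Gamma\,\sigma\,\sqrt{n} \right\},$$

where the budget parameter $\Gamma$ controls how conservative the decision maker wishes to be (a $\Gamma$ of two or three already covers essentially all the outcomes the central limit theorem deems plausible).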

The objective of the problem then becomes to optimize the worst-case value of some criterion, instead of its expected value as would have been the case in stochastic optimization.
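To give a feel for what that looks like computationally, here is a toy robust portfolio example of my own (not taken from the Bandi-Bertsimas paper), using a budget-of-uncertainty set in the style of Bertsimas and Sim: maximizing the worst-case return over such a set reduces, via linear programming duality, to a single linear program.

```python
# Toy robust portfolio sketch (my example, not from the Bandi-Bertsimas paper):
# maximize the worst-case return min_{z in U} z'x over portfolio weights x,
# where each return z_i lies in [mu_i - sigma_i, mu_i + sigma_i] and at most
# gamma of them deviate fully from their mean (a budget-of-uncertainty set).
# The robust counterpart below is a plain linear program.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.06, 0.09, 0.12, 0.15])      # nominal returns (made up)
sigma = np.array([0.03, 0.06, 0.10, 0.15])   # maximum deviations (made up)
gamma = 2.0                                   # budget of uncertainty
n = len(mu)

# Decision vector: [x (n portfolio weights), p (n dual variables), q (1 dual variable)]
c = np.concatenate([-mu, np.ones(n), [gamma]])                    # minimize -mu'x + sum(p) + gamma*q
A_ub = np.hstack([np.diag(sigma), -np.eye(n), -np.ones((n, 1))])  # sigma_i * x_i <= p_i + q
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), np.zeros(n), [0.0]]).reshape(1, -1)  # weights sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n + [(0, None)] * (n + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x = res.x[:n]
print("robust weights:", np.round(x, 3))
print("worst-case return:", round(-res.fun, 4))
```

Setting gamma to zero recovers the nominal problem, while increasing it toward the number of assets makes the solution more and more conservative.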

Bandi and Bertsimas discuss:

  • Using historical data and the central limit theorem,
  • Modeling correlation information,
  • Using stable laws to construct uncertainty sets for heavy-tailed distributions with infinite variance,
  • Incorporating distributional information using “typical sets”, first introduced in the context of information theory, which exhibit the properties that (i) the probability of the typical set is nearly 1 and (ii) all elements of the typical set are nearly equiprobable, and are defined in the Bandi and Bertsimas paper in terms of uncertainty sets with examples drawn from common distributions.

The rest of the paper shows the results obtained by applying this framework to three applications.

Performance analysis of queueing networks. (Some of the work described in this section is joint work between Bandi, Bertsimas and Nataly Youssef - download their paper here.) The authors introduce the concept of a robust queue, where arrival and service processes are modelled by uncertainty sets instead of being assigned probability distributions, and derive formulas for the worst-case waiting time of the n-th customer in the queue. They connect their results with those obtained in traditional queueing theory and derive results, not believed to have been previously available, for systems with heavy-tailed behavior such as Internet traffic.
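To convey the flavor of that analysis with a back-of-the-envelope calculation (my own simplified rendering, not the authors' formulas): if the partial sums of service times and interarrival times are only assumed to stay within CLT-style uncertainty sets, the classical Lindley recursion turns the worst-case waiting time into a one-dimensional maximization.

```python
# Back-of-the-envelope sketch (my simplification, not the authors' results):
# worst-case waiting time of the n-th customer in a single queue when partial
# sums of service times and interarrival times lie in CLT-style uncertainty
# sets. The Lindley recursion reduces the bound to a search over how many
# customers back the worst "busy period" starts.
import numpy as np

def worst_case_wait(n, mu_a, sigma_a, gamma_a, mu_s, sigma_s, gamma_s):
    """Upper bound on the waiting time of customer n.

    mu_a, sigma_a: mean and standard deviation of interarrival times
    mu_s, sigma_s: mean and standard deviation of service times
    gamma_a, gamma_s: budgets of uncertainty for arrivals and services
    """
    k = np.arange(0, n + 1)  # length of the candidate busy period
    return np.max(k * (mu_s - mu_a) + (gamma_a * sigma_a + gamma_s * sigma_s) * np.sqrt(k))

# Hypothetical numbers: a queue at 90% utilization with fairly variable traffic.
print(worst_case_wait(n=1000, mu_a=1.0, sigma_a=1.0, gamma_a=2.0,
                      mu_s=0.9, sigma_s=0.9, gamma_s=2.0))
```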

Further, they analyze single-class queueing networks using their framework and present computational results where they compare their approach, dubbed the Robust Queueing Network Analyzer (RQNA), with results obtained using simulation and the Queueing Network Analyzer (QNA) developed in traditional queueing theory. Their observation is that RQNA’s results are often significantly closer to simulated values than QNA’s.

Optimal mechanism design for multi-item auctions. (Full paper.) In this problem, an auctioneer is interested in selling multiple items to multiple buyers with private valuations for the items. The auctioneer’s goal in the robust optimization approach is to maximize the worst-case revenue, with his beliefs on buyers’ valuations modelled through uncertainty sets. His decisions are the allocation of items and the payment rules, which should satisfy the following properties:

  • Individual Rationality: Bidders do not derive negative utility by participating in the auction, assuming truthful bidding, i.e., bidding their true valuation of an item.
  • Budget Feasibility: Each buyer is charged within his budget constraints.
  • Incentive Compatibility: The total utility of the i-th buyer under truthful bidding is greater than the total utility that Buyer i derives by bidding any other bid vector.

Bandi and Bertsimas provide a robust optimization mechanism (ROM) that solves the problem and consists of (1) an algorithm to compute the worst-case revenue before a bid vector is realized, and (2) an algorithm to compute allocations and payments afterward, which also uses the worst-case revenue as input.

In addition, they investigate the special case where the buyers do not have any budget constraints, for which they compare the resulting algorithms in their model with the Myerson auction mechanism. They argue that their method in that setting exhibits stronger robustness properties when the distribution or the standard deviation of the valuations is misspecified.

Pricing multi-dimensional options. In this finance problem, Bandi and Bertsimas (along with their collaborator Si Chen) propose to model the underlying price dynamics with uncertainty sets and then apply robust optimization rather than dynamic programming to solve the pricing problem. They illustrate their approach in the context of European call options and reformulate the problem as a linear programming problem.

The approach has the flexibility to incorporate transaction costs and liquidity effects. It also captures a phenomenon known in finance as the implied volatility smile, which can be explained in the context of robust optimization using risk aversion arguments. Bandi and Bertsimas further give examples of the accuracy of their method relative to observed market prices. You can download that paper here.

The authors’ central argument is that “modelling stochastic phenomena with probability theory is a choice” and that “given the computational difficulties in high dimensions, we feel we should consider alternative, computationally tractable approaches in high dimensions.” They provide compelling evidence that robust optimization is well-suited for that purpose.


On Scientific Publishing

Here are links to a few articles on scientific publishing, especially related to problems or misconduct, that appeared in The Economist over the past year or so. This is obviously not an exhaustive discussion of the topic. I also wrote about an ethics scandal at Duke in May 2008; the reader might find a January 2007 article in Nature about recent cases of academic misconduct particularly interesting. One of those cases is described at length in the 2009 book "Plastic fantastic: How the biggest fraud in physics shook the scientific world."

Let's start with an August 2010 article about Harvard researcher Marc Hauser ("Monkey business?"), "who made his name probing the evolutionary origins of morality, [and] is suspected of having committed the closest thing academia has to a deadly sin: cheating." The issues are described in a New York Times article as pertaining to "data acquisition, data analysis, data retention, and the reporting of research methodologies and results."

While both The Economist and the NYT took pains to uphold Hauser's presumption of innocence from deliberate wrong-doing at the time, The Chronicle of Higher Education painted a darker portrait: "An internal document... tells the story of how research assistants became convinced that the professor was reporting bogus data and how he aggressively pushed back against those who questioned his findings or asked for verification." Hauser resigned from Harvard in July 2011, about a year after being found solely responsible for eight counts of scientific misconduct.

“An array of errors” (September 10, 2011) focuses on the work by Duke University researchers, who reported in 2006 that they were able to “predict which chemotherapy would be most effective for an individual patient suffering from lung, breast or ovarian cancer” using a technique based on gene expression, which seemed to hold tremendous potential for the field of personalized medicine.

Unfortunately, a team of researchers at the MD Anderson Cancer Center in Houston quickly ran into trouble as they attempted to replicate the results. They also realized that the paper was riddled with formatting errors, for instance in the tables.

The Duke University team nonetheless decided to launch clinical trials based on their work. When another researcher, at the National Cancer Institute, expressed concerns about the work, Duke suspended the trials and launched an internal investigation, but the review committee, having “access only to material supplied by the researchers themselves,” did not find any problem and the clinical trials resumed.

But “in July 2010, matters unraveled when the Cancer Letter reported that Dr [Anil] Potti [of Duke University] had lied in numerous documents and grant applications,” for instance lying about having been a Rhodes Scholar in Australia. This led to Potti’s resignation from Duke, the end of the clinical trials, and the retraction of several prominent papers. A committee was formed at Duke to investigate what went wrong.

The most interesting part of the article, I found, was the description of the academic journals’ reaction: “journals that had readily published Dr Potti’s papers were reluctant to publish [a scientist’s] letters critical of the work.” The article also touches upon Duke’s being slow to deal with potential conflicts of interest by the researchers, and the fact that “peer review… relies on the goodwill of workers in the field, who have jobs of their own and frequently cannot spend the time needed to check other people’s papers in a suitably thorough manner.”

An editorial in Nature Cell Biology was written with Potti's case in mind, but extends to scientific misconduct in general. Sadly, it takes the easy way out, by stating that "Although journals can do much to promote integrity and transparency in the practise and presentation of research... it is not our mission to police the research community." This lack of responsiveness seems unfortunately in line with the paragraph I quote above.

“Scientists behaving badly” (October 9, 2010) is about a Chinese scientist and a blogger who publishes allegations of scientific misconduct on his website, some of which “undoubtedly… shine a light on the often-murky business of Chinese science” while others “are anonymous and lack specifics.” While the article focuses on the long-running feud between a urologist and a self-proclaimed “science cop”, with one casting doubts on the validity of the other's research results, it can also be viewed as a call for an independent expert committee to investigate allegations of misconduct in China.

Here is the quote that most caught my attention: “Measured by the number of published papers, China is the second most productive scientific nation on Earth. Incidents like this, though, call into question how trustworthy that productivity is.”

This provides a great transition into the third article, “Climbing Mount Publishable” (November 13, 2010), subtitled: “The old scientific powers are starting to lose their grip.” Quick summary: “In 1990 [North America, Europe and Japan] carried out more than 95% of the world’s research and development. By 2007 that figure was 76%.” Elsewhere in the article, we learn that “America’s share of world publications, at 28% in 2007, is slipping. In 2002 it was 31%.” The article also discusses metrics regarding gross domestic expenditure on R&D, share of national wealth spent on R&D, number of researchers and share of world patents.

Because countries view R&D output as a measure of their intellectual capital and prowess, the risk of plagiarism from researchers pressured for ever-more output is real. I wrote about this in a recent post on impact factors. In particular: "The following [excerpt of an article in The Chronicle of Higher Ed] caught my attention: "In China, scientists get cash bonuses for publishing in high-impact journals, and graduate students in physics at some universities must place at least two articles in journals with a combined impact factor of 4 to get their Ph.D.’s." Is putting so much pressure on scientists really a good idea? Maybe such high stakes explain some egregious cases of plagiarism over the past few years, such as the one Prof Alice Agogino of UC Berkeley was recently a victim of. You can find her paper (from 2004) here, and the other paper (from 2007) there."

I do not know why those two researchers decided to do this, but as developing countries pressure their scientists to excel on the international stage as a way of demonstrating the value of their education and the brainpower of their citizens, it has to be tempting for scientists struggling to come up with ideas to just take someone else’s paper and publish it elsewhere under their own name, hoping not to get found out. This also discredits the hard-working researchers of the same country who are actually putting in time and effort to come up with innovative ideas.

It is therefore very important for journals to take an aggressive stance toward plagiarism. All articles identified as plagiarizing others’ papers (in Agogino’s case the paper is word for word hers from beginning to end, except that other people’s names appear on top, so it is a very clear-cut situation) should have a highly visible mention added online stating that the paper has been found to plagiarize another paper (leaving aside the issue of who among the authors plagiarized the paper and who just marveled at his co-author’s sudden perfect mastery of English). Maybe that would make would-be plagiarizers think twice before they act.

Finally, “Of goats and headaches” (May 28, 2011) discusses the very lucrative field of academic publishing. The focus is on academic journals, whose business model involves getting university libraries to pay for very expensive subscriptions with little information as to whether these journals are useful or often consulted, since the librarians are not the primary consumers. As a reminder, academic journals get their articles for nothing (authors receive no royalties) and reviewers review for free. A few journals, such as the Journal of Economic Dynamics and Control, actually charge a submission fee of $100 to authors hoping to be published in their pages.

Personally, I feel that my institution’s subscriptions to the journals I consult online have been useful (my favorite journal is Interfaces), but I wonder how many journals are rarely browsed; besides, there are only a few journals I consult regularly and I can find many pre-prints available for free download on researchers' websites. A mechanism to rein in subscription prices would certainly be useful. This would start by having librarians share the price of subscriptions with researchers so that researchers can help them decide which subscriptions are simply not worth it and which ones should be kept. In addition, librarians should track how many times a given journal is accessed online to gauge its usefulness.

But what I dislike the most about academic publishing is the business of textbooks. Professors, whose royalty rates - according to one of my former colleagues - “aren’t going to make anyone rich”, write the whole textbook by themselves. There is simply no editorial help in the way a book of fiction or nonfiction would be revised by an editor with a keen eye. The marketing around textbooks is minimal – frankly, the emails and newsletters I receive about new textbooks vaguely related to my area of expertise feel a lot like spam.  And for that, publishing houses charge upward of $100 (often closer to $200) and change editions every few years to limit the market for used textbooks.

In France we paid the cost of the photocopies for a course packet put together by the professor. Showing up in the United States and learning how much money we were expected to spend on books definitely came as a culture shock. (Campus bookstores add a sizable premium for the privilege of putting all the books in one convenient location too.) It should come as no surprise that I do not require textbooks for my courses. But, in contrast with the cases mentioned earlier in this post, the practices of the academic publishing business are completely legal.


On Impact Factors

The International Federation of Operational Research Societies, or IFORS, recently reposted an article by Richard Monastersky that first appeared in The Chronicle of Higher Education. When I looked up the original article, I realized it was now six years old, but many of the issues remain very timely. The article describes the use and over-use of impact factors, and the techniques that some less scrupulous people employ to game the system.

First, my non-academic readers might ask: what are impact factors? They are the brainchild of a Philadelphia-based researcher named Eugene Garfield, who came up in the late 1950s with "a grading system for journals, that could help him pick out the most important publications from the ranks of lesser titles." The system relies on "tallying up the number of citations an average article in each journal received." Fair enough. But the tool now plays an important role in tenure and hiring decisions, because universities want a simple metric to evaluate the quality of the research, and papers accepted by journals with high impact factors seem likely to be of high quality. This, predictably, has led to some abuse.

The calculation of impact factors, by the company ISI (owned by Thomson Corporation), goes as follows: "To calculate the most recent factor for the journal Nature, for example, the company tallied the number of citations in 2004 to all of the articles that Nature published in 2002 and 2003. Those citations were divided by the number of articles the journal published in those two years, yielding an impact factor of 32.182 — the ninth-highest of all journals."
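In code, that calculation is just a ratio; the quoted Nature figures are real, but the counts in the sketch below are made up.

```python
# The impact-factor arithmetic described above, with made-up counts.
def impact_factor(citations_this_year_to_prev_two_years: int,
                  articles_published_in_prev_two_years: int) -> float:
    return citations_this_year_to_prev_two_years / articles_published_in_prev_two_years

# Hypothetical journal: 1,800 citations in 2004 to the 450 articles it
# published in 2002 and 2003 gives an impact factor of 4.0.
print(impact_factor(1800, 450))
```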

But trying to draw conclusions about average quality from an average number is bound to generate misconceptions and errors. "For example, a quarter of the articles in Nature last year drew 89 percent of the citations to that journal, so a vast majority of the articles received far fewer than the average of 32 citations reflected in the most recent impact factor." If a paper is accepted by a top-ranked journal, does its impact on the scientific field reflect the importance of the journal, or is it, on the contrary, among the half of accepted papers whose quality falls below the journal's median?

The following caught my attention: "In China, scientists get cash bonuses for publishing in high-impact journals, and graduate students in physics at some universities must place at least two articles in journals with a combined impact factor of 4 to get their Ph.D.’s." Is putting so much pressure on scientists really a good idea? Maybe such high stakes explain some egregious cases of plagiarism over the past few years, such as the one Prof Alice Agogino of UC Berkeley was recently a victim of. You can find her paper (from 2004) here, and the other paper (from 2007) there.

But what should impact factors really be blamed for? I find the claim that they are "threatening to skew the course of scientific research" and that "investigators are now more likely to chase after fashionable topics" because they want to get published in high-impact journals frankly outlandish. Investigators chase after fashionable topics because they need grants to support their research groups and get promoted. Journals in my field (operations research) publish contributions in a variety of areas, from theory to logistics to transportation, and no one needs to know impact factors to be aware that the hot topics right now are in health care and energy. Besides, the fact that your research focuses on a hot topic doesn't mean you'll make the sort of meaningful contribution that warrants publication in a top journal.

Another dynamic that was left unanalyzed in the article is that between journals and senior researchers. The article spends a lot of time and space discussing the idea that editors might reject a paper they feel won't be cited enough and thus would be detrimental to the journal's impact factor, but if the paper is co-authored by a top researcher and falls within that researcher's core area of expertise, it is hard to imagine that the paper would not be cited many, many times.

So a more relevant question would be: among the papers published by those top journals, how many are authored by junior researchers without senior co-authors? How many by junior researchers with other junior researchers? How many by junior researchers with students? If many highly-ranked journals publish mostly the work of senior researchers, then it is not reasonable to ask junior researchers - who are supposed to demonstrate their own research potential without the guidance of senior staff - to publish in those journals too in order to benefit from the golden aura of the high impact factor. (Some junior researchers certainly will be able to, but that shouldn't be a widespread standard.) This reminds me of those statistics about average salaries per profession, where journalists forget to mention that the averages are computed for employees ranging in age from 25 to 64, thus giving the impression that any college graduate can hope to make $80k a year within a few short years out of college. Maybe the high impact factors are due to the predominance of articles by well-established scientists with decades of experience in the profession.

This being said, a lot of the article rang true. For instance: "Journal editors have learned how to manipulate the system, sometimes through legitimate editorial choices and other times through deceptive practices that artificially inflate their own rankings. Several ecology journals, for example, routinely ask authors to add citations to previous articles from that same journal, a policy that pushes up its impact factor."

I had heard of the practice before, although I've never experienced it. I have, though, been in a situation where the anonymous reviewer requested that a citation be added to my (and my student's) paper, and when I looked up the citation there didn't seem to be anything in common with the topic of our paper. On the other hand, if the reviewer was one of the authors of that paper, as seems likely, then having it cited increases his number of citations, which is a metric universities take into consideration at promotion time. I could have complained, at the risk of having my paper left in limbo for several more weeks or months. I decided I didn't have the time to wage that fight and simply made the citation as inconspicuous as possible.

Self-citations, where previous papers published in a journal are cited in that same journal, are certainly a potential issue. The article mentions the case of a researcher who "sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: “I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal’s themes.”"

The editor who sent the email later said that "he never intended the request to be read as a requirement," which is disingenuous, given that the paper had not been accepted yet and researchers can't submit their paper to other journals until it has been withdrawn or rejected, thus putting them at the editors' mercy.

An alternative metric, called the eigenfactor, in which citations from high-impact journals are weighted more heavily, has recently been developed to address some of the flaws of the impact factor. To evaluate a scientist's value to his or her field, people should use the h-index rather than draw inferences from journal-level metrics, although the h-index suffers from drawbacks of its own. And, as it becomes more widely used, it is sure to be more misused too.
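For readers unfamiliar with it, the h-index is easy to compute: a researcher has index h if h of his or her papers have each been cited at least h times. A minimal sketch:

```python
# Computing a researcher's h-index from a list of per-paper citation counts.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```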


Robust Timing of Markdowns

My student Mike Dziecichowicz, former student Daniela Caro and I recently completed a paper on the robust timing of markdowns in revenue management. We apply robust optimization to the arrival rates of the demand processes, in an approach that does not require knowledge of the underlying probability distributions and instead incorporates range forecasts on those rates, and that captures the manager's degree of risk aversion through intuitive budget-of-uncertainty functions. These budget functions bound the cumulative deviation of the arrival rates from their nominal values over the lengths of time for which a product is offered at a given price.

A key issue is that using lengths of time as decision variables - a necessity for markdown-timing problems but a departure from the traditional robust optimization framework - introduces non-convexities in the problem formulation when the budget functions are strictly concave. Concavity is a common assumption in the literature (it reflects that the longer a product is on sale, the more the various sources of uncertainty tend to cancel each other out, in the spirit of the law of large numbers, so that the protection level increases with time at a decreasing marginal rate) and therefore must be incorporated in a tractable manner.
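In notation (a simplified rendering of mine, not necessarily the paper's exact formulation), if a product stays at a given price for $\tau$ periods, with nominal arrival rate $\bar{\lambda}_t$ and range-forecast half-width $\hat{\lambda}_t$ in period $t$, the uncertainty set has the flavor of

$$\Lambda(\tau) = \left\{ \lambda \;:\; |\lambda_t - \bar{\lambda}_t| \le \hat{\lambda}_t \ \text{for all } t, \quad \sum_{t=1}^{\tau} \frac{|\lambda_t - \bar{\lambda}_t|}{\hat{\lambda}_t} \le \Gamma(\tau) \right\},$$

where the budget-of-uncertainty function $\Gamma(\cdot)$ is increasing and concave in the time on sale $\tau$. That concavity is precisely what introduces the non-convexity mentioned above, and it is what the MIP approximation listed in the contributions below is designed to handle.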

Specifically, we make the following contributions:

  • We model uncertainty on the arrival rates of the demand processes through range forecasts and capture the manager's risk aversion through a "budget of uncertainty" function (of time on sale at a specific price point), which limits the cumulative deviation of the arrival rates from their mean and is determined by the decision-maker.
  • In the nominal case and in the case where the budget of uncertainty function is linear in time, we provide closed-form solutions for the optimal sale time in the single-product case.
  • In the case where the budget of uncertainty function is concave and increasing, again for a single product, we derive a mixed-integer programming (MIP) formulation that approximates the robust non-convex formulation.
  • We develop a policy about the optimal time to put products on sale, which depends on both the number of items unsold and on the time-to-go, and use our robust optimization model to determine its parameters.
  • We extend our analysis to the case of multiple products. In particular, we present the idea of constraint aggregation to maintain the performance of robust optimization for that problem structure.
  • We provide numerical experiments to test the performance of the robust optimization approaches described in this paper.

The full paper can be downloaded here. Mike, a former NSF IGERT fellow, is a doctoral candidate who will be graduating next month (and is looking for a job - you can learn more about him here); Daniela received her Bachelor's and Master's degrees from our department in 2008 and 2009, respectively (the Master's as a Presidential Scholar) and now works as a corporate pricing specialist at MillerCoors in Chicago. Feel free to email us any comments!

This research was funded through NSF Grant CMMI-0540143.


Lehigh ISE graduate students continue to shine on national stage

The past few years have been great for my department of Industrial and Systems Engineering here at Lehigh, with former students Dr Ying Rong and Dr Jim Ostrowski receiving national recognition last year for their work:

  • Ying was a finalist in the 2009 MSOM student paper competition for his paper entitled "Bullwhip and Reverse Bullwhip Effects under the Rationing Game", supervised by my colleague Prof Larry Snyder (MSOM, which stands for Manufacturing and Service Operations Management, is a society of INFORMS, the Institute for Operations Research and the Management Sciences),
  • Jim received second prize in the George E. Nicholson student paper competition (the general student paper competition run by INFORMS for papers in operations research and the management sciences) for his paper entitled "Orbital branching", advised by my then colleague Prof Jeff Linderoth, now at Wisconsin-Madison.

I am happy to report that the department has received another finalist spot, this time through Ban Kawas PhD'10 in the 2010 INFORMS Financial Services Section paper competition, for her paper entitled "Short sales in Log-robust portfolio management", which I supervised. The winners will be announced at the annual meeting in November. Ban will start a postdoctoral fellowship at the IBM Research Lab in Zurich in a few weeks.

It is very exciting to see the work of our graduate students receive such recognition in a wide array of operations-research-related areas, from supply chain management to portfolio management to pure optimization. We hope the trend will continue!