
March 2009

Science and Online Journalism

A colleague forwarded me the link to this editorial in a recent issue of Nature, "Filling the void", which urges scientists to "rise up and reach out" as newspapers cut jobs and the number of science journalists in particular decreases. It summarizes the issues facing science outreach quite nicely - more and more young scientists blog, but science blogs can't introduce the general public to scientific issues, since laypeople don't seek them out in the first place.

On a related topic, the March/April issue of Technology Review has an article ("But Who's Counting?") about the challenge of accurately determining the number of website visitors. This matters because web traffic plays a key role in the rates websites can charge for advertisements, and online journalism is moving toward an advertising-only business model. The article enumerates the following issues with traditional methods of measuring web traffic: "in ascending order of impact, overcounting individuals with multiple computers or web browsers; counting 'mechanical visits' by Web 'bots' and 'spiders' (for example, when Google crawls the Web to estimate the popularity of sites) as visits by real people; and overcounting individuals who periodically flush out the 'cookies' of code that sites stash on browsers so that returning visitors can be recognized." (I often do the last one.)
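For fun, here's a toy simulation of that last effect (my own made-up numbers, not the article's data): if each visitor clears cookies with some probability every week, a site that counts unique cookies will overstate its monthly unique visitors considerably.

```python
import numpy as np

rng = np.random.default_rng(7)
true_visitors = 10_000      # actual distinct people visiting every week
weeks = 4                   # one "month" of measurement
flush_prob = 0.3            # weekly chance a visitor clears cookies (assumed)

# Week 1: every visitor receives a fresh cookie.
cookies_seen = true_visitors
# Later weeks: anyone who flushed cookies reappears under a brand-new cookie ID.
for _ in range(weeks - 1):
    cookies_seen += rng.binomial(true_visitors, flush_prob)

print(f"true unique visitors:   {true_visitors}")
print(f"unique cookies counted: {cookies_seen} "
      f"({cookies_seen / true_visitors:.1f}x overcount)")
```

With these assumed numbers the site would report nearly twice as many "unique visitors" as it actually has.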

The journalist uses a couple of annoying words, like "digerati" and "misprisions" - I consider myself cultivated, and yet I had to look them up - but overall the article is an insightful read, with valuable discussions of how to use "a methodology inherited from television audience research: the panel", where "panelists agree to have their Web browsing monitored through interviews and through 'meters', or spyware, installed on their personal computers." The issue? "[Panel-based audience research] tends to undercount people who look at sites at work."

A company named Quantcast is currently trying to develop new technologies to measure audiences, and its website offers an interesting overview of its methodologies: for instance, see here for the "Cookie to People Translation Overview" and here for the complete white paper. Quantcast competes for that business with Google, which plans to leverage its ownership of DoubleClick to combine audience data analysis with its ad-serving system, "so that media planners know which sites are best suited for which ads". This of course raises issues of dominance on Google's part.

Companies that had become used to television's Nielsen ratings might never know their web audience as precisely. But because it's possible to track customers' online behavior - especially when they click on display ads - online advertising might also generate higher-quality data than a more traditional medium like television. After all, companies have never been able to tell how many customers they gained by running a TV commercial in this or that time slot. Managers might be forgetting a bit too quickly the uncertainty of the times they feel so nostalgic about.


Robust Linear Optimization With Recourse and Cutting-Plane Methods

Prof. Marina Epelman from the University of Michigan, her doctoral student Tara Terry and I have finally put the finishing touches on our paper, "Robust Linear Optimization With Recourse," available for download at optimization-online.org. We propose a robust optimization approach to two-stage linear programming with recourse, and solve the resulting model using a cutting-plane algorithm based on Kelley's method. The advantage of the robust optimization approach is that it allows us to model random variables as uncertain parameters belonging to a polyhedral uncertainty set, and does not require knowledge of the underlying probability distributions. The decision-maker's conservatism is taken into account through a budget of uncertainty, which determines the size of the uncertainty set around the nominal values of the parameters. We show that the robust problem is a linear programming problem with a potentially very large number of constraints, which is why we use a cutting-plane algorithm as the solution technique.
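For readers who want the flavor of the method without reading the paper, here is a minimal illustrative sketch - toy data, a simple budget-of-uncertainty set, and a closed-form inner problem, not the algorithm from the paper - of how a cutting-plane loop handles a robust constraint: solve a master LP over the cuts found so far, search the uncertainty set for a violated constraint, add it as a cut, and repeat.

```python
# Cutting-plane loop for one robust constraint
#   a'x <= b  for all a in {a_nom + d*z : 0 <= z_i <= 1, sum_i z_i <= Gamma},
# with made-up data.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])          # minimize c'x (i.e., maximize x1 + 2*x2)
a_nom = np.array([1.0, 1.0])        # nominal constraint coefficients
d = np.array([0.3, 0.5])            # maximum deviations of each coefficient
b, Gamma = 4.0, 1.0                 # right-hand side and budget of uncertainty
bounds = [(0.0, 10.0)] * 2          # x >= 0, so the worst case uses z_i >= 0

def worst_case_a(x):
    # Inner problem: max over the budget set of (a_nom + d*z)'x.
    # With x >= 0 this is a tiny continuous knapsack: spend the budget
    # greedily on the coordinates with the largest d_i * x_i.
    z, budget = np.zeros_like(x), Gamma
    for i in np.argsort(-d * x):
        z[i] = min(1.0, budget)
        budget -= z[i]
        if budget <= 0:
            break
    return a_nom + d * z

A_ub, b_ub = [a_nom], [b]           # start from the nominal constraint only
for iteration in range(50):
    x = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds).x
    a_worst = worst_case_a(x)
    if a_worst @ x <= b + 1e-8:     # no violated scenario left: x is robust
        break
    A_ub.append(a_worst)            # add the violated cut and re-solve
    b_ub.append(b)

print("robust solution:", x, "found after", iteration + 1, "master solves")
```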

In addition, we formulate the robust problem with simple recourse as a series of m linear programming problems similar in size to the model without uncertainty, where m is the number of uncertain random variables, and provide techniques for finding worst-case realizations of the uncertain parameters. We also demonstrate performance via computational experiments, which yield insights into the structure and performance of the robust solutions.

We started working on this paper in 2005, after Marina and I ran into each other at the INFORMS annual meeting in San Francisco, where I had given a talk on preliminary ideas on the topic. An earlier version of the work has been around since 2006, but the revised algorithm and the new computational experiments have significantly improved the paper. Feel free to email us your comments and questions!


Should All Universities Be Research Universities?

The New York Times published an article entitled "State Colleges Also Face Cuts in Ambitions" earlier this week. It describes the changes at Arizona State over the past seven years, during which a new president led a drive for drastically increased research programs and enrollment (a goal stated more dramatically in the slogan "100,000 students by 2020").

Needless to say, the current budget situation is forcing the university - and others throughout the country - to reconsider its expansion plans, and has "rais[ed] questions about how many public research universities the nation needs and whether universities like Arizona State, in their drive to become prominent research institutions, have lost focus on their public mission to provide solid undergraduate education for state residents." Interesting questions indeed!

The article mentions budget cuts at the University of Florida, the University of Nevada - Las Vegas and the University of Vermont, although it also notes that the budget at the University of Michigan is largely intact.

While "not every university can be in the top 20," in reference to a list published by the National Science Foundation a year or two ago (scroll down to the list about institutions without a medical school), the article paints a very positive picture of Arizona State, with its in-state tuition of less than $6,000 for most programs, its "7,000 students with no family income at all" and its very generous aid packages for National Merit Scholars, while trying to increase the percentage of out-of-state students (who pay much higher tuition) to 40%.

The author raises but does not discuss a related question: if four-year universities are drifting toward the research model, "what share of resources should go to less expensive forms of education, like community colleges"? I like to think that in this age of globalization and of the "creative class" (a term coined by Richard Florida), the greater emphasis on innovation will make research training a basic requirement of any curriculum. It is, after all, the surest way to learn the creative thinking skills that are becoming so important.

But there is research, and then there is research - undergraduate research is different from Master's research, which is different from PhD research. And even within PhD research, a dissertation written at, say, Stanford is different from a dissertation written at a fourth-tier university, although the top schools graduate more academia-bound students than there are spots at other top schools, so quality does cascade down.

The goal for the new generation of research universities, it seems, is to attract companies that will create partnerships with professors and, hopefully, hire PhD graduates - in short, to help businesses implement more innovative solutions. But Master's students often have to work on a project or a thesis and demonstrate innovative thinking too, even if at a lower level than PhD students - with the advantage that most Master's students pay their own way, which reduces the need for teaching assistantships and other financial support. As the crisis forces institutions to cut programs, maybe it's time to ask whether some companies can benefit from creative thinking on a less transformative scale than what PhDs offer. Universities will still be dedicated to research if they train undergraduate and Master's students to think outside the box.


Are Newspapers Doomed?

There has been a lot of bad news for newspapers recently. The Seattle Post-Intelligencer printed its last paper edition and shifted to a web-only model similar to the Christian Science Monitor's ("Seattle Paper Shifts Entirely to the Web", New York Times, March 16, 2009). A previous NY Times article states that "for papers, a downsizing trickle becomes a flood" (March 12, 2009, "As Cities Go From Two Papers to One, Talk of Zero"), a point echoed by the Christian Science Monitor's "Newspapers' troubles escalate in recession" (March 16, 2009). Hearst announced last month that "it might sell or close The San Francisco Chronicle if it cannot wring enough savings from the money-losing newspaper" (article in the New York Times, February 24, 2009).

In January, the Atlantic published an article by Michael Hirschorn on "How the New York Times could survive," in which he suggests that the end of the Times's print edition might be closer than we imagine. He also quotes a December 2008 report by Fitch, the ratings agency: "Fitch believes more newspapers and newspaper groups will default, be shut down and be liquidated in 2009 and several cities could go without a daily print newspaper by 2010." We're not quite three months into 2009, and the prediction already seems even more accurate now than it did then.

Hirschorn correctly points out that online citizen journalism cannot be the sole answer - it's one thing to blog from your living room, quite another to go talk to sources, follow leads, and use years of training to write a compelling story. Not every article is Pulitzer material, but can one really imagine an individual blogger producing any given article in the Business, Education or US sections of the New York Times?

In Hirschorn's words: "If you're hearing few howls [...] over the impending death of institutional, high-quality journalism, it's because the public at large has been trained to undervalue journalists and journalism." The future he envisions for nytimes.com is one of a "bigger, better and less partisan version of the Huffington Post", with "a healthy dose of aggregation, a wide range of contributors, and a growing offering of original reporting." This doesn't sound terribly appealing to me.

While it's becoming clear that people aren't willing to pay for a daily dose of news in print when they can get breaking news on the Internet, there is still a case to be made for thoughtful reporting in weekly and monthly magazines. Many magazines are either national in scope and distribution, or local glossies where the ads take up two-thirds of the pages and the text is about the best restaurants in town. Maybe it's time for a weekly local investigative magazine that gives readers thoughtful coverage of their area.


Global Innovation

The March issue of Harvard Business Review has a thoughtful article on "Tapping the World's Innovation Hot Spots." The author, John Kao, notes that "countries around the globe are creating distinctive innovation models" and describes three models of innovation:

  • The focused factory model is embodied by countries like Singapore and Denmark, which "focus their innovation investments on a handful of industries or research fields." For instance, Singapore is strongly committed to research in the life sciences, offering tax breaks, research grants and state-of-the-art infrastructure to companies in that area.
  • The brute force model "appl[ies] massive amounts of low-cost labor and capital to a portfolio of innovation opportunities." This is the model followed by China.
  • The Hollyworld model focuses on creating a "global creative class". Silicon Valley is the best-known example of that model, which has also guided the development of Bangalore, Helsinki and Toronto.

The author also suggests that global companies use the different models in a mix-and-match approach, relying on offices in various countries to take advantage of each country's specific culture. He writes: "The United States is especially well-positioned to serve as a base for innovation systems integration," because of its cultural diversity, talent base and infrastructure.


Forecasting Attempts

The Economist has two articles on forecasting in its February 28th edition: in its Leaders section, the writer of "To forecast or not to forecast" comments on the recent decision of Unilever, Costco and Union Pacific "not to give annual earnings estimates for 2009." The journalist makes the excellent point that, instead of giving up earnings estimates altogether, companies should issue range forecasts.

I wrote about the case for range forecasts a little over a year ago (see this post, January 2008), and I hope to see the idea take off. As mentioned in my recent post on Gaussian copulas, the key is not to try (in vain) to find the perfect model, in finance as in operations management, but to make people more accustomed to the imperfection of models and forecasts by presenting several of them, with different outputs - hence forcing managers to use their best judgment in addition to the computer results. The article also suggests that companies refusing to issue forecasts are unlikely to have developed contingency plans to address missed targets, since they don't have targets to begin with; this should be a red flag for investors.
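To make the range-forecast idea concrete, here is a tiny sketch with made-up scenario outputs: instead of announcing a single number, the company reports an interval spanning, say, the 10th to 90th percentiles of its scenarios.

```python
# A range forecast from scenarios, with made-up numbers: report an interval
# instead of a single point estimate.
import numpy as np

rng = np.random.default_rng(3)
# Pretend these are annual earnings from 1,000 scenario runs of a planning
# model (varying demand, costs, exchange rates, and so on).
scenarios = rng.normal(loc=100.0, scale=15.0, size=1000)

low, mid, high = np.percentile(scenarios, [10, 50, 90])
print(f"earnings guidance: {low:.0f} to {high:.0f} (median {mid:.0f})")
```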

The topic of forecasting uncertainty is investigated further in the long article "Managing in the fog", which opens the Business section. A good idea when historical data does not cover a large range of business conditions is to resort to scenario planning; for instance Lego, the Danish toymaker, "has developed contingency plans for each scenario so that it can react swiftly whatever the coming months throw at it." Adaptability is the new game in town. Another great idea is to use rolling forecasts, where the end is always twelve months away, thus "discourag[ing] executives from becoming too fixated on the present at the expense of the future."

Another forecasting article ("An uncertain future") discusses the novel concept of prediction markets, which are used to help companies "pool the collective wisdom of their employees". Unsurprisingly, employees are more likely to take part in such markets, using virtual accounts to "buy and sell 'shares' that correspond to a particular outcome", when they are rewarded for their trouble and when they can see how the results are used.
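The article doesn't describe the trading mechanics; one standard mechanism is Hanson's logarithmic market scoring rule, sketched below with illustrative numbers. Its appeal is that the market maker's prices double as the crowd's implied probabilities.

```python
# Hanson's logarithmic market scoring rule (a common mechanism; the article
# does not say which one real companies use). Illustrative numbers only.
import numpy as np

b = 100.0                         # liquidity parameter: higher = slower prices
q = np.zeros(2)                   # shares outstanding: [outcome, complement]

def cost(q):
    # The market maker's cost function; a trade costs cost(after) - cost(before).
    return b * np.log(np.sum(np.exp(q / b)))

def prices(q):
    # Instantaneous prices, which double as implied probabilities (sum to 1).
    e = np.exp(q / b)
    return e / e.sum()

trade = np.array([50.0, 0.0])     # an employee buys 50 shares of the outcome
paid = cost(q + trade) - cost(q)
q += trade
print(f"trader paid {paid:.2f}; market now implies "
      f"P(outcome) = {prices(q)[0]:.2f}")
```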

Wall Street analysts used to judge managers on the ultimate accuracy of the growth forecasts they produced. This has never been a healthy approach to business. Hopefully, the current crisis will put executives' ability to adapt to changing market conditions front and center, rather than rewarding those who hurt the long-term success of their companies to ensure that short-term goals were met. The single point forecast for brick-and-mortar companies is as outdated as the single-parameter Gaussian copula model in finance.


Lehigh ISE department's 60th Anniversary Celebration

This post is for the department alumni and current students who read my blog. The department is celebrating its 60th anniversary in April and we hope to see you on Thursday, April 16, especially at the Schantz lecture delivered this year by alumnus and Air Products CEO John McGlade (5pm, Sinclair Auditorium), and at the banquet (6:30pm-10pm), which will feature an award ceremony, in addition to excellent company and great food.

I keep running into alumni who have not heard of our celebration, either because the email was mistakenly tagged as spam or because they are no longer checking the email address the department has on file for them, so I decided to follow a former student's advice and write a post on my blog. If you are an alum, tell your friends from the department and make plans to attend. It promises to be a memorable event. More info can be found here. To register, please click here. See you in April!


Finance's Gaussian Copulas, Part 2

This is the second part of a long post. The first part is here.

The Wired article I mentioned in yesterday's post describes Li's copula model as "a simple and elegant model, one that would become ubiquitous in finance worldwide," and makes it clear that its simplicity and - its close cousin - tractability became the reason for its downfall: "[Quants'] managers, who made the actual calls, lacked the math skills to understand what the models were doing or how they worked. They could, however, understand something as simple as a single correlation number. That was the problem." (As explained in my previous post, correlation turned out to vary over time, and to be very sensitive to estimation errors of the inputs, so the fact that the model required a single parameter gave managers a false sense of security.)
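To see how treacherous that single number is, here is a quick illustrative computation (my numbers, not the article's): under a Gaussian copula, the probability that two names default together grows sharply with the correlation parameter, even though each name's marginal default probability stays fixed.

```python
# Joint default probability under a Gaussian copula, by simulation
# (illustrative numbers): each name defaults with probability 5%, yet the
# chance of *both* defaulting balloons as the correlation parameter rises.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p = 0.05                          # marginal default probability of each name
z = norm.ppf(p)                   # default threshold on the latent normal scale
for rho in [0.0, 0.2, 0.4, 0.6, 0.8]:
    cov = [[1.0, rho], [rho, 1.0]]
    draws = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
    joint = np.mean((draws[:, 0] < z) & (draws[:, 1] < z))
    print(f"rho = {rho:.1f}: P(both default) ~ {joint:.4f}")
```

A small error in the estimated rho thus translates into a large error in the joint risk - exactly the kind of sensitivity a manager staring at one tidy number never sees.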

One of the inserts caught my eye - it describes the copula model in a stylized, colorful formula, and presents the equal sign as "a dangerously precise concept, since it leaves no room for error. Clean equations help both quants and their managers forget that the real world contains a surprising amount of uncertainty, fuzziness, and precariousness." What has the world become, if equal signs are now "dangerously precise concepts"?

In a January 2009 BusinessWeek article ("Financial Models Must Be Clean and Simple"), Emanuel Derman and Paul Wilmott emphasize that "at bottom, financial models are tools for approximate thinking." While Derman and Wilmott's stance in favor of simple and elegant models sounds contrarian given the copula debacle, what they seem to mean is that the blame shouldn't rest on the approximate models, but on the models' users forgetting their models are approximate.

Such stylized representations of the world are valuable as long as people who use the models remain aware of their assumptions and limitations. It is, after all, easier to remember assumptions if one makes only a few. Derman and Wilmott also insist: "The most important questions about any model are: What does it ignore, and how wrong is it likely to be?" - two questions quants, and people impressed by mathematics, have not learned to ask as often as they should. In addition, I enjoyed reading the "model maker's Hippocratic Oath" at the end of the column.

Derman, now at Columbia University after years at Goldman Sachs, is also quoted in The Economist's "In Plato's Cave -- Mathematical models are a powerful way of predicting financial markets. But they are fallible" (January 22, 2009), which recounts the beginnings of quantitative finance and then describes the crisis in CDOs (collateralized debt obligations built from mortgage pools); in particular, CDOs were "impossible to model in anything but the most rudimentary way". This is a case where the alternative to simplicity and elegance would have been intractability, as opposed to richer models.

The article also comments on Value-at-Risk and "tail risk" (for VaR, see this old post of mine about an excellent New York Times article on the topic) and presents the remarkable idea, due to economist Daron Acemoglu at MIT, that "modern finance may well be making the tails fatter", because "you are swapping everyday risk for the exceptional risk that the worst will happen and that your insurer [like AIG] will fail."

The 2005 article in WSJ, also mentioned in yesterday's post ("How A Formula Ignited Market That Burned Some Big Investors", September 12, 2005), quotes the head of market risk management at JP Morgan as saying: "We're not stupid enough to believe [the model] is omniscient. All risk metrics are flawed in some way, so the trick is to use a lot of different metrics." (I wonder how his team has fared in the crisis - was he, in the end, stupid enough, just like the others?) Firms claimed they combined Li's model with their own proprietary frameworks, but fell abysmally short of expectations - maybe everybody assumed everybody else would go to the trouble of creating their own models and went for the lazy option of using Li's formula only, hoping no one would notice.

Despite his bravado, the manager at JP Morgan is fundamentally correct: people need to use many different models to develop a full picture of risk. They won't understand the limitations of a framework better simply because it is simple, elegant and convenient. On the contrary - it will tempt them to forget its limitations because they will want to use it all the time.

Quants and investors must become used to multiple models, focusing on different aspects of the problem, resulting in different optimal allocations, so that it is clear the allocations are only guidelines and the quants can reclaim the responsibility of making decisions, instead of simply following the computer's recommendations without understanding how it got those numbers. Quants' bosses need to become accustomed to the fundamental uncertainty and inaccuracy inherent to mathematical models. There is, after all, always something the models don't capture.

The WSJ article also shows remarkable prescience in identifying the risk of "garbage in, garbage out" - "as with any model, forecasts investors make by using the model are only as good as the inputs" - and gives an example of an early hiccup with the prized copula function, involving General Motors. Stanford's Darrell Duffie, quoted in both the Wired and the WSJ articles, gets the last word: already in 2005, he stated: "The question is, has the market adopted the model wholesale in a way that has overreached its appropriate use? I think it has." It's a pity no one heeded his warning back then.

A student of mine drew my attention to an article called "Management Science and the Management of Science," which the editor-in-chief of Management Science - one of the leading scholarly journals in my field - wrote for the December 2008 issue of the journal. In it, Wallace Hopp comments on the move of operations management research over the past few decades from tactical, low-key considerations (computing the parameters of an inventory replenishment policy, for instance) to strategic, far-reaching ones (how best to structure a supply chain), in part to gain more relevance in the eyes of top management.

Hopp writes: "Although [tactical issues] are of interest to engineers and middle managers, they are not central to the concerns of senior management. So, in the 1990s, [operations management] researchers began to aim higher at questions of strategy." My student suggested - and this is one of the best comments I've heard about the financial crisis in a while - that maybe quants will now reposition themselves and reorient their models away from tactical, precise issues toward more strategic, high-level problems.   


Finance's Gaussian Copulas: The New Frankenstein Monster

A former student of mine recently sent me the link to this Wired article: "Recipe for Disaster: The Formula that Killed Wall Street," by Felix Salmon (February 23, 2009) - I love it when graduates keep in touch, and I love it even more when they email me interesting material for my blog! The article describes the groundbreaking work by a quant named David X. Li, who pioneered the use of Gaussian copula models (more on what that means below) to estimate correlation between two random events, and better quantify the probability of these events occurring simultaneously. The events of interest here are mortgage defaults.

Correlation captures how much random quantities evolve together; for instance, sales of umbrellas and sunscreen lotion on any given day are negatively correlated - if you need a lot of one, you probably won't need much of the other. In finance, quants used to struggle with the pricing of mortgage pools, financial instruments that were popular because mortgages were bundled together and homeowners weren't supposed to all default at the same time: a few would, but most would continue repaying their loans, and investors would continue receiving their money. Investors wanted to know the likelihood of two homeowners in the pool defaulting, to make sure they were properly compensated for the risks they took. That likelihood was linked to the correlation between the two events, which was not easy to estimate, since there weren't many data points (there were relatively few defaults during the real estate boom).
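As a toy illustration with made-up sales figures, the sample correlation of the umbrella and sunscreen example comes out strongly negative, as expected:

```python
# Made-up daily sales figures: high umbrella sales line up with low
# sunscreen sales, so the sample correlation is strongly negative.
import numpy as np

umbrellas = np.array([30, 5, 2, 25, 40, 3, 10])
sunscreen = np.array([2, 35, 40, 5, 1, 38, 20])
print(np.corrcoef(umbrellas, sunscreen)[0, 1])   # close to -1
```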

Li's contribution was to model default correlation in mortgage pools using available market data for credit default swaps instead. CDS basically provide insurance against defaults and are traded much more frequently than the bonds they insure, giving quants more historical data to feed their models. Moreover (and that turned out to be a problem, but it motivated the model's huge popularity at the time), Li obtained "one clean, simple, all-sufficient figure that summed up everything": the correlation number in his copula model.

Copula is one of those fashionable buzzwords (Value-at-Risk is another one), which almost no one in the financial world had heard of a few years ago, and which have now become omnipresent. Copulas are used to transform general multivariate distributions into related multivariate uniform distributions, in order to study the dependence of the modified (uniform) random variables. This separates the dependence structure from the marginal distributions, which is what makes the analysis simpler. There are several families of copulas, used to model different types of dependencies, but the Gaussian one is by far the most popular in the financial industry.
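In code, the recipe is remarkably short. The sketch below (illustrative parameters, with exponential marginals chosen purely for the example) draws correlated normals, pushes them through the normal CDF to get dependent uniforms, and then maps those through arbitrary inverse CDFs:

```python
# The Gaussian copula recipe: correlated normals -> dependent uniforms
# -> any marginals you like. All parameters here are illustrative.
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(1)
rho = 0.6                                  # the single correlation parameter
cov = [[1.0, rho], [rho, 1.0]]

normals = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
uniforms = norm.cdf(normals)               # marginally Uniform(0,1), still dependent
x = expon(scale=2.0).ppf(uniforms[:, 0])   # impose an Exponential(mean 2) marginal
y = expon(scale=5.0).ppf(uniforms[:, 1])   # impose an Exponential(mean 5) marginal
print("correlation of the transformed variables:", np.corrcoef(x, y)[0, 1])
```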

Li's model allowed derivatives to be rated simply on that one correlation number. An article in The Economist ("In Plato's Cave -- Mathematical models are a powerful way of predicting financial markets. But they are fallible", January 22, 2009) states: "the [ratings] agencies' models were even less sophisticated than the issuers'," for instance not distinguishing between a BBB-rated CDO [collateralized debt obligation] and a BBB-rated corporate bond, despite their different risk profiles. The issuers were even able to "build securities with any risk profile they chose, including those made up from lower-quality ingredients that would nevertheless win AAA ratings" because they knew (thanks to third-party companies) the models the rating agencies used to rate financial instruments.

From Wired: "Just about anything could be bundled and turned into a triple-A bond—corporate bonds, bank loans, mortgage-backed securities, whatever you liked," simply by creating a derivative with the "right" correlation number. The deceptive simplicity of the model attracted quants and, more dangerously, their non-quant bosses like moths to a flame, with the results we see today.

A key issue, emphasized in the Wired article, was that the copula model assumed correlation to be constant, although in practice it varies with time and is very sensitive to small changes in the inputs. Another problem was that the historical data for the credit default swaps (CDS) covered a very narrow range of market conditions - the period during which CDS were traded witnessed soaring house prices, so models only knew a world where there was a real-estate bubble. A similar comment was made in The Economist: "there was no guarantee that the future would be like the past, if only because the American housing market had never before been buoyed up by a frenzy of CDOs."
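A small synthetic experiment (my own toy data, not CDS prices) shows why constant correlation is such a strong assumption: estimate correlation on rolling windows of a series whose true dependence shifts midway, and the "single number" drifts all over the place.

```python
# Rolling-window correlation estimates on synthetic data whose true
# correlation jumps from 0.2 to 0.8 halfway through: no single constant
# "correlation number" describes both regimes.
import numpy as np

rng = np.random.default_rng(42)

def correlated_pair(n, rho):
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    return z1, rho * z1 + np.sqrt(1 - rho**2) * z2

x_calm, y_calm = correlated_pair(500, 0.2)       # calm regime
x_stress, y_stress = correlated_pair(500, 0.8)   # stressed regime
x = np.concatenate([x_calm, x_stress])
y = np.concatenate([y_calm, y_stress])

window = 100
for start in range(0, len(x) - window + 1, 150):
    est = np.corrcoef(x[start:start+window], y[start:start+window])[0, 1]
    print(f"window starting at t={start:4d}: estimated rho = {est:+.2f}")
```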

Interestingly, Li had cautioned against putting too much trust in his model as early as 2005 -- see "How a Formula Ignited Market That Burned Some Big Investors", by Mark Whitehouse, Wall Street Journal, September 12, 2005; the article appears eerily prescient in hindsight. Unfortunately, Li was ignored by the Wall Street crowds eager to apply his formula no matter what, and make money off it. (Investors had been reluctant to buy financial instruments when they did not have a good understanding of risk. Li's formula solved that problem and created a very lucrative industry.) Li recently returned to China and, according to a statement issued by the company where he now works, is "no longer doing the kind of work he did in his previous job and, therefore, would not be speaking to the media".

But if he follows the current developments, Li must feel like a financial Frankenstein: his creature overpowered him and destroyed his world. According to Wikipedia.org, the Frankenstein book, first published in 1818, was supposed to be a "warning against the 'over-reaching' of modern man and the Industrial Revolution, alluded to in the novel's subtitle, The Modern Prometheus." Sound familiar?