
January 2008

Internet and the business of news

The January/February 2008 issue of Technology Review has a fascinating article on the networks' view of news ("'You don't understand our audience' - what I learned about network television at Dateline NBC", by John Hockenberry). The author is particularly interested in "explor[ing] how the Internet might create new opportunities for storytelling, new audiences, and exciting new mechanisms for the creation of journalism," and contrasts the networks' obsession with audience size ("[Content] exists to attract passive viewers who will sit still for advertisements.") with the users' desire for smaller-size communities. Maybe the Internet will fulfill the promises television didn't keep: "The United States is arguably more isolated and less educated about the world than it was a half-century ago," despite "the ability to transmit pictures, voices, and stories from around the world to living rooms in the U.S. heartland."

The anecdote about the network decision to resume prime-time scheduling after 9/11 because one could not "sell ads around pictures of Ground Zero" was quite sobering, and the story about how NBC wanted to cash in on all the emotion surrounding firefighters' deaths at Ground Zero by creating a show similar to Cops at a firehouse made me shake my head, but I will let you read the paragraph on entertainment driving news coverage by yourselves - no excerpt would do it justice. (It starts at the bottom of p.68, for those of you who have the magazine; for online readers, the first sentence of that paragraph is "Sometimes entertainment actually drove selection of news stories.") How episodes of Law & Order and American Dreams could affect Dateline programs, for the sole reason that the latter was used as a lead-in to the former, is just sickening. And if you are not disgusted enough, the following pages will do the trick (the network's reaction to footage of prison guards using deadly force on a mentally ill prisoner, GE [parent of NBC] blocking Dateline's attempts to interview relatives of the most wanted terrorist in Saudi Arabia because it did business with them).

But can we really "use technology to help create a nation of engaged citizens"? Local newspapers (see for instance Allentown's The Morning Call) seem to have etched partial (mis)information into their online business model: writing articles that tell readers little about what's going on guarantees that many will come back to the site later to see if things got sorted out - a boon for advertisers. The game is stacked against concise and relevant reporting, because the goal is, or certainly seems to be, to have banner ads viewed as many times as possible so as to extract high rates from advertisers. Many posts on my local newspaper's website are nothing more than transcriptions of scanner reports and citations. Really, police departments in the county should just get together and hire someone to take notes and post that stuff online - at least the money they'd get from companies to advertise on the site would go to police training or equipment or some kind of valuable purpose. But don't tell me this is journalism.

I enjoyed Hockenberry's example of the entrepreneur Charles Ferguson, who spent $2 million of his own money to make a documentary (No End in Sight) about Iraq, which gives a dispassionate account of "how the U.S. military missed the growing insurgency"; I also liked how NBC's David Bloom was able to file live stories from his "Bloom-mobile" thanks to advances in technology, and appreciated the emergence of soldier-bloggers in Iraq who were able to create and distribute their own content. Technology is certainly bringing about change. But what these examples have in common is their location: far away from here, and you don't get there unless someone pays for the trip; you also don't get your voice heard unless you provide the users on YouTube with a compelling reason to watch your video rather than someone else's. Fires don't start without a spark; viral marketing can only do so much.

Iraq is attracting a lot of attention right now, but what about other Asian countries? Where did the "technological insurgency", to use Hockenberry's expression, leave them? For now, grass-roots efforts to take advantage of technology seem to drown out the voices of people who have a point to make with those of people who talk even when they have nothing to say. The most viewed videos on YouTube aren't breaking-news stories; instead you find a lot of clips with presidential contenders and video games - nothing that would qualify as an eye-opener. Maybe we will witness the emergence of a not-for-profit and philanthropy-driven model for the news business, with billionaires paying their own teams of journalists in the interest of the greater good and fair reporting. After all, few people expected Bill Gates to give so much money to charity when he was still running Microsoft, so we can still hope this will happen at some point, and that it won't turn into CNN. In the meantime, my TV is stuck on PBS.

Some Shameless Self-Promotion

The course evaluations for the undergraduate course I taught this Fall are in, and it turns out most students feel it wasn't a complete waste of their time. With ratings like these you wouldn't guess I threatened them with a D- for the whole course if they failed to hand in an assignment! You wouldn't guess either that I forced them to use AMPL rather than Excel and put them through sums with three indices and variables all over the place and piecewise linear costs that are not convex and all kinds of horrible assumptions they'd rather forget (the production line can only manufacture one type of product every month and there's a switch-over cost when you switch from one to the other...). The course wasn't easy, and the students did a really good job. That was a tough semester for me so it means a lot that they enjoyed the course.
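For readers curious about what such assignments look like, here is a deliberately tiny sketch in Python rather than AMPL - a hypothetical two-product, four-month instance with made-up demands and costs, small enough to solve by brute force - of a production line that can only make one product per month and pays a switch-over cost whenever it changes products:

```python
from itertools import product as cartesian

# Hypothetical toy instance: two products, four months, made-up numbers.
products = ("A", "B")
months = 4
demand = {"A": [10, 0, 15, 0], "B": [0, 12, 0, 8]}
unit_cost = {"A": 2.0, "B": 3.0}   # production cost per unit
switch_cost = 20.0                 # paid whenever the line changes product
capacity = 30                      # units the line can make per month

def plan_cost(schedule):
    """Cost of a month-by-month product schedule, or None if infeasible."""
    cost, inventory = 0.0, {p: 0 for p in products}
    for t, p in enumerate(schedule):
        if t > 0 and schedule[t - 1] != p:
            cost += switch_cost
        # Produce just enough of p to cover its demand until the next
        # month in which p is scheduled (no holding cost in this toy).
        horizon = next((s for s in range(t + 1, months) if schedule[s] == p),
                       months)
        qty = max(0, sum(demand[p][t:horizon]) - inventory[p])
        if qty > capacity:
            return None
        inventory[p] += qty
        cost += unit_cost[p] * qty
        for q in products:          # ship this month's demand
            inventory[q] -= demand[q][t]
            if inventory[q] < 0:    # demand missed on time: infeasible
                return None
    return cost

feasible = [s for s in cartesian(products, repeat=months)
            if plan_cost(s) is not None]
best = min(feasible, key=plan_cost)
print(best, plan_cost(best))
```

A real assignment would optimize production quantities jointly with the schedule (that is where the integer-programming formulation earns its keep); brute force only works because this instance is minuscule.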

I designed the course so that students would learn to model real-life problems from a wide range of applications, which helps to keep seniors interested (and motivated when the going gets tough). That was my choice when I was first assigned to teach the course my first semester at Lehigh - the only rule I was given was that this had to be a course on deterministic operations research, because we offer another elective on stochastic models. After the first year, when what I taught - based on some of my MIT engineering lectures - was too hard for half the students (although the other half succeeded brilliantly, which speaks volumes about the quality of the student population at Lehigh), I fell back on the quantitative models course that is part of the MBA core curriculum at MIT (I was once a Teaching Assistant for that) and expanded on the deterministic section to include much (much) more modeling and non-Excel software, because if you remember one thing about my course, it is that Excel Solver is not to be trusted as soon as you step out of the linear optimization framework.

Sadly, my ratings can only go down from there... Maybe I should just retire now. My department chair pointed out that often faculty grow tired of a course (I've been teaching the course every year since I joined, so this was my fourth time) and move on to something else, but I am not tired of it yet, so I signed up to teach it again next year and we'll see when I reach my saturation point. I do plan on making a few adjustments based on the individual student evaluations I received; in particular, I plan to (1) include this year's assignments and solutions in next year's course packet so that the students will have more exercises with solutions to help them learn the material, since I currently don't require a textbook (there's nothing out there that covers enough of my material, and when I required a textbook the students in previous years felt that was a waste of their money), and (2) assign shorter homeworks: two exercises every week, instead of four exercises every other week - the last exercises tend to be harder, and I initially wanted to avoid having some assignments much easier than others, but it seems that some students get overwhelmed, especially with the computer-based parts, and since I force them to hand in every homework I might as well give them every chance to learn the material correctly. Anyway, those are my thoughts at this point.

And to the (now former) IE 316 students who might find their way to this page: it was a pleasure teaching you, and I wish you all the best of luck in your future endeavors.

Technology and Education

The Economist recently held a series of online debates on its website, the first one of which, now closed, was on technology in education. (The second, on university recruiting, is also closed, but the last one, on the value of social networking, is still ongoing.) 56% of voters opposed the motion that "the continuing introduction of new technologies and new media adds little to the quality of most education." The end result, although favorable to technology, doesn't sound like a ringing endorsement, and the vote might also reflect some wishful thinking on voters' part (technology has to be good because it is technology, and at some point it will be useful in education, so let's vote for it now) rather than a hard look at the facts.

Technology does help a lot on the periphery of the learning environment: I post course materials, assignments and grades online, and if I want to clarify a question in the homework I don't have to wait for the next lecture to do it - I can put an announcement on BlackBoard. In the classroom, however, staring at slides can quickly become boring for the students, and technology then brings an advantage to the teacher rather than his audience because it helps him avoid preparing the lecture: you can always read last year's slides in a very authoritative manner even if you don't remember what you're talking about. (I gave up on using slides a while back - chalkboard for me and paper for students; welcome to the twenty-first century.)

One issue with technology, especially when it comes to online course archives such as OpenCourseWare, is that many students don't feel any pressure to pay attention when it's just them and the computer, the few who do have limited ways to check whether they've understood the material, and students tend to put more effort in courses where they will be held accountable by a real person. But education isn't all about teacher-student interactions, and technology has played an important role in making more resources available. Students taking a course in finance, for instance, and eager to learn more on the topic can now browse and pick a book that suits their interests based on users' reviews; in the past, they had to rely on their university library, which often does not carry popular books aimed at a large audience, or their local bookstore, which cannot stock many books in each category due to space constraints. Others use the Web to gather information - Wikipedia has become popular for quick definitions of math concepts. Sometimes technology can be used against education too, when students text-message under the table or check the latest updates on Facebook from their laptop, although laptop problems mostly arise with MBA students as opposed to undergraduates.

At the same time, it's well-known by now that the predicted market size for PCs was a number I could count on my fingers, because no one believed consumers would have any interest in that kind of machine. The usefulness of technology is particularly tricky to estimate because the limiting factor is on the teacher's side. Granted, instant-messaging is not going to replace face time during office hours (math equations and IM don't go well together). But assignments are handed in on paper because that's the way it's always been done, not because there is something intrinsically better about it - especially in industrial engineering, where students write down their model by hand and print the output sheets. Then they complain because they lose points for things that they had incorporated in their model on the computer but forgot to write down on paper. Emailing a spreadsheet puts more work on teachers or graders because we have to learn the conventions used by the students (what's cell B6 again?) and it is harder to write remarks and highlight mistakes. But maybe down the road some assignments, in some courses, will be podcasts or video clips (oops, they're called vodcasts now) where the student explains what he's done in addition to presenting the end result. When students who grew up with their iPods and YouTube become teachers, it seems likely that they will use these tools in their work life too.

The first generation of these teachers is already at work, and for once high school might well be teaching a lesson or two to higher education, since these new teachers are for now predominantly in the secondary system (if only because they haven't had the time to get a PhD and start teaching in college yet - iPod was launched in October 2001, and YouTube in February 2005). Some teachers have begun to use blogs as course webpages and link to course wikis so that the students can contribute too, e.g., take exams or post assignments. Of course that one course - if you follow the links above - is on web design to begin with, but borrows topics from history and literature, and TeacherTube and VoiceThread provide many tech-driven resources on non-tech courses. (I learnt about those sites through Angela Maiers' blog, and in turn Dave Sherman's.) One Dan Meyer even records his lectures when he has to be replaced by a temp, in quite an amazing way. (GarageBand? iBook? Final Cut Pro? But how is the media going to keep portraying high school teachers as a bunch of unionized oldies opposing merit pay while waiting for retirement with people like that?) The Web is becoming a two-way, rather than one-way, tool.

Of course not everyone has a webcam to do vodcasts with, but in the late 1990s not everyone had a cell phone either - you can now date a movie based on whether the main character drops by a pay phone when he has to make a call. Many students have iPods just six years after Apple's product launch, and one of these days teachers putting up with hours of commute on the California freeways will start requiring students to submit their assignments as podcasts they can listen to while sitting in traffic jams. Given YouTube's popularity, in a few years one could also expect webcams to become ubiquitous - and once technology supports rather than replaces face-to-face interaction, it'll be time for Web 3.0, whatever that is.

On Models and Quants

I recently finished reading My Life as a Quant, by Emanuel Derman, and while the finance-oriented crowds will balk at the chapters on graduate school in physics, beginnings in academia, and the years spent in industry at Bell Labs, I found the book made a lot of valid points and gave a good picture of why so many scientists found, and are still finding, their way to Wall Street. It is also one of the few books I've read, if not the only one, where the author tries to provide an honest description of the jobs he's held; too often people yield to the temptation of posing as lone saviors fighting against a bunch of ignorant cavemen (you know who I'm thinking about, which does not mean I do not find the books of said lone savior useful). Derman is very open about the mistakes he's made, such as not giving enough credit to a collaborator and quitting Goldman Sachs for an ill-advised stint at Salomon Brothers, and he does not hesitate to tell humorous anecdotes that do not put him in the best light, such as the day he refused to change seats on a long-haul plane to let a kid sit next to his father because he was tired of being stepped on at work, and then realized the father was, ahem, Robert Merton of Black and Scholes fame, en route to the same conference.

The book makes several important contributions. First, it positions quants' jobs within a financial company; yes, traders get more respect, and quants devise - and code - risk models to make traders' lives easier (as opposed to working in a vacuum), although the different cultures sometimes make it difficult for the two worlds to communicate. Second, it provides valuable lessons on the impact of research in industry, and the role of models in finance. I am referring to the chapter ("Laughter in the Dark") on his work on implied volatilities, which represented an important theoretical breakthrough but was only adopted by traders at a glacial pace: "When traders have no model at all, it's easy to get them to use the very first model available. Once they have something they rely on, it's much harder to get them to accept an improvement. So they simply stuck to using the Black-Scholes single-volatility framework for valuing exotics, even though it produced a flat volatility surface. To compensate, they put all their inventive energy and intuition into picking the "right" single volatility to use in the wrong model. [...] I was therefore very pleased one day to discover that there were certain exotic options [...for which] there was simply no appropriate single volatility that gave the correct answer from the wrong model." (p.244-5)

As Derman points out throughout the book, the difficulty of modelling in finance is that a model is never going to be perfectly accurate; to get answers, you often need to make a lot of simplifying assumptions on markets' behavior, stock price distributions, customers' preferences, and the like. Often there isn't a right model. I am reminded of a quote by Nassim Taleb in The Black Swan (see! I do agree with him, once I cut through all the attitude): "Have you ever wondered why so many of these straight-A students end up going nowhere in life while someone who lagged behind is now getting the shekels, buying the diamonds, and getting his phone calls returned? [...] There is this sterile and obscurantist quality that is often associated with classroom knowledge that may get in the way of understanding what's going on in real life." (p.125) Without going that far when it comes to judging classroom knowledge, I would venture that the reason why many superb students falter once they have graduated is that they still expect work to come at them in neat little packages, as well-defined questions they have all the information to answer. They hope there is a right model, which they will analyze to come up with the one and only correct solution.

I struggled with the same feeling my first year in graduate school (the feeling you could prove anything if only you knew what it was), and many graduate students, even later in their career, still hope their adviser is going to tell them what to do, so that they can go on and do it and then get their PhD. I've noticed a similar attitude in the year-long industry project I am supervising for the financial engineering program: many of the master's students, I feel, would very much prefer to have a structured project with a step-by-step outline so that their roles are well-defined and their contributions outlined from the start. It's an excellent project with a clear end goal provided by the executives and limited guidance regarding the intermediary milestones; hopefully by the end of the year students will be more comfortable evolving in that kind of environment. An issue is that the whole project seems daunting when one focuses on the final deliverable rather than putting building blocks together, but it is a great learning experience for the students - they definitely have the skills to pull the whole thing off - and one that gives them a taste of real-life assignments. They remind me of myself at the same age (tell me what to prove!). It does make me wonder about the college students who didn't go to graduate school - didn't put up with a bit of uncertainty and exploration through thesis or project work - and instead joined the workforce right away. I hope they still get to shine.

Log-Robust Portfolio Management

For the past six months or so I have been working with my student Ban Kawas on a "log-robust" optimization framework for portfolio management. By now many operations researchers (the people who devise quantitative models to aid decision making in business) are familiar with the ideas underlying robust optimization. Traditional techniques assume precise knowledge of all the inputs you need to compute the optimal solution to your problem, but in practice cost parameters and distributions of random variables such as demand or stock price are difficult to estimate accurately. Robust optimization addresses this issue by modeling random variables as uncertain parameters belonging to range forecasts, and by optimizing over the worst case within a set of "reasonable values" for those parameters. (If you have 100 independent random variables that can each vary between -1 and 1, the uncertainty set will allow a few of them to reach their worst-case value, say -1, but not all 100 of them at once: it is more likely that some will be higher than expected and some will be lower, so the uncertainty tends to cancel itself out.)
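The "not all 100 at once" idea is often formalized as a budget of uncertainty (in the spirit of Bertsimas and Sim): each scaled deviation lies in [-1, 1], but their absolute values cannot sum to more than some budget. A minimal numpy sketch, with hypothetical exposures of my own choosing, shows how much pessimism the budget removes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 100, 10                 # 100 uncertain parameters, budget of 10
x = rng.uniform(0.5, 1.5, n)       # hypothetical (positive) exposures

# Each scaled deviation z_i lies in [-1, 1]; the budget constraint
# sum(|z_i|) <= gamma caps how many can sit at their extreme at once.
# The adversary minimizing sum(x_i * z_i) therefore puts z_i = -1 on
# the gamma largest exposures and leaves the rest at zero.
budget_worst = -np.sort(x)[::-1][:gamma].sum()
naive_worst = -x.sum()             # all 100 at their worst: overly pessimistic

print(f"budgeted worst case: {budget_worst:.2f}")
print(f"all-at-worst case:   {naive_worst:.2f}")
```

The budgeted worst case is far less conservative than assuming every parameter misbehaves simultaneously, which is precisely why solutions of robust models are not hopelessly timid.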

While robust portfolio management is not new, robust optimization techniques have so far been applied by building range forecasts for the random returns themselves, or for key parameters such as their mean and variance. Our idea was to apply range forecasts to the true drivers of uncertainty rather than to the end random variables. An abundant literature supports the Lognormal model of stock prices, yet empirical evidence suggests that the distribution of the continuously compounded rates of return has fatter tails than the Gaussian distribution allows (the Logistic distribution seems to offer a closer fit, but does not lead to formulas as elegant as the Black-Scholes model's). In other words: the manager does not really know the distribution of the continuously compounded rates of return, but these random variables seem to obey the same time-independent distribution over time; the manager is risk-averse - and, after the events of last summer, certainly allergic to fat tails - and this all screams: robust optimization! Which is exactly what we did. Since the robust approach replaces the random variables that obeyed Normal distributions in the Lognormal model, we call our method log-robust portfolio management - an easy way to distinguish our framework from other approaches in the literature.
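The fat-tail observation is easy to reproduce numerically. The sketch below (illustrative only, not fitted to any market data) draws Gaussian and Logistic samples scaled to the same standard deviation and compares the probability mass beyond three standard deviations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
normal = rng.standard_normal(n)
# A Logistic with scale s has standard deviation s * pi / sqrt(3);
# pick s so both samples have unit standard deviation.
logistic = rng.logistic(loc=0.0, scale=np.sqrt(3) / np.pi, size=n)

for name, sample in [("normal", normal), ("logistic", logistic)]:
    tail = np.mean(np.abs(sample) > 3)   # mass beyond 3 standard deviations
    print(f"{name:8s} P(|r| > 3 sd) ~ {tail:.5f}")
```

Even with matched variances, the Logistic sample puts roughly three times as much mass past three standard deviations - exactly the kind of tail event a Gaussian model dismisses as nearly impossible.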

For the quants out there, an exciting by-product of our research efforts, where we focused on maximizing the worst-case value of a static portfolio without short sales, was that we obtained a linear programming formulation of the robust problem; linear programs are particularly appealing in finance due to the number of assets involved in portfolio management, i.e., the large scale of many problems. (Efficient algorithms exist that solve large-scale linear programming problems very fast; large-scale nonlinear problems typically cannot be solved nearly as quickly.) It was not at all obvious that we would obtain a linear problem, because the uncertainty affects the arguments of exponential functions even though the deterministic, uncertainty-free formulation is linear; still, we were able to derive a closed-form solution for the inner worst-case problem and get rid of the nonlinearities. Again, this matters because we can solve much bigger problems when the underlying structure is linear. We also offer some insights into the optimal allocation and the worst-case values taken by the uncertainty.
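Our actual formulation is in the paper; as a drastically simplified stand-in (plain box uncertainty on each log-return, a per-asset position cap, all numbers hypothetical), the sketch below shows the kind of linear problem that results - simple enough here that the LP optimum can be read off greedily:

```python
import numpy as np

# Hypothetical assets: nominal log-return mu and half-width s of its
# range forecast; the worst case within the box is mu - s.
mu = np.array([0.12, 0.10, 0.08, 0.05])
s = np.array([0.10, 0.05, 0.02, 0.01])
cap = 0.40                        # position limit per asset

worst_growth = np.exp(mu - s)     # worst-case gross return of each asset
# max_x  sum_i worst_growth_i * x_i   s.t.  sum(x) = 1,  0 <= x_i <= cap
# is a linear program; with this simple structure its optimum just fills
# the cap on assets in decreasing order of worst-case return.
order = np.argsort(worst_growth)[::-1]
x = np.zeros_like(mu)
remaining = 1.0
for i in order:
    x[i] = min(cap, remaining)
    remaining -= x[i]

print("allocation:", x, " worst-case wealth:", worst_growth @ x)
```

Note how this naive box model only diversifies because of the external position cap - one reason more refined uncertainty sets (and the closed-form worst case they admit) matter.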

Last but not least, we compared our results with traditional robust portfolio management (I can't quite believe I am using the word "traditional" right next to "robust", but what can I say, the concepts really have gone mainstream in the ten years since robust optimization was pioneered by El-Ghaoui and Lebret as well as Ben-Tal and Nemirovski). There is a toy numerical example out there that suggests that traditional robust portfolio management achieves high degrees of diversification, but I have often wondered about the numerical values involved in the confidence intervals - the numbers changed very slightly from stock to stock to achieve a large degree of overlap across the intervals, and while it has always been very clear that those numbers were completely fictitious, this raised the question: does the traditional robust approach achieve diversification in practice? Well, on our example (which uses real stock price data for 50 well-known representative companies), it doesn't, but the log-robust framework does. Obviously there is still a lot of work to do, but we feel the results are promising because our approach reconciles the observations of finance professionals regarding stock price dynamics with the need for a methodology that protects portfolios against downside risk and fat tails.

The paper is available for download here.

Book Rage and Opera Fights

To follow up on a recent post of mine, here are two additional examples of revenue management in non-traditional areas: the book industry and the performing arts. The book industry in Canada has come under heavy fire because US and Canadian retail prices, printed on the jackets, do not come anywhere close to reflecting the current exchange rate, leaving Canadians with a strong feeling of being cheated. The issue, I believe, is that booksellers do not have a say in prices and must accept distributors' terms, while the distributors are based in the US and impose the biggest premium they can get away with on their Canadian neighbors as a dimension of their revenue management strategy. In this day and age, you would have thought they would have realized earlier this was bound to backfire, or in the words of the New York Times staff writer, "book rage has now been added to the list of neurotic human behaviors." This really should be turned into a case study of management inertia. Managers claim to be well aware that Canadians will shop across the border if they don't decrease prices, but most companies have only taken action around Christmas time, if at all, although anger has been simmering for a while (readers' comments are as insightful as the post itself). Of course there is a limit on the value of goods you can buy in the States and bring back to Canada, but $400 [the current amount, I believe] buys you a lot of books.

Toronto's Globe and Mail had a long and thoughtful article on the issue in November, which you can read here. While a few retailers have been implementing discounts, the most popular adjustment has been to sell US-originated books at the US price, but that will only create more trouble if the Canadian dollar returns to its old ways ("The Canadian dollar last topped the greenback more than 30 years ago and only five years ago it was valued at 62 cents (U.S.)"). Many Internet users in online forums have cautioned against buying items online and having them shipped using fast shipping options (as opposed to regular mail via USPS), because the carriers take advantage of the cross-border activities to impose customs clearance surcharges that can reach $50. The advent of technology could turn this into a dynamic pricing opportunity for retailers many years from now: if the US price is set, the exchange rate fluctuates, and people have some latitude regarding when they want to buy the book, you could imagine a system where booksellers periodically decide which price to charge in Canadian dollars. That could even become a feature of frequent buyer programs at the big chains: become a member and get to pay the Canadian price. More likely, the whole drama should prompt publishers to postpone determining and printing prices on the cover until they know whether the book is US-bound or Canada-bound, as books are printed many months ahead of release based on advance orders by booksellers.

Another innovative revenue management technique, which we owe to leading American opera houses, is the practice of creating a whole new market segment from scratch by making performances accessible to a much wider public at a much lower price through broadcasts in movie theaters. I had mentioned the practice, pioneered by the Metropolitan Opera in New York, in an October post on Internet2 technology; in its December 19, 2007 issue the New York Times described the competition between the San Francisco Opera and the Met to attract new customers. The numbers speak for themselves: a premium ticket (second- or third-best category) at the Met sells for about $200, high-definition tickets for about $20, but capacity at the Metropolitan Opera House is fixed at 3,800 while "the latest broadcast [at the Met, of Romeo and Juliette] drew an audience of 97,000 worldwide." In other words, 25 times more. The article adds: "With gross weekend sales of $1.65 million, the broadcast would have been No. 11 at the movie box office."

San Francisco Opera opted for a digital format that supposedly offers higher quality than the method chosen by the Met, and does not broadcast live; in his comments to the New York Times journalist, the Met's general manager thinks this makes a big difference. (The question of whether people will get out of their houses for the San Francisco Opera rather than the world-famous Met is not addressed in the article, beyond the fact that head-on competition will be minimized by broadcasting the operas at different times of the year with little overlap: March to November for San Francisco, December to May for the Met. Yet, there are only so many operas people can be expected to watch in any given year.) A live broadcast may well add to the viewer's experience, but by definition it can only happen once, and indeed each of the eight operas that were selected by the Met for this year's HD schedule will only be shown once in the movies: not ideal for shows that have a history of selling out in the major markets, and for people who just so happen to have other things planned on that very day. Showing each opera multiple times, which is only realistic from a cost perspective when you record the performance and accept that the shows will not be live, might prove a better option, especially if broadcasts sell out early (the Met broadcasts do). Ironically, the opera house that has the best chance of commoditizing its shows, i.e., attracting an audience large enough for multiple re-runs of its productions in movie theaters, is the one that decided against it because the digital-format technology was not available when the project got started. There is a while to go before opera becomes the new movie.

Truth in Numbers, Sort of

I recently finished reading Nassim Taleb's The Black Swan, and while the author comes across as remarkably self-infatuated after the success of his first book Fooled by Randomness, he does make some valuable points. A few of those have been present in the psychology literature for a while now - for instance, the fact that people tend to judge more specific events as more likely because they can visualize them more easily (in the public consciousness, an earthquake in California causing massive flooding has higher probability than a massive flood somewhere in the United States, simply because people find earthquakes in California likely, but in truth the first event is less likely than the second, because the first requires earthquake and California and flood and the second only flood - p.76). And of course, anyone who has ever done math proofs knows that to prove something is not true, you just need to find one counterexample (you may have seen one million white swans, but to prove that not all swans are white you just need a single black one). Taleb also makes good points on cause and consequence - it's easy to say this or that policy caused a company's success if the companies that have tried the same policy but later declared bankruptcy are no longer there to tell the tale; the issue of biased samples occurs not only in gambling (p.109) but in hedge funds, or in professions with high attrition rates where partners earn very high salaries after ten or fifteen years of experience, but few of the first-year hires are still employed by the firm a decade after their start date: only the most successful employees show up in the sample.
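The precision fallacy boils down to the rule P(earthquake and flood) <= P(flood); a quick Monte Carlo check with made-up probabilities (and, for simplicity, independent events):

```python
import random

random.seed(42)
trials = 100_000
flood_count = joint_count = 0
for _ in range(trials):
    quake = random.random() < 0.30   # hypothetical P(earthquake) = 0.30
    flood = random.random() < 0.10   # hypothetical P(flood) = 0.10
    flood_count += flood
    joint_count += quake and flood   # both must happen

print(f"P(flood)            ~ {flood_count / trials:.3f}")
print(f"P(quake and flood)  ~ {joint_count / trials:.3f}")
```

The joint event can never be more frequent than the flood alone, no matter what probabilities you plug in - adding detail can only remove outcomes, even if the extra detail makes the story easier to picture.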

The book could have been much shorter, though; the author rambles a lot about his life, his knowledge, and the stupidity of the rest of the world. In particular, while it is important to realize that random variables in real life don't conveniently obey a Gaussian distribution just because that's the only one MBA students are taught (I know: at MIT I was a Teaching Assistant for the core MBA course on quantitative decision models), and it cannot hurt to know that earthquake sizes and income distributions obey power laws, the book provides abundant criticism of traditional methods but remains extremely vague about solutions, even in the last chapter (three pages!) that is supposed to help people implement these insights. I believe many finance professionals eager not to repeat last summer's mishaps will find it disappointing, shallow and weak, as Taleb's few good ideas have no relation whatsoever to the industry employing most of the book's likely readers. Furthermore, although Gaussian random variables are commonly used in finance, it is doubtful whether any specialist in earthquakes, income distribution and the like would use a Gaussian model to begin with, because such distributions are symmetric. (If you wanted a symmetric income distribution, the fact that Bill Gates earns millions would require some people to earn negative amounts of money, that is, to hand the tax man more money than they earn. While credit card companies would no doubt be thrilled, this doesn't sound too realistic.) So it is nice of Taleb to remind everybody that income distribution is heavily skewed, but that can hardly be considered breaking news.
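To see why the choice of distribution matters so much in the tails, here is a small Python sketch comparing a standard Gaussian to a Pareto power law; the Pareto parameters (alpha = 2, x_min = 1) are arbitrary assumptions chosen for the example:

```python
import math

def gaussian_tail(x):
    """P(X > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto distribution with tail exponent alpha."""
    return (x_min / x) ** alpha if x > x_min else 1.0

# A "10-sigma" event is essentially impossible under the Gaussian model
# but merely rare under the power law:
print(gaussian_tail(10.0))   # on the order of 1e-24
print(pareto_tail(10.0))     # 0.01, i.e., one observation in a hundred
```

The Gaussian model doesn't just underestimate extreme events; it rules them out for all practical purposes, which is exactly the trap Taleb warns about.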

Here are a couple of ideas I wish a book about the difficulty of accurate forecasting had expanded upon.

  • Practitioners should keep in mind why they need a forecast. Specifically, the forecast is usually fed to an optimization problem to, say, determine the optimal order or the optimal portfolio allocation. While standard forecasting penalizes undershooting and overshooting the actual data point equally (because the forecast is produced by minimizing a symmetric measure such as mean squared error), the underlying problem typically penalizes undershooting (forecasting less than the actual demand leads to backorders and dissatisfied customers) a lot more than overshooting (excess inventory merely sits on the shelves). Practitioners should know where their biggest risks are and work on forecasts that depend on the optimization model and take into account problem-specific downside risk. You really don't care about predicting the next data point with 99% accuracy if the missing 1% means you're incurring hundreds or thousands of dollars in penalties for unmet orders and backlogged demand. In other words, forecasting shouldn't be viewed as a stand-alone discipline but integrated with the problems at hand.
  • Long-term forecasting is tricky and always has been; Yossi Sheffi in The Resilient Enterprise and David Simchi-Levi and his co-authors in Managing the Supply Chain (two must-read books for supply chain professionals) pointed that out long before Taleb had even begun the first draft of The Black Swan. The key application of long-term forecasting - in the sense that long-term forecasts are used now to make decisions that will be difficult to change later - is politics, where proponents and opponents of a tax bill or a measure to reform Social Security cite point forecasts about growth or health costs thirty years from now using assumptions they have not bothered to define. What is critical there is for their constituents to recognize the impossibility of producing a point forecast so far in the future, and to start requesting range forecasts (more on that in the next bullet point) and scenario-dependent predictions to get a more realistic picture of the possible outcomes. Just think about how fast the budget surplus turned into a gaping deficit once new tax laws were enacted. Much of the current mortgage crisis is due to overly optimistic predictions of house price growth and interest rates. The general public has to understand that decision-makers have a vested interest in producing numbers that give them an advantage (help them get their tax bill passed or win business from unsuspecting homeowners); many people are so scared of math that they don't even realize numbers can be fudged. Furthermore, what matters a lot more than the ability to predict random variables accurately (which is extremely hard if not impossible) is the ability to respond quickly to unexpected changes. Building flexibility into supply chains, for instance, is what keeps suppliers in business when a manufacturer faces a surge in demand; if the supplier cannot meet the additional orders, the manufacturer will turn to a more reliable partner in the future.
  • There has been a lot of discussion of point forecasts, that is, exact numbers given as forecasts. More recently, researchers have focused on incorporating range forecasts as inputs for the problem parameters, because of the difficulty of estimating those parameters precisely. (Even Taleb recognizes that the exponent in the power law model is difficult to estimate accurately, as its correct estimation depends on the occurrence of large events which, by definition, don't occur often.) Optimization software, however, produces one single solution as "the" answer, even when several allocations are optimal. This reinforces students' belief that they should implement that very specific solution without giving it much thought. A way to correct that would be for software to use range forecasts as input, and also to produce a range of good (close-to-optimal) solutions as output. This would also allow the decision maker to retain ownership of the solution he ends up implementing (nobody likes to trust black boxes) and to incorporate factors that the optimization problem had not captured when he finalizes his strategy. In other words, optimization software should return to being a decision aid rather than a substitute for managers' brains.
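As a concrete illustration of the first bullet point, here is a minimal Python sketch of the classic newsvendor (critical-fractile) rule, which plans for a quantile of demand rather than the mean; the demand history and the 9:1 cost ratio below are hypothetical:

```python
def critical_fractile_order(demand_samples, under_cost, over_cost):
    """Order at the empirical critical-fractile quantile of demand.

    The fractile under_cost / (under_cost + over_cost) balances the cost
    of undershooting against the cost of overshooting.
    """
    fractile = under_cost / (under_cost + over_cost)
    s = sorted(demand_samples)
    index = min(int(fractile * len(s)), len(s) - 1)
    return s[index]

demand = [80, 90, 100, 100, 110, 120, 150, 200]   # hypothetical history
mean_forecast = sum(demand) / len(demand)          # 118.75

# With a 9:1 penalty for unmet demand, plan near the 90th percentile,
# well above the mean:
order = critical_fractile_order(demand, under_cost=9.0, over_cost=1.0)
```

The point is that the "best forecast" in the mean-squared-error sense and the best quantity to act on are two different numbers once the cost structure is asymmetric.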

Another important theme of Taleb's book is that managers don't understand the risks they are taking. The issue, in my opinion, stems from an over-reliance on a single number to measure risk; managers are naturally reluctant to incorporate in their analysis rare events that they have never observed and cannot even quantify, and which distort "the" number their risk management strategy is based on. But why should risk be represented by a single number to begin with? This reminds me of the discussion about the non-subadditivity of Value at Risk, which means that a company using this concept (as required by the Basel II agreements) could decrease its official measure of risk by turning its subdivisions into stand-alone businesses, a counter-intuitive move that goes against the benefits of diversification. But if managers considered 95% VaR and 99% VaR as opposed to just 99% VaR, or both 99% VaR (quick review of VaR: things don't get worse than that number 99% of the time; in other words, 99% VaR is the 1% quantile) and 99% CVaR (given that we are in the 1% of cases where things are really bad, how bad do they get on average?), it would be harder to find examples where diversification increases risk. Maybe a good strategy relies on several risk measures to begin with.
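For readers who want to see the two numbers side by side, here is a minimal Python sketch of empirical VaR and CVaR computed from a sample; the loss data is artificial, and I am using the convention that positive values are losses:

```python
def var_cvar(losses, level=0.95):
    """Empirical VaR and CVaR at the given confidence level.

    VaR is the level-quantile of the sorted losses; CVaR is the average
    of the losses at or beyond that quantile (the worst-case tail).
    """
    s = sorted(losses)
    idx = min(int(level * len(s)), len(s) - 1)
    var = s[idx]
    tail = s[idx:]                      # the worst (1 - level) of outcomes
    cvar = sum(tail) / len(tail)
    return var, cvar

losses = list(range(1, 101))            # hypothetical losses of 1 to 100
v95, c95 = var_cvar(losses, 0.95)       # VaR says 96; CVaR averages 96..100
```

Reporting both answers two different questions: VaR says how bad things get on a normal bad day, while CVaR says how bad they get once you are already in the tail, which is exactly the region where the rare, unquantified events live.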

In summary, the general public should be educated against the fallacy of forecasts, but I doubt non-technical readers will take more away from The Black Swan than the fact that Taleb finds himself much, much smarter than everybody else.

Role Reversals

Research in corporate labs - or is it called development now? - has been the topic of quite a few articles over the last year. In March, The Economist published Out of the dusty labs, which discussed the shift in industry from big-picture innovative thinking to more prosaic, bring-to-market issues. Gone are the glory days of Xerox PARC or Bell Labs; "long-term research was a luxury only a monopoly could afford. [...] Instead, researchers have become intellectual mercenaries for product teams: they are there to solve immediate needs." Apparently, "turn[ing] ideas into commercial innovations" has not proved as smooth as the sponsors of the 1980 Bayh-Dole Act, which made it possible to patent the results of federally-funded research, had hoped it would be. Maybe one day the National Science Foundation, which currently requires all principal investigators to describe not only the intellectual merits but also the broader impacts (beyond the immediate research community) of their proposals, will have them enlist the help of business majors to write marketing plans for their output.

While the distinction between research in industry and research in academia is becoming more pronounced, the sources of funding are blurring: in "Corporate Labs Disappear. Academia Steps In", the New York Times (issue dated December 16, 2007) describes how more and more universities are receiving few-strings-attached funding from industry: "Stanford has paired with Exxon Mobil in a deal worth $100 million over 10 years. The University of California, Davis, is getting $25 million from Chevron. And Intel has opened collaborative laboratories with Berkeley, the University of Washington and Carnegie Mellon. [...] Last month, BP pledged to spend $500 million over 10 years on alternative-energy research to be carried out by a new Energy Biosciences Institute at Berkeley." The article suggests this is part of an "emerging model for how corporations can tap big brains on campus without having to pay their salaries", which I guess we can call an outsourcing of industry research activities to a supplier - academia - that can provide theoretical ideas more efficiently, in the latest example of how boundaries between suppliers and buyers/retailers are blurring in the global supply chain (here, of knowledge). The move might also be related to the sizable profits realized by oil companies recently, profits they need to put to good use if they want to silence the critics and avoid seeing their revenues capped - in a sense, if universities can truly use the research results as they please ("Berkeley professors are free to publish results of BP-funded research. The university also will own the rights to any resulting intellectual property."), big business has become a philanthropist.

And, in the most interesting role reversal, and the most original article I've read on this, The Associated Press describes in "Nanotech Firms Find Room on Campus" (published by the Washington Post on December 11, 2007) how small companies can rent the laboratory facilities at better-equipped universities to avoid incurring prohibitive setup costs. "Thirteen nano-level university laboratories across the country are hiring themselves out to businesses eager to make their mark in the millennium of the minuscule [nanotechnology]. The intimidatingly named National Nanotechnology Infrastructure Network, begun in 2004, is funded in part with $14 million a year from the National Science Foundation," and "Host universities can apply the fees they receive to anything they like, including beefing up their lab equipment." This allows small businesses to level the playing field and hopefully will foster more innovation; it would be interesting to see if the trend extends to disciplines other than nanotech, although the issues get trickier when they involve software rather than hardware - academics usually purchase much cheaper licenses and the software providers will not exactly rejoice at the idea of industry practitioners using those. But in the same way that call centers got outsourced abroad, research projects might well become outsourced to academia, the end of the line in the knowledge chain. Rent-a-brain, rent-a-lab: if after all this academics are still accused of producing results irrelevant to industry, I don't know what will make them change their ways.

Examples of Revenue Management

Here are two examples of non-traditional revenue management I have noted over the last few weeks.

In the November 2007 issue of ARTNews, Eileen Kinsella describes in the article "Betting the House" a new risk-averse, profit-sharing arrangement between auction houses and sellers: "In guarantee arrangements, the auctioneer promises to pay the seller a minimum amount, usually somewhere below the low estimate, regardless of how the artwork performs at auction. If the work sells for less than the guaranteed minimum, the house must pay the difference; if it sells for more, the house shares in the excess." Sotheby's, which in August reported outstanding guarantees of $378.1 million for works "with a collective midestimate value of $400.2 million", has apparently been implementing this practice profitably for 15 years, although that might say something about the relevance of midestimates in today's overheated art market. Guarantees have become one of the many tools in the rivalry between Sotheby's and Christie's for ever more consignments, although "some art-world experts take a skeptical view" of the auction houses' innovative methods for gaining market share, which they say encourage speculation.

The second example is about the secondary market (i.e., resale market) for tickets to live performances such as concerts and football games ("If you can't beat 'em, join 'em", The Economist, December 8, 2007). The article notes that "the rise of online ticket-exchanges has expanded the market by making it much easier for sellers and buyers to find each other" and values ticket resales in America this year at $3 billion. What is new is that sports teams and performers "are now trying to get a piece of the action, by signing revenue-sharing deals with online ticket exchanges." The teams "refer ticketless fans to the exchanges", and share their profit in return. (This reminded me a bit of the callable revenue management model proposed by Guillermo Gallego of Columbia University and his collaborators a few years ago, where airline passengers were able to buy tickets at low prices on the condition that they would return the tickets - and, for instance, be reassigned to another flight departing on the same day - if there was more high-paying demand than expected; in that case of course passengers could not share the airlines' profits, but plane tickets are not as easy to trade as concert tickets to begin with. I cannot understand why airlines still don't let some of their customers pay for an itinerary with specific departure and return dates, but not times, and tell them just a few days in advance what time they should be at the airport, for destinations that are serviced several times each day. That would save everyone time and money, rather than having to put up with all the "we have a very full flight to Orlando this morning and we are offering $300 certificates to customers willing to take a later flight", as I heard recently while waiting at Newark Airport. That must have been a very full flight indeed.)
By encouraging people to take part in online exchanges, the sports teams are able to determine more precisely what consumers are willing to pay and gather valuable data that, maybe, they will use to improve their pricing in the primary market, where they get to keep all the profits.