
October 2008

Operations Research and The Police

Some time ago, my local newspaper published an article about operations research applied to an unusual domain - the Allentown police department. While the newspaper does not use the term 'operations research' (its mistake!), the new strategy is exactly that. For instance ("Allentown Police Will March to a New Beat", Morning Call, August 7, 2008): "The city will be divided into four 'police service areas,' which will be further divided into patrol areas or beats. There will be fewer beats, but increased staffing in some, to match demand" - that sounds like a resource allocation problem to me.

Also: "The department plans to classify calls based on their priority, responding to high-priority calls immediately while delaying responses to others, and in some cases, not responding at all." Oh, a queueing system with customer classes! The article also gives a new appreciation of what the Allentown police has to deal with: "Over the last 18 months, police have responded to 127 addresses at least 50 times each, mostly for noise complaints, parking problems, juvenile complaints or traffic complaints."

That is just mind-boggling to me. If you have to call the police every week and a half or so, or if the police are called about you at that frequency, doesn't that say something about how much your life is in a tailspin? (There is an article in today's paper about a man who has been "harassing the police with 'nonsense' calls [101 calls in two years] about such things as pool water ruining his grass, children making noise and unfounded reports of damage to his property," Morning Call, October 28. Someone needs a hobby.) I guess that's the 80/20 rule in action, or even 90/10 (90% of the work is due to 10% of the people). In mathematical terms, that is called a power-law distribution. The populations of cities and the net worths of individuals are considered to obey power-law distributions too.
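
For the curious, here is a quick numerical sketch of that intuition - the shape parameter is made up for illustration: draw heavy-tailed "calls per address" from a Pareto (power-law) distribution and check what share of the calls the top tenth of addresses generates.

```python
# Sketch of the 80/20 (or 90/10) intuition under a Pareto distribution.
import numpy as np

rng = np.random.default_rng(1)
calls = rng.pareto(a=1.2, size=10_000) + 1  # heavy-tailed "calls per address"
calls.sort()  # ascending, so the busiest addresses sit at the end

top_decile_share = calls[-len(calls) // 10 :].sum() / calls.sum()
print(f"top 10% of addresses account for {top_decile_share:.0%} of calls")
```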

The August article mentions that "the department lacks a crime analyst who can map and track data," and explains that "the plan also calls for the city to implement COMSTAT, a widely used police program that among other things collects and analyzes data as a way to hold police brass responsible for increases in crime." Collecting data to identify patterns, popularized by Rudy Giuliani in New York City a few years back, has become an increasingly important part of police work. Wherever there are numbers, there is pressure to fudge them, in particular when it comes to public safety, and the police are no exception, as described in this article, "The Trouble With CompStat", by Robert Zink in PBA Magazine. He explains: "So how do you fake a crime decrease? It’s pretty simple. Don’t file reports, misclassify crimes from felonies to misdemeanors, under-value the property lost to crime so it’s not a felony, and report a series of crimes as a single event."

Sometimes, it seems that people have overly high expectations of what data can do for them. For instance, the article "Lehigh County goes forward on a crime data center but Northampton County isn't ready to follow suit", also from the Morning Call, dated March 30, 2008, states that "If established, the center would provide police with immediate information on crimes committed throughout the Lehigh Valley, and would help establish if a crime committed in Allentown is similar to one committed later in the day in Bethlehem or if a series of crimes may have been committed by the same person."

I think it would be great for police departments to pool their information to become aware of what is happening in other townships, but the farther away you look, the more difficult it is to see a relationship. If five cars on the same block are broken into, that might be the work of one person. If five cars throughout the Lehigh Valley are broken into, it is more likely to be the work of five different people. To have any hope of establishing a connection, one has to keep a lot more data, and the more data one has, the harder it is to make sense of it all. The article explains that the center would be staffed by five to eight people analyzing the data - for the center not to be a waste of money, these people had better be good at what they do. While it all does sound like a very good thing, it is probably more needed in densely populated areas.

There is no better example of a good integrated system for police work than what is described in a November 2006 article in the New York Times, "Connecting the Dots on 9 Holdups in 4 Boroughs." It describes the crime spree of three criminals who robbed all-night pharmacies in New York City two years ago. They were caught because New York had just established a citywide robbery squad, which allowed the police to "connect the dots early on in a string of crimes across several boroughs."

What is interesting about that squad is that it is staffed by detective-analysts, while the Morning Call article seems to imply the staffers at the Lehigh/Northampton counties' center would just be 'data analysts' rather than police officers. (The New York police do have a Real Time Crime Center computer too.) Because the citywide robbery squad had identified cross-borough patterns early on, officers were staking out Duane Reades all over the city ("I was dreaming Duane Reades," said one), so after the last robbery occurred, they were able to chase down and arrest the suspects quickly. In the end, good data is worthless if you don't have good people to act on it. Maybe, as the Lehigh Valley becomes more populated, we will see more multi-township squads making sense of the information collected by small-town police departments.

This also reminded me of Laura McLay's post "Make better figures and maybe a bar graph" in her blog Punk Rock Operations Research. (I had bookmarked it around the same time as the Morning Call article and never got around to writing about it. Until now.) A few months ago, Laura attended a conference organized by the National Institute of Justice; in her post, she recounts a talk by someone from the San Diego Sheriff’s Department, who described how his department used to "produce a 28-page report every month", full of metrics no one could really make sense of - when so much data is available, it is tempting to produce a lot of output, just because one can. To its credit, the sheriff's department realized this strategy was not working and, thanks to the book "Measuring What Matters", turned the 28-page monster into an eight-page focused report that actually became widely read within the agency. When it comes to data, more is not necessarily better.

Food for thought for law enforcement in the Lehigh Valley.


Numbers - Stuff I'm Reading

PhD Comics. "'Academic' Salaries" (you'll understand the quotes around academic when you see the comic strip) and "Enrollment [in Grad School] vs Unemployment Rate."

Number of Malaria Cases. As The Economist explains, "the number of malaria cases is down sharply, for reasons good and bad." The bad reason is that they had been counted improperly before. "The new methodology takes the actual number of malaria cases reported by local health authorities as a starting point [rather than estimates dating back to the 1950s and 1960s in some countries.] Nearly half the fall comes thanks to counting cases in India by the new method." (September 20, 2008)

Number of Poor People in the World. "The world is poorer than we thought, the World Bank discovers," states The Economist in its August 30, 2008 issue. The author points out that "this does not mean the plight of the poor had worsened - only that the plight is now better understood." In other words: the previous estimates of the cost of living in various countries have been improved. The change is due to "freshly collected prices" and the use of a new standard for the poverty line, which now stands at $1.25 a day in 2005 American prices, rather than the better-known benchmark of a dollar a day. (Such definitions always make me sad for the people who have to survive in America on two dollars a day.) The change has not mollified some of the World Bank's critics, who argue that "its cost-of-living estimates are based on the prices faced by a representative household, [...] but the poor are not representative", because they buy in much smaller quantities than the typical household, and are charged higher unit prices for the convenience of buying, say, a cupful of rice rather than a whole bag. Other researchers have tested this assumption and found that "in nine of those countries the poor in fact pay less" because "they save money by buying cut-price goods from cheaper outlets." The next global price survey is due in 2011. The debate goes on.

Race and the London Police. This September 18th, 2008 article in The Economist ponders whether the London police remain racist. Apparently, the fact that the force was racist in the 1990s is not in doubt, but "so far this year five senior non-white officers have brought, or are reportedly preparing, claims of racial discrimination against the Met (to date, one has been won and another lost)." So is the Met still racist, and what would an appropriate percentage of people of color among high-ranking police officials be? "Many of the complaints levelled against the force are about being passed over for promotion." The article suggests this is because officers of color are relative newcomers - "Among white officers in the Met, 29% have been in the police for 20 years or more, the sort of time it takes to reach the senior ranks. Only 12% of non-white officers have served that long." While the article recognizes that some might be forced out by bullies, those who don't give up do well, even slightly better than white officers: "among officers with ten to 15 years’ service, 1.9% of non-whites have reached at least the rank of chief inspector, against 0.9% of whites."

Numbers Don't Count Unless We Stare at Them. This op-ed in The New York Times in July explains that we are so sensitive to changes in gas prices because we are made to stand at the pump while we fill the tank, and typically buy multiple units (gallons), which compounds the price increases. "Perhaps it would be better if gas station attendants filled the tank for us, as they used to, so we did not stand at the pump watching the rising price of our gasoline." In the meantime, health care costs keep rising, but items deducted from people's paychecks don't seem to count as much.


The Deluded Scientist

Earlier this month, The Economist published an article on a new type of winner's curse - the auction phenomenon whereby, in Wikipedia's words, "the winner will tend to overpay" - this time in academia rather than in real estate: "With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false." A scientist named John Ioannidis argues that "the leading journals [such as Science or Nature are more likely to] publish dramatic, but what may ultimately turn out to be incorrect, research." Most fascinating is a statistic taken from Ioannidis's previous work: "within only a few years, almost a third of the [49] papers [that had been cited by more than 1,000 other scientists] had been refuted by other studies."

While the article's author emphasizes that Ioannidis and his co-authors are not suggesting fraud, it is sometimes difficult to distinguish between a scientist's delusions (someone convincing himself that something is correct even when it's not, because he is so emotionally invested in the result) and an outright ethical breach. The case of Homme Hellinga at Duke and his student Mary Dwyer is well known by now (I wrote about it back in May); more recently, Purdue University stripped one of its professors of his "named professorship" and the $25,000 a year that accompanied it, citing research misconduct. At issue was the professor's claim that he had "created energy-generating fusion in a tabletop experiment", a finding scientists working in other laboratories have not been able to reproduce.

When it comes to high-profile scientific discoveries, I am always hesitant before I label something a fraud, especially when a professor is involved - it seems so obvious that other researchers will try to replicate any breakthrough that I can't quite believe academics would deliberately claim something they know is wrong. In contrast with some of their students, who might not seek a faculty position and instead use their PhD in science as a steppingstone to a career in, say, consulting or finance, professors are stuck in academia, which is a very small world. (At Duke, the scandal originated in a mistake the graduate student made in her protocol; another high-profile case, this time at Columbia University, was also due to a graduate student. This is of course not to say professors never succumb to temptation - as documented in this fantastic "Where are they now?" article in Nature - simply that it is not as common.) The situation is rarely clear-cut; at Purdue, for instance, the panel found the professor guilty of misconduct for "falsely claiming independent confirmation of the work," because the postdoc who duplicated the experiment worked under the professor's guidance, but it also dismissed "10 other accusations of misconduct, including improper presentation of data."

Thankfully, scientific findings can now be discussed and challenged more quickly thanks to recent developments in Web 2.0 - specifically, the launch of the latest version of Research Blogging, "a website which acts as a hub for scientists to discuss peer-reviewed science", as pointed out in The Economist. I particularly enjoyed "Being closer to God linked to more depression", courtesy of the British Humanist Association Science Group, and "Should you let your toddler watch TV", on ScienceBlogs.com. (ResearchBlogging.org is only a portal linking to these resources.) While The Economist article discusses in needless detail the risk of unethical researchers using the blogs to "steal" other people's ideas (I don't understand why anyone would submit a post without having a preprint ready, submitted to a journal and available on the authors' website - isn't the idea to speed dissemination? Every post I've seen on ResearchBlogging.org includes a reference to the technical paper it is based upon), and while the portal itself still has a long way to go before it becomes a widely used resource in academia, it provides a welcome entry point for people interested in scientific topics without the obscure prose.


The Misplaced Math Student

The Brookings Institution released last month an advance version of its report, "The Misplaced Math Student: Lost in Eighth-Grade Algebra", due in December. For those of you not quite sure what counts as algebra nowadays, this Wikipedia entry should clarify things - basically, it is math that uses symbols like 'x' and 'y' to represent quantities, like the number of apples and oranges you want to buy. In particular, it requires students to think about problems more abstractly than they have been taught to before. The author of the report argues that "the nation's push to challenge more students by placing them in advanced math classes in eighth grade has had unintended and damaging consequences, as some 120,000 middle-schoolers are now struggling in advanced classes for which they are woefully unprepared," as summarized in the press release.

The advance report has received national media coverage, including from USA Today in its September 22 issue. American journalists, who attended school when algebra was relegated to the backwaters of high school education, naturally have a biased view of the need to learn algebra in school, echoing the sentiment of the study's author when they talk about a "high-minded mantra [in] the idea that virtually all students should take algebra by eighth grade." Well, folks, I am sorry to break the news to you, but yes, that would be a good thing. The issue is that you can't do it unless the students have solid foundations, because being able to take algebra is only the tip of the iceberg. Of course, since few people out there - and certainly no politician - have any clue what algebra is, the goal of "algebra for everybody" sounds like a great campaign slogan, bringing promises of continued U.S. dominance in science and engineering.

The study's author explains: "The push for universal eighth-grade algebra is based on an argument for equity, not on empirical evidence. [...] By completing algebra in eighth grade—and then completing a sequence of geometry as freshmen, advanced algebra as sophomores, and trigonometry, math analysis, or pre-calculus as juniors—students are able to take calculus in the senior year of high school. [...] From this point of view, expanding eighth-grade algebra to include all students opens up opportunities for advancement to students who previously had not been afforded them. [...] Democratizing eighth-grade algebra promotes social justice."

He might be reading a bit much into it - I would think preserving America's edge in innovation would be a valuable enough goal, at a time when other countries are trying to ramp up their own research programs and recruit graduate students who would traditionally have gone to the States - but if he is right, this points to a deep desire by many American parents to see their children succeed in what they correctly identify as an important avenue for social advancement. Some of the statistics he uncovered are startling. The USA Today article explains: "Among the lowest-scoring 10% of kids, nearly 29% were taking advanced math, despite having very low skills. How low? On par with a typical second-grader's, [the study's author] says."

I believe these kids' parents follow the same reasoning as the first-generation immigrants who push their children into science or engineering: better to stay away from the soft fields, where the teacher decides whether she likes an English essay enough to give the child a good grade, and gravitate toward the hard disciplines, where nobody can take away the fact that the kid got the right answer on the math quiz. Those cut-and-dried outcomes especially appeal to parents worried about discrimination. Sadly, they do not realize you don't learn algebra from scratch - you need to understand pre-algebra first. The name "pre-algebra" doesn't do the topic any good - it sounds like something minor that cannot stand on its own, something students need to put up with in order to learn the important stuff. It turns out to provide fundamental skills that seniors at my own Top 50 university (and Top 15 department) often struggle with, ten years after learning the material.

Now, I don't quiz them on every single intricacy of pre-algebra, but I have said it before and I will say it again: there is something very wrong in the way U.S. students are taught pre-algebra, in particular the use of parentheses to evaluate expressions. This sounds like an obscure, geeky topic unworthy of mention, but it plays a critical role in developing quantitative decision-making models, because it affects the way the computer evaluates the costs students write in the objective and constraints of their mathematical problems. And every year, many of my students, no matter their hometown, get the parentheses wrong. If I had to guess, I would put the proportion of students who misplace parentheses until I correct them (over and over) at more than half the class in any given year - and bear in mind that these are among the best students in the country: they are at a top institution and picked industrial engineering as their major. From my little sample, I'll venture that many middle-school pre-algebra teachers in this country don't have a clue what they are teaching.
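
To make the issue concrete, here is a minimal sketch, with made-up numbers, of the parenthesization error I see every year when students type cost expressions into the computer:

```python
# Suppose a model charges a fixed cost f plus a unit cost c on every unit above
# a threshold t. The intended cost is f + c*(x - t); dropping the parentheses
# silently changes the model. (All numbers are hypothetical.)
f, c, t = 100.0, 2.0, 30.0
x = 50.0  # units ordered

correct = f + c * (x - t)       # 100 + 2*(50 - 30) = 140
common_mistake = f + c * x - t  # reads as (100 + 2*50) - 30 = 170

print(correct, common_mistake)
```

The two expressions look almost identical on paper, yet they describe different cost structures - and the computer will happily optimize the wrong one without complaint.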

So I really don't think politicians should focus on providing more opportunities to students by giving them more choices in eighth grade or later. They should focus instead on providing more opportunities for middle-school math teachers to go through re-training - for instance, by giving lectures in front of selected mentors, receiving coaching on how to explain the topics properly, and getting a pay raise once they have demonstrated their ability to teach in a way that will not hurt students' education ten years down the road. It's tempting to believe that another teacher will straighten students out if they were not taught properly the first time around, but the issues often compound over time, and students are dragged down forever by the holes in their middle-school education. Pre-algebra matters.


First Book News

Below are the links to the two most recent newsletters from First Book, an organization dear to my heart that gives books to American children from low-income families:

You can learn about Target's and Cheerios' work with First Book and about Make a Difference Day this Saturday throughout the country. And of course there is a link to donate to First Book, because just $2.50 gets a book to a disadvantaged child in this country. While helping American kids learn how to read isn't as exotic as lending a hand in a place far away, it should hopefully be a more relevant goal for the American readers of this blog.


Trends in Revenue Management

I am back from the INFORMS annual meeting in Washington, DC, so I am going to start blogging more regularly again after a couple of hectic weeks preparing for my talks and wrapping up research papers. To start, here are a few trends in revenue management.

  • The music industry has been struggling for years with the fact that teenagers do not want to pay to download songs. It tried to make them change their ways by taking legal action, but - according to an article in a recent Economist ("Qualms with Music", October 4th) - "record companies are realising that their efforts to get young music fans to pay up are not working." Enter Nokia. Buyers of some of its new handsets "will be able to download as much digital music as they like." In the industry's lingo, the handsets "are bundled with a year's free online-music subscription."

    Of course, nothing really comes for free - part of the subscription cost is hidden in the handset's price, and Nokia hopes that when the subscription expires at the end of one year, people will buy the newer model and get another "free" subscription rather than keep the old handset, which will still let them listen to the tracks they have already downloaded. This is an interesting approach because it has the potential to create competition for Apple's iTunes, but it also raises many questions - in particular, by how much should the record companies be compensated? (Nokia appears to be footing part of the bill in order to launch the new handsets.)

    Some analysts worry openly that "Nokia will end up overpaying", but the deal is certainly good news for the record companies, as they could make about $340m in additional income, "equivalent to more than 1% of global recorded-music sales in 2007 and 12% of the digital business." I found most interesting the mention, toward the end of the article, of possible unintended consequences - since Nokia is apparently giving away music for free, the deal will only reinforce teenagers' widely held belief that they shouldn't pay for their songs, further decreasing their willingness to pay.

  • Even lawyers have started implementing novel revenue management techniques. As stated in the August 28th issue of The Economist ("Killable Hour"), more lawyers are adopting flat-fee structures "with a performance bonus related to results", instead of charging for their time in increments of 15 minutes, which - clients complain - encourages quantity at the expense of quality. The article points out that a fixed fee is not always appropriate, in particular in litigation, because of the unpredictable duration of trials.

    As a result, some firms have begun offering what the industry calls a "blended fee", which combines a number of hours worked with either a fixed fee or a contingency fee (where a fraction of the money awarded to the client goes to his lawyers). For instance, a 2006 article in Lawyers USA describes the growing popularity of contingency fees for patent litigation, because clients and lawyers view them as a risk-sharing mechanism. That article explains: "A lot of companies [especially small R&D companies] can't afford to pay $2 million chasing a recovery they may not get." ($2 million is actually a lower bound on the average cost of patent litigation.)

    The author also notes, however, that contingency fees not only help spread risk, but also bring downside risk to bear on the lawyers' side, since they might lose the case and end up with very little money for their time. That explains why some lawyers file only 10% of the cases they examine. While most headlines are about the huge payments some lawyers are able to win for their clients, lawyers are just as risk-averse as the rest of us.

  • The subway is a more traditional application of revenue management, but even there, differences in implementation are striking. I am used to the New York and Boston subways, which have a flat-fee structure ($2 a ride in NY, $1.70 in Boston). DC's Metrorail is the exact opposite, in the sense that the fare depends not only on where you are going, but also on when you are planning to ride the subway. For instance, if you are planning to use Metrorail within downtown DC on the weekend, or during off-peak hours during the week, you will only have to pay $1.35 a trip. However, from opening to 9:30am, from 3 to 7pm, and from 2am to closing, the fare increases, for instance to $1.85 in downtown DC, with a maximum of $4.50 if you are traveling to or from the suburbs. The one-day passes apparently do not even work on weekdays if you try to use them before 9:30am.

    You can also buy a 7-Day Short Trip Pass, which comes with the following description on the Metrorail website: "Valid for seven consecutive days for Metrorail trips costing up to $2.65 between 5-9:30 a.m. and 3-7 p.m. on weekdays. If the trip costs more than $2.65, you must use the Exitfare machine to pay the additional fare. The pass is valid for any rail trip at other times. The pass will be returned for continued use during the valid period." Oh yes, I forgot to mention - you need to keep the pass once you've gone onto the platform, because you will need it to exit, which is how Metrorail checks you have paid the correct amount. I am proud to report I always got my fares right when I went sightseeing, and I completely agree with charging more for people who travel longer distances (because if they took their car, they would also pay more in tolls and gas), but the complexity of it all is dizzying (see the little fare sketch after this list).

    According to this blog post by Rob Goodspeed, the only other two metro systems without a flat-fee structure are Philadelphia's PATCO and San Francisco's BART. (A related blog post by the same author, focusing on DC's Metrorail, can be found here.) The author then compares BART and DC's Metrorail in a remarkable analysis with excellent graphs. Especially interesting is the last graph, which plots the per-mile cost of each fare. There is something to be said for good visual aids. The one missing data set would be ridership numbers over the years - do fare increases deter potential customers, or force current metro riders to find alternative modes of transportation? And if the increases are justified by infrastructure improvements, ridership should increase at some point, because people would be attracted by, say, the frequency of the trains. But according to an article in the Washington Post, the latest fare increase, which was toned down compared to the original plan, was simply needed to close a budget shortfall. Legislators hope Metrorail users have a high willingness to pay.
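
As promised above, here is a bare-bones sketch of a time-of-day fare rule, using the downtown figures quoted earlier ($1.35 off-peak, $1.85 peak); the 5am opening time and the two-window weekday structure are my own simplifications, not Metrorail's actual fare table:

```python
# Toy time-of-day fare rule for a downtown-to-downtown trip (simplified).
from datetime import time

# assumed peak windows: opening (taken to be 5am) to 9:30am, and 3pm to 7pm
PEAK_WINDOWS = [(time(5, 0), time(9, 30)), (time(15, 0), time(19, 0))]

def downtown_fare(departure: time, weekday: bool) -> float:
    """Return the fare for a downtown trip at the given departure time."""
    if weekday and any(start <= departure < end for start, end in PEAK_WINDOWS):
        return 1.85  # peak fare
    return 1.35      # off-peak / weekend fare

print(downtown_fare(time(8, 15), weekday=True))   # 1.85
print(downtown_fare(time(11, 0), weekday=True))   # 1.35
```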


High School Math

The New York Times has an article in its October 10th edition on the US failing to develop the math skills of its most talented students. While it is not news that the US K-12 educational system lags behind many countries', the study stands out in its focus on the best students, especially girls, rather than on average trends in the general student population. It achieves that goal by analyzing results from international math competitions rather than SAT scores. The report offers obvious reasons for the situation - "American culture does not highly value talent in math, and so discourages girls" - and reinforces the well-known fact that "many students from the United States in these competitions are immigrants or children of immigrants from countries where education in mathematics is prized." The Times article profiles former participants in math Olympiads, but frustrated me in its lack of suggestions for improvement. No parent will finish the article with the slightest clue about how to foster his or her kid's talent in math. It is also not clear whether the "intensive summer math camps" students go through to make the US Olympiad team increase their preparedness in other scientific disciplines, although anything that forces students to think creatively is always welcome.

Shockingly, the Times article makes no mention of the type of problems students have to solve in these Olympiads, but you can find problems from past competitions on this website. It is hard to miss the fact that the Olympiad questions do not have multiple-choice answers; instead, they are about proofs. That's right, proofs. My own opinion is that the US's mediocre performance on math tests is linked to the American obsession with multiple-choice quizzes. Such tests are easy to grade and even machine-readable, but students won't ever be taught to think outside the box if the answer has already been found for them and they just have to identify it. The very fact that students can find the right answer by default if only they can eliminate all the wrong ones is a textbook example of anti-creativity. I would at least have expected the Times journalist to mention that debate.


Interest Rate Risk

My local newspaper, in an article by Steve Esack entitled "Bethlehem district losing on money deal", provides an excellent example of why everyone should have a minimum understanding of math, and why people should stop finding the fear of numbers acceptable. The article is about the Bethlehem school district swapping fixed-interest-rate bonds for floating-interest-rate ones between 2003 and 2007, a move that was supposed to save the district money, because the floating rates have historically been lower than the fixed ones.

This caught my eye because of the project I supervised last year for the Master of Science in Analytical Finance at Lehigh, which also considered a tradeoff between floating and fixed interest rates. I remember the executives mentioning that at most about a third of the debt could be held at floating rates, because the trustees were not comfortable with higher levels of risk. In contrast, the school district ended up with "more than three quarters of its entire debt tied into adjustable rates." This has led to ballooning payments during the financial crisis, "costing Bethlehem Area School District taxpayers an extra $1 million [last] month on the swap deals."
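
To see why the bet backfired, here is a hedged little sketch with invented numbers (not the district's actual terms): a fixed-to-floating swap saves money as long as short-term rates stay below the fixed rate, and bleeds cash when they spike, as they did in the 2008 crisis.

```python
# Hypothetical fixed-to-floating swap: annual saving = principal * (fixed - floating).
principal = 50_000_000  # hypothetical outstanding debt
fixed_rate = 0.045      # rate the district would have paid on fixed-rate bonds

for floating_rate in (0.030, 0.045, 0.070):
    annual_saving = principal * (fixed_rate - floating_rate)
    label = "saves" if annual_saving >= 0 else "costs an extra"
    print(f"floating at {floating_rate:.1%}: the swap {label} "
          f"${abs(annual_saving):,.0f} a year")
```
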

The article also mentions the recent default of more than $1 billion in bonds by Jefferson County, Alabama, another municipality that did not understand the level of risk it was taking. A Bloomberg article dated May 2008 provides a sobering account of that crisis, with an emphasis on the role of the banks that put the deal together, and in particular their excessive fees. This is what happens when you let laypeople make financial decisions by themselves about money that is not theirs: they are taken advantage of far too easily. (JPMorgan, which had engineered the deal and was facing a federal probe, recently announced its exit from the municipal-bond market.)

Two quotes from the Morning Call article stand out (you should read the whole two-page piece to get the full measure of the situation - it is certainly one of the best articles I have read in the Call in a while):

  • "The school board listened as Bear explained in technical financial terms how a swap worked. Some members expressed confusion, but the board agreed to do the swap."
  • My all-time favorite: "'These [swaps] are very complicated and the board that accepted this made it fairly clear it did not understand all the mechanisms behind it,' said School Board President Loretta Leeson. 'I feel the board, myself in particular, asked some meaningful questions like, is this a safe investment?'" (I have to admit, that one bowled me over. At least the woman isn't afraid to admit she doesn't have a clue. What did she expect the financial advisor to say? "No, it's not safe, but if you agree to the deal, I'll make a nice commission, so let's do it anyway"?)

Lehigh University's very own Professor James Greenleaf is also quoted explaining (rightly) that bond swaps represent "a calculated risk." The fact that no one on the Bethlehem school board understood that there was some level of risk involved is just mind-boggling. If there were no risk, everyone would invest in floating-rate bonds to get the cheaper rates. (When something seems too good to be true, it usually is.)

In this day and age, elected officials have to make more and more quantitative decisions, especially finance-related ones, if only because financial instruments keep becoming more complex in the business sphere and then find their way into the nonprofit world. Officials have to stop thinking that it is okay not to understand a thing and that the public will forgive them for their ignorance because, well, it's math. Asymmetric information never leads to anything good when the party with the least knowledge enters agreements that impact people unaware of the negotiations.


More on Risk Management

This post is on the second article on risk in the September 2008 issue of the Harvard Business Review, "Owning the Right Risks" by Kevin Buehler, Andrew Freeman and Ron Hulme, three McKinsey consultants who also authored "The New Arsenal of Risk Management" in the same issue (see my previous post).

The authors suggest the concept of a "natural owner" of risk to describe risks that a company has a competitive advantage in owning. They outline a five-step process to help managers incorporate risk considerations into their decision-making in any industry:

  1. Identify and understand your major risks,
  2. Decide which risks are natural,
  3. Determine your capacity and appetite for risk,
  4. Embed risk in all decisions and processes,
  5. Align governance and organization around risk.

Before describing these steps in detail, the authors mention how TXU Corporation approached risk under the leadership of John Wilder: for instance, TXU increased the company's risk capacity by reducing debt obligations, after selling non-core divisions to repurchase debt and convertible securities. Later, when the company had excess risk capacity (too much equity for the risks it bore), it repurchased a significant fraction of its shares - an interesting move that extends the reach of risk management outside the financial industry, beyond the traditional actions of, say, selecting an alternate supplier or a backup warehouse location. The authors also quickly go over the concept of Cash Flow at Risk, the operations manager's equivalent of Value at Risk in finance.
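
For readers who have seen Value at Risk but not its operational cousin, here is a minimal numerical sketch of Cash Flow at Risk - the distribution and all figures are invented for illustration: simulate next year's cash flow and read off a low percentile.

```python
# Cash Flow at Risk (CFaR) sketch: shortfall below expectation in the worst 5% of cases.
import numpy as np

rng = np.random.default_rng(0)
expected_cash_flow = 80.0   # $m, hypothetical
volatility = 25.0           # $m standard deviation, hypothetical

simulated = rng.normal(expected_cash_flow, volatility, size=100_000)
cfar_5pct = expected_cash_flow - np.percentile(simulated, 5)
print(f"5% CFaR: {cfar_5pct:.1f} $m")
```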

I particularly enjoyed the real-life experience the authors bring to the topic of risk management and the hints of quantitative techniques they drop here and there, from "Functional and business unit heads may understate or dismiss some of their risks in order to hang on to their share of the budget" to "four to six key risks typically account for most of a company's cash-flow volatility" (such as demand risk, operational risk and foreign-exchange risk). While the article's terse style occasionally borders on the obscure ("companies often fail to make the connection between cost overruns and arbitrary return hurdles - a good example of single-point forecasting that ignores or misprices risks"), it even discusses ways to estimate probability distributions, dropping names such as agent-based modeling and Delphi method surveys.

While the authors do not define these concepts (as if they were familiar to the audience), they still take the time to explain, after advising readers to run a Monte Carlo simulation: "Widely used in the financial sector, this technique offers an extremely efficient way to run numerous what-ifs across multiple variables." The fact that some HBR readers interested in risk management could possibly not know about Monte Carlo simulation is hard to fathom - sadly, I am sure the authors took pains to explain the technique for a reason. (As an aside, you do not want to run plain Monte Carlo simulations, because they underestimate tail events. You want to run Latin Hypercube simulations instead - it is actually the default in business simulation software such as @Risk. Latin Hypercube sampling reduces the variance in simulation results: if you re-run a simulation, the sample average of the objective is more likely to remain close to the previous value.)
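
Here is a bare-bones illustration of that variance-reduction claim - my own one-dimensional implementation, not @Risk's: Latin Hypercube sampling takes exactly one draw per stratum of the unit interval, so the sample mean wobbles much less from run to run than under plain Monte Carlo.

```python
# Compare run-to-run variability of the sample mean: plain MC vs Latin Hypercube.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, runs = 100, 200

def latin_hypercube_uniforms(n, rng):
    # one uniform draw per stratum [i/n, (i+1)/n), then shuffled
    u = (np.arange(n) + rng.random(n)) / n
    rng.shuffle(u)
    return u

mc_means, lhs_means = [], []
for _ in range(runs):
    mc_means.append(norm.ppf(rng.random(n)).mean())                   # plain Monte Carlo
    lhs_means.append(norm.ppf(latin_hypercube_uniforms(n, rng)).mean())  # LHS

print("std of MC sample means: ", np.std(mc_means))
print("std of LHS sample means:", np.std(lhs_means))  # noticeably smaller
```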

The article states the obvious for many quant-oriented professionals ("Instead of a single NPV [Net Present Value] estimate, companies can assess probability ranges [...] along with the probability of a negative NPV"), but again, if it were obvious to all HBR readers, the authors would not have bothered writing it. ("Most industrial companies neither coordinate nor evaluate the above decisions [supply-chain design, outsourcing, etc] using risk management tools. [...] Typically these people take a base case or a high and a low case and use them as a forecast, ignoring the rest of the probability distribution.") While the authors present good, solid ideas, such as the use of risk books (the analog of "trading books" in finance) and a centralized approach to risk, I found the article most valuable for the window it offers on non-finance practitioners' attitude toward risk. Or maybe their lack thereof.
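
To close, here is a small sketch of the probabilistic-NPV idea the authors describe, with invented figures: instead of one NPV number, simulate uncertain cash flows and report the probability that the NPV turns out negative.

```python
# Probabilistic NPV: distribution of NPV and P(NPV < 0) under uncertain cash flows.
import numpy as np

rng = np.random.default_rng(7)
discount_rate = 0.10
investment = 250.0           # $m upfront, hypothetical
n_sims, horizon = 100_000, 5

# uncertain annual cash flows: mean $70m, sd $30m, both hypothetical
cash_flows = rng.normal(70.0, 30.0, size=(n_sims, horizon))
discounts = 1.0 / (1.0 + discount_rate) ** np.arange(1, horizon + 1)
npv = cash_flows @ discounts - investment

print(f"mean NPV: ${npv.mean():.0f}m, P(NPV < 0) = {np.mean(npv < 0):.1%}")
```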