
January 2009

Fair-Value Accounting and the Financial Mess

I found this excellent article on the website of CFO magazine: "Everything we learned about the financial crisis, again", by Tim Reason and Marie Leone (December 9, 2008). It describes the views of Robert Herz, chairman of the Financial Accounting Standards Board, on the financial crisis, and his take on the banking industry's (ultimately thwarted) attempts to roll back fair-value measurements on the grounds that such practices exacerbated the meltdown.

The summary of the speech touches on many important points, such as the "'misplaced investor enthusiasm' for securitized loans, which [Herz] said was the result of 'widespread financial illiteracy in our country'" [emphasis added], as well as "how dependent the investing public has become on the financial markets", due to the looming insolvency of Social Security and the shift to defined-contribution retirement plans. The next paragraph of the article mentions again Herz's view that such dependence can only work out if the public becomes more financially literate. Unfortunately, he provides no guidance on how the public is supposed to get there.

Herz, an accounting guru, naturally spent a substantial amount of time commenting on the place of accounting in the crisis. In particular, he took a swipe at "America's 'continued addiction' to off-balance sheet treatment, particularly via securitization." Securitization is, of course, the process by which financial assets are pooled, repackaged and then sold to investors. In the words of the Wall Street Journal, July 10, 2008 ("The Future of Securitization"), "Securitization involves transferring a loan or pool of loans into a trust and then having that trust issue securities, or bonds, that are rated by the large rating agencies and purchased in the institutional bond market." The procedure became hugely popular due to the returns on equity it offered.

Both Herz and the author of the WSJ article point out that the issue with securitization is the duration mismatch between assets and liabilities, i.e., the fact that long-term commitments are financed with short-term debt. Both also note this was one of the factors underlying the S&L (Savings & Loan) crisis, which shares many features with the current mess - this similarity explains the title of the CFO.com article. (Reading a 1989 article in TIME ["The Savings & Loan Crisis"] brings an eerie feeling of having been catapulted twenty years into the past, as many sentences resonate today; the fact that the President back then was also named Bush doesn't help in keeping the two epochs separate.)

Going back to the topic of fair-value and mark-to-market accounting, so dear to the FASB chairman: despite pressure and intense media coverage dating back to the summer ("In Bear Stearns Case, Question of an Asset's Value", New York Times, June 20, 2008; "How to Start the Healing Now", Wall Street Journal, October 1, 2008; "Bipartisan Bailout Folly", Salon.com, October 1, 2008; "The Accounting Rule You Should Care About", CNN.com, October 1, 2008; "Momentum Gathers to Ease Mark-to-Market Accounting Rule", WSJ, October 2, 2008; "Has the Crisis Ended the Fair-Value War?", CFO.com, November 21, 2008), a report by the Securities and Exchange Commission recently recommended no change in the fair-value accounting rules [the famous FAS 157], but suggested "the number of impairment models [the models used for a downward revaluation of assets] could be reduced" (WSJ, December 31, 2008).

Banks had lobbied for a change because "mark-to-market accounting requires companies to value financial assets at their fair value -- the price they can fetch in the market. That has led companies to take big write-downs on thinly traded securities, even if the underlying assets aren't severely troubled" (WSJ, December 8, 2008). The full SEC report is available here; the part on impairment models is most closely related to Recommendation #4, summarized on page 19 and described in more detail on page 204. I wish there had been a more thorough description of the actual models (can't help myself!), but I guess they are proprietary and bank-dependent.

The relatively recent FAS 157 has undoubtedly ushered in a new era of fair-value accounting, as described in this other CFO.com article ("Fair-Value Revolution", September 1, 2008), which recounts among other things how local company PPL Corp. (based in Allentown, Pa., and employer of many Lehigh University alumni, in particular at the treasury level) is dealing with the new regulations: "The initial challenge in meeting the requirements of FAS 157 entails sorting each fair-value estimate into one of three levels, or "buckets," as Paul Farr, CFO of PPL Corp., calls them. [...] In bucket 1, the value of an asset or liability stems from a quoted price in an active market; in bucket 2, it is based on 'observable market data' other than a quoted market price; and in bucket 3, fair value can be determined only through 'unobservable inputs' and prices that could be based on internal models or estimates. It's that third bucket that critics say has some serious holes."

The role of information systems in regulatory compliance represents an interesting side note, rarely broached in the mainstream media: "PPL's accounting and technology staff had to go back and "force" its computer system to plug in "literally thousands of energy transactions in a given year" into one of the three buckets." (Of course, the mainstream media, even at the height of the lobbying efforts in the Fall, does not print 3,000 words on an accounting rule either.) High-profile areas that should soon be affected by fair-value accounting are mergers and acquisitions (FAS 141(R)) and hedging (FAS 133 - "considered by many to be the most notorious example of the complexity of U.S. financial reporting" - I don't even want to think about what that looks like). FAS 157 has managed the rare feat of putting accounting rules in the limelight.

Herz, who finds relaxing fair-value accounting standards much too convenient for the banking industry, and problematic for investors, concluded his speech with the following piece of advice (paraphrased by the journalist): "Don't make industry exceptions. They are invariably abused, which ultimately forces standard setters to eliminate them." Which might be the reason why companies had lobbied for the exception in the first place.



Obama and Science

Scientists around the country have greeted Obama's inauguration speech - and its mention of "restor[ing] science to its rightful place" - with unsurprising enthusiasm (see for instance Fast Company, "Obama Promises Science-Centric, Eco-Friendly Presidency", January 21, 2009). In "Obama on Science", a lucid journalist at Portfolio wonders whether there will be enough money to implement the new president's program in science and technology (November 5, 2008), which makes the New York Times article "In 'Geek Chic' and Obama, New Hope for Lifting Women in Science" (on what Obama could do to increase the low numbers of women in science, January 19, 2009) appear completely out of touch.

Here is a quote: "[A female academic at UC Berkeley] and other legal experts suggest that President Obama might be able to change things significantly for young women in science — and young men — by signing an executive order that would provide added family leave and parental benefits to the recipients of federal grants, a huge pool of people that includes many research scientists." Except that all this comes at a cost, and the cost would be passed on to the budget of the grant proposal, which translates into requesting more money for the same amount of work. There isn't enough money to fund all the good grant proposals already; we don't need to add more costs. It's going to be difficult enough to convince Congress that the National Science Foundation, the National Institutes of Health and similar institutions really do need the money.

Some of the media coverage misses the mark even more wildly; for instance, Dennis Overbye has produced a forgettable essay for the New York Times ("Elevating Science, Elevating Democracy") where he asserts that good science and good democracy go hand in hand ("If we are not practicing good science, we probably aren’t practicing good democracy. And vice versa."), and gives puzzling examples about China that supposedly prove his point. Did he forget the Cold War already? Sputnik? Totalitarian regimes have long valued math and science because that is something their citizens could excel in without expressing themselves as individuals (as artists and entrepreneurs do).

Science also provided an easy way to compare performance with foreigners, in particular the hated capitalist regimes, without resorting to intermediaries such as translations and the need to provide context. Sputnik showed the USSR should be taken seriously in the race to advance knowledge; the facts spoke for themselves. Despite the emphasis on math and science in Soviet education, though, it still took over three decades after Sputnik for the Soviet Union to disappear - that hardly warrants an endorsement of science as a tool to bring about democracy. Science did not topple the Berlin Wall, nor did it cause the reunification of Germany. What did was the flow of ideas that, although constrained and censored, showed East Germans a world without penury on the other side of the wall.

Overbye also writes: "The knock on science from its cultural and religious critics is that it is arrogant and materialistic. It tells us wondrous things about nature and how to manipulate it, but not what we should do with this knowledge and power." I've never heard anyone say science was "arrogant and materialistic". If anything, the lack of interest of many Americans in scientific careers can be traced to the appeal of more lucrative careers, such as law and business. In the sense that it doesn't pay salaries high enough to compete with these other paths, science isn't materialistic enough. Instead, it counts on its academic disciples to relish the love of knowledge and the opportunity to make discoveries, and to use that feeling to counterbalance the lower monetary compensation. 

Elevating science means attracting the best brains to solve today's technological challenges. It brings into the limelight another power besides those of the hard and soft varieties - let's call it cold power, for cold, hard numbers - one that makes an impact on the world not because of tanks or diplomacy, but because of the tangible solutions it brings to world problems such as providing clean water. It means the United States is ready to lead in that third, overlooked dimension again.


Too Much of a Good Thing: The Perils of Low Volatility

The Economist published a Buttonwood column entitled "In Praise of Volatility" in its January 17, 2009, issue, which nicely complements the dangers of low volatility described in The New York Times's "Risk Mismanagement" (more on that here). If you don't remember the article in the Times, the danger is that models fed numbers with low volatility will assume this state of low fluctuations will continue into the future and set aside insufficient amounts of cash in reserve, although periods of extreme calm often precede ferocious crashes.

The Buttonwood columnist briefly mentions the well-publicized Ponzi scheme orchestrated by Bernard Madoff ("Low volatility was a large part of Bernard Madoff's appeal"), before connecting volatility aversion with the way fund managers' performance is analyzed: since investors' preferences on the matter are so well known, it is tempting to game the system by manufacturing fake low volatility, for instance by (1) investing in illiquid assets (rarely re-valued, because rarely traded), (2) selecting strategies that produce small, steady gains but in the end tend to go wrong spectacularly, wiping out the fund when they do (in other words, the fund has a stellar track record until just before it goes out of business, which bears a striking similarity to what happened to the dinosaurs at the end of the Cretaceous period), and (3) "resort[ing] to fraud when things go wrong", especially given the pressure for companies to meet their quarterly forecasts.

The columnist argues that life is not linear, so people should make their peace with volatility ("Markets do not rise at a steady pace and business conditions do not allow for a smooth rise in profits.") The column of course leaves out a big part of the story, as one would expect given the limited space it has on the page. Too many investors, starting during the heyday of the dot-com boom, have entered the market with their eyes on the ups without being able to stomach the downs. If their neighbors made a good return on their investments, or at least professed to make one, they wanted in too. That statement also applies to real estate, and to people who did not have the means to become homeowners but did so anyway, fooled by the banks' happy reassurances until their mortgage rates adjusted.

Just like Bernard Madoff's investors, first-time homeowners and investors all longed to be part of the in-crowd. (A Bethlehem resident who lost millions in the scheme described the situation as follows: "We'd been on a waiting list for six months because he just didn't have room for new clients, and finally they accepted us. We felt like we were entering an exclusive fraternity. We felt lucky." [Morning Call, December 16, 2008])

The situation is about more than telling people that volatility is a part of life, or warning those who really are averse to losses that they shouldn't invest in high-return instruments, because high returns come with high risks these investors can't afford. In the country of self-made men, where stories abound of entrepreneurs retiring at 40, investing in the stock market or owning a house is a sign of status, and people will believe almost anything to hang on to their dreams - the rules have changed, the housing bubble won't burst, prices will just keep on increasing forever, etc.

An older article in The Economist raises the issue of learning the lesson too well: will people stop investing because of the amounts they lost in the stock market? ("When the golden egg runs out", December 4, 2008) The author argues that would be a mistake, using well-known arguments. I particularly liked the following sentences: "The problem is that investors do not regard financial assets as they do other goods; lower prices do not encourage them to buy more, but simply reduce their confidence. Past returns are the main determinant of flows into the stockmarket; investors buy when prices have gone up, not down."

In the end, we are just one big pendulum, swinging from one extreme to the next with unbridled enthusiasm. The problem isn't low volatility - it's the herd mentality that sets in when people don't understand what's going on and yet don't want to admit they don't have a clue, hiding behind a crowd in the hope no one will notice their ignorance. (If everyone is buying this stock, they must know something I don't, right?) A lot of good it did.


On Gifted Children

The Washington Post published an article in December about schools in Montgomery County (according to this Wikipedia entry, "one of the most affluent counties in the nation") dropping the gifted label to designate children ("Montgomery Erasing Gifted Label", December 16, 2008). The journalist states: "Officials say the approach slights the rest of the students who are not so labeled. White and Asian American students are twice as likely as blacks and Hispanics to be identified as gifted."

Now, does it slight the children or rather their parents? Those parents would do well to remember that many people who had average or mediocre grades in school have turned out to be successful entrepreneurs, while some students who excel in the educational system occasionally struggle in the real world, where problems don't come neatly packaged with all the information one needs to solve them. The gifted label is, of course, also about the parents of the gifted children and their pride in feeling that their kids are better than others.

I, personally, have misgivings about calling young students 'gifted', and find myself thrilled at Montgomery County's decision, even if the county is simply bowing to political correctness (which I don't approve of). At that age, the children we call gifted are the ones who succeed in class, but it is not clear whether they succeed because of natural abilities or because their parents, in those very wealthy Washington suburbs, are able to help them with their homework, explain complicated concepts again or hire an army of tutors.

Children who amazed their teachers at age seven might have become average by the time they turn eighteen (the example of a former student at my high school comes to mind) - it'd be interesting to know whether the so-called 'gifted' children in Montgomery County go on to exhibit exceptional promise as high school or college students. Think about the burden on the teenager who remembers being found gifted as a kid and somehow did not fulfill his promise: twenty years old and already a disappointment... In addition, we often use the word 'gifted' for artists and sometimes athletes, who excel in disciplines not typically valued in the educational system. I doubt the fact that their gift was not recognized by schoolteachers early on makes them any less talented.

I do worry when I read sentences such as "School system leaders say losing the label won't change gifted instruction, because it is open to all students" and "A school that tells some students they have gifts risks dashing the academic dreams of everyone else". Abundant testimony documents the negative effect of placing ill-prepared high school students in AP or honors classes (I wrote a post on that a while back), and it is doubtful struggling students reach the conclusion that this is a good idea by themselves. Gifted instruction might go the same way and turn into one giant feel-good endeavor, where students who shouldn't be there slow down the rest of the class because their parents want to believe they are geniuses.

Parents should tell their kids that receiving a label from a school doesn't matter. After all, a former classmate of President Barack Obama confided to the German newspaper Die Welt about a year ago (edition of January 4, 2008): "Besonders gut in der Schule war er nicht." Which means, for those of you who don't speak German: "He wasn't particularly good in school." Parents of non-gifted children in Montgomery County, rejoice!


Personalized Learning: Physics and Other Topics

The New York Times recently published an article about MIT's new way to teach introductory physics ("At M.I.T., large lectures are going the way of the blackboard", January 12, 2009). I was at MIT as a graduate student, not an undergraduate, so I never took that course, but I have sat in the 26-100 amphitheater, and to this day I remember how dreadful it is - "windowless" and with "rows of hard, folding wooden seats", just as mentioned in the article. The journalist explains that "the physics department has replaced the traditional large introductory lecture with smaller classes that emphasize hands-on, interactive, collaborative learning", and that it has made the change permanent despite initial opposition, including from the students. This has translated into increased attendance and decreased failure rates at the final exam.

Engaging students is obviously better than letting them daydream in a huge auditorium - it is hard to imagine anyone opposing the change toward personalized learning (selected by the National Academy of Engineering as one of the greatest challenges for the twenty-first century here), and yet students petitioned the physics department in 2003 to halt the change, known at MIT under its acronym, TEAL, for Technology-Enabled Active Learning ("Students Petition Against TEAL", The Tech, March 21, 2003). Issues back then involved "the time allotted for experiments, the use of Powerpoint presentations in lectures, and worksheets." It appears that the TEAL program has adapted significantly since then; even in 2003, changes were in the works to address criticism, planning in particular for professors to spend more time on the chalkboard rather than reading Powerpoint slides.

No student has ever liked lectures - in the TEAL format or in the traditional one - that involve reading off slides. I would say the advent of Powerpoint has, on average, negatively affected the quality of higher education, except that reading off transparencies projected on a blank screen was hardly better. (That was my introductory physics course in Paris.) But my engineering school and MIT did have one thing in common: the use of recitations, led by research staff (not professors) in Paris and by teaching assistants in Cambridge. The idea behind the traditional format is that lectures may be boring, but recitations in smaller groups give students the opportunity to learn how to do exercises by themselves. Not every course lends itself to fancy equipment and lab work, though, and yet the material still has to be taught - what happens to math, for instance calculus and linear algebra? Not every course can be given a do-over to become more enjoyable. Recitations are the low-tech answer to hands-on learning. They don't attract the attention of NY Times journalists, but they significantly impact the learning experience.

I did have a good laugh at the title of the New York Times article ("At MIT, large lectures are going the way of the blackboard".) I actually don't use slides and, instead, always write on the blackboard. I sometimes bring up files from a software program on the screen so that students can follow what I do to run the software, so that they can do it too for the homework, but that's where my in-class use of technology ends. (Outside class, I use technology to load up assignments on the course webpage and write announcements. Ironically, the web service that allows me to do all that is called BlackBoard.)

I don't like fancy-looking slides that I would read aloud throughout the lecture - this is not a movie. When I prepare lectures, I write handouts with blanks that students have to fill in. I give them a few minutes to think about the solution, and then I do the exercise with them on the board. (Courses in our department do not have recitations.) Obviously, the system has its disadvantages - students sitting in the back sometimes worry they misread something and wrote the solution down wrong. This is not a perfect solution to the problem of creating interactive lectures. But anything that keeps students engaged during class is a good thing, and in many courses, the only way to do that is to have students write along - not so much that the whole lecture is spent in a mad dash trying to keep up with an instructor scribbling furiously on the board, but enough that they have a chance to think about the concepts by themselves. (At least if they want to. I can't prevent students from staring blankly into the void or text-messaging under the table before I give them the solution.)

For courses that don't lend themselves to TEAL-like experiments and labs, the blackboard is the best tool to keep students engaged. It's not quite an answer to personalized learning - class size can get in the way, and so can students who don't raise their hand when they don't understand. But technology doesn't solve all problems either. In its technology classrooms, my university has clickers similar to the ones described in the NYTimes article, which allow the instructor to ask a question and students to push a button to answer A, B, C or D. I have yet to find one question in my handouts that would fit that multiple-choice framework. The blackboard is here to stay.


Simpson's Paradox

If a baseball player has a higher batting average than a rival in each year of a two-year period, he must also surpass the other player when both years are combined, right?

Well, no.

I've written before about mean versus median (the median of a sample better captures the main trend, while the mean can be biased by outliers [extreme values]), and about the distortions that occur when a group appears homogeneous but isn't actually made of comparable data points (for instance, when one quotes average salary numbers for a profession, aggregating people with little experience and others who have spent thirty years on the job). But it seems that the opportunities to get fooled by numbers never end. I recently came across something called Simpson's paradox, in which a group's greater success in each of several instances is reversed when the instances are combined.

In the example above, given by Wikipedia and due to Ken Ross in his book "A Mathematician at the Ballpark", David Justice had a higher batting average than Derek Jeter in both 1995 and 1996, but not when 1995 and 1996 were combined.

Here are their specific numbers (source is Wikipedia.org, "Simpson's Paradox"):  

                   1995             1996             Combined
    Derek Jeter    12/48   (.250)   183/582 (.314)   195/630 (.310)
    David Justice  104/411 (.253)   45/140  (.321)   149/551 (.270)


You might notice that in 1995, Justice played a lot while Jeter didn't, and in 1996, Jeter played a lot while Justice didn't. Hence, each year's fractions were computed from very different sample sizes, which made it possible for Jeter's batting average to come out higher once the two years were combined.
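
The arithmetic is easy to check yourself. Here is a minimal Python sketch - the numbers come straight from the table above; the code and its names are just my own quick illustration. The key point is that the combined average pools the raw counts, not the per-year percentages:

    # Simpson's paradox with the batting numbers quoted above.
    # Each entry maps a season to (hits, at_bats).
    jeter = {"1995": (12, 48), "1996": (183, 582)}
    justice = {"1995": (104, 411), "1996": (45, 140)}

    def batting_average(hits, at_bats):
        return hits / at_bats

    for year in ("1995", "1996"):
        j = batting_average(*jeter[year])
        d = batting_average(*justice[year])
        print(f"{year}: Jeter {j:.3f}, Justice {d:.3f}")  # Justice ahead both years

    # Combined: pool the raw counts, not the yearly percentages.
    for name, seasons in (("Jeter", jeter), ("Justice", justice)):
        hits = sum(h for h, ab in seasons.values())
        at_bats = sum(ab for h, ab in seasons.values())
        print(f"{name} combined: {batting_average(hits, at_bats):.3f}")
    # Jeter .310, Justice .270 -- the ranking flips

The combined figure is a weighted average of the yearly ones, with weights proportional to at-bats: Jeter's is dominated by his strong 1996, Justice's by his weaker 1995, so the ranking can flip.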

This would remain nothing more than an intellectual curiosity if it weren't for the resulting opportunities to mislead the general public on much more serious matters. Wikipedia also gives the example of a real-life clinical trial comparing treatments for kidney stones, as reported in "Confounding and Simpson's Paradox", a paper by Steven Julious and Mark Mullee. In that example, Treatment B appears to have a higher success rate than Treatment A (83% vs 78%); however, differentiating between the size of the kidney stones (small or large) shows that Treatment A is more effective for each sub-group. You can find the exact numbers on the Wikipedia page, but again, this happens because of the difference in sample sizes: in the case of small stones, Treatment B was used for many people and Treatment A for only a few, while in the case of large stones, Treatment A was used for many people and Treatment B for comparatively fewer. If you have kidney stones, you certainly want to understand Simpson's paradox before you fall for publicity praising Treatment B.

Or before you initiate a lawsuit. Another example on the Wikipedia page is a lawsuit that was filed against UC Berkeley in the 1970s, accusing the school of bias against women applying to graduate school ("Sex Bias in Graduate Admissions: Data from Berkeley", by P. Bickel, E. Hammel and J. O'Connell). At the time, 44% of male applicants were admitted to Berkeley, but only 35% of female candidates. But when you look at the numbers department by department (available on Wikipedia), you can see that there was no apparent bias at the level of the individual departments.

The explanation, again, turned out to be about the different sample sizes: many women and few men tend to apply to graduate programs such as English; few women and many men tend to apply to graduate programs such as engineering. (The Wikipedia sentence starting with "The explanation turned out to be..." appears to have been written half in jest [engineering departments are "less-competitive departments with high rates of admission among qualified applicants" while the reverse is true for English graduate programs? just plain funny] and I am curious to see how long it's going to stay online.) Bickel et al. sum it up best at the end of their abstract: "Women are shunted by their socialization and education toward fields of graduate study that are generally more crowded, less productive of completed degrees, and less well funded, and that frequently offer poorer professional employment prospects."

That paper was published in 1973; thankfully, there are now many women in engineering, both at the undergraduate and graduate levels. But it remains just as easy to let numbers hide the full story. 


Mismanaging Risk

The New York Times published an excellent article on the financial crisis earlier this year ("Risk Mismanagement", by Joe Nocera, January 2, 2009), which investigates the responsibility mathematical models - and especially Value-at-Risk (VaR) - bear in the meltdown.

The journalist does a very good job of explaining what VaR is in simple terms ("if you have $50 million of weekly VaR, that means that over the course of the next week, there is a 99 percent chance that your portfolio won’t lose more than $50 million") and why it became so popular - "it expresses risk as a single number, a dollar figure, no less," "it is the only commonly used risk measure that can be applied to just about any asset class," and "it can measure both individual risks — the amount of risk contained in a single trader’s portfolio, for instance — and firm-wide risk." Furthermore, in the late 1990s, the Basel Committee on Banking Supervision allowed firms to use their own internal models to compute VaR and determine their capital requirements.
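
To make the definition concrete, here is a minimal sketch of VaR computed by historical simulation - one standard approach, though not necessarily what any particular firm used; the P&L series below is simulated and purely illustrative:

    import numpy as np

    def historical_var(pnl_history, confidence=0.99):
        # One-period VaR by historical simulation: the loss level that
        # past P&L exceeded only (1 - confidence) of the time.
        losses = -np.asarray(pnl_history)
        return np.percentile(losses, 100 * confidence)

    # Hypothetical weekly P&L history, in millions of dollars.
    rng = np.random.default_rng(0)
    pnl = rng.normal(loc=1.0, scale=20.0, size=500)
    print(f"99% weekly VaR: ${historical_var(pnl):.0f} million")
    # Reads: "there is a 99% chance next week's loss won't exceed this"
    # -- assuming, crucially, that next week resembles the sampled past.

That last assumption, as the rest of the article makes clear, is exactly where the trouble starts.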

The journalist quotes plenty of people who think the crisis has put a glaring light on the limitations of VaR, calling it "a limited tool" or even "a fraud". That last word comes from Nassim Nicholas Taleb, self-proclaimed high priest of the quants, who manages to appear even more full of himself in this article than he normally does, refusing to give his age, calling the risk managers he is invited to lunch with "imbeciles" and deeming the quants at RiskMetrics [the leading provider of risk management tools] "intellectual charlatans". (Don't get me wrong - I found his first book, "Fooled by Randomness", absolutely excellent, and also enjoyed reading "The Black Swan," which I reviewed here. But in "The Black Swan", Taleb makes no attempt to hide his feeling of superiority over the rest of the finance professionals, which becomes annoying after a while. The New York Times mentions Taleb "went [after the success of his books] from being primarily an options trader to what he always really wanted to be: a public intellectual." But by openly despising a large segment of the audience that cares about his pronouncements, Taleb risks alienating the very people who give him the public role he so craves. If he does, a more people-friendly quant might turn himself into the next finance intellectual and shove Taleb back into obscurity - I am sure the people Taleb badmouths every chance he gets would gladly give that newcomer a hand.)

Taleb's outbursts against VaR do not stop the journalist from digging out a story that "made the rounds in the summer of 2007" - how VaR and other quantitative models helped Goldman Sachs realize things were going wrong as early as December 2006, and ultimately saved the company lots of money while other big names faltered. (On the other hand, as described in this Wall Street Journal article [How Goldman Won Big On Mortgage Meltdown, December 14, 2007], it took the obstinacy of a few Goldman Sachs traders to apply these insights and - more or less - stay the course despite pressure, before it became clear they had taken the right position. A risk measure is worthless if people don't act on the information it gives.) The New York Times journalist raises the following question: "could VaR and the other risk models Wall Street relies on have helped prevent the financial crisis if only Wall Street paid better attention to them? Or did Wall Street's reliance on them help lead us into the abyss?"

He gives a long account of a talk by Taleb in front of an enthralled audience at Columbia Business School (a talk where Taleb does make good points, in-between all his snide comments, e.g., "the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure"). The journalist also excels at highlighting the flaws in the way VaR is computed - in particular, the fact that the risk measure only uses recent historical data; this neglects information from crashes that occurred many years ago. Periods preceding meltdowns tend to have little volatility, thus inducing firms, right before the storm, to only set aside small amounts of money to meet their capital requirements.
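
That flaw is easy to reproduce in a toy simulation - the numbers below are made up, and real desks used fancier estimators, but the mechanism is the same: a historical window that contains only calm data produces a reassuringly small VaR at exactly the wrong moment:

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy daily returns (in %): 400 calm days, then a 100-day storm.
    returns = np.concatenate([rng.normal(0, 0.5, 400),
                              rng.normal(0, 3.0, 100)])

    window = 250  # roughly one trading year of history
    for day in (399, 449, 499):
        recent = returns[day - window:day]
        var99 = -np.percentile(recent, 1)  # 99% one-day VaR, in %
        print(f"day {day}: 99% VaR = {var99:.2f}%")
    # Day 399, the eve of the storm, shows the smallest VaR of all:
    # the 250-day window has already forgotten any past turbulence.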

I enjoyed reading about the beginnings of VaR at JPMorgan, and how it was very effective at the time, because it was invented for a specific purpose and the people who used it, including the CEO, remained aware of the full picture. Because VaR was so new, quants (and others) had not yet forgotten what it meant and how the number was to be interpreted. Then VaR "became institutionalized," it "became a crutch," people forgot what it meant, and the financial crisis happened. But despite all of VaR's flaws, all its shortcomings, and all of Taleb's vituperations, Value-at-Risk does get the last word in the end, at least in the article: "Taleb says that because VaR didn’t measure the 1 percent, it was worse than useless — it was downright harmful. But most of the risk experts said there was a great deal to be said for being able to manage risk 99 percent of the time, however imperfectly, even though it meant you couldn’t account for the last 1 percent."

Maybe the lesson of the crisis is that one should not try to summarize risk using a single number.

The article is, without a doubt, a must-read for anyone interested in the financial crisis. I highly recommend reading it in full here - and then reading it again.


Research Update: Log-Robust Portfolio Management

My student Ban Kawas and I have extended our log-robust optimization approach to portfolio management, which I describe in detail here and here, to the case with short sales, i.e., the case where decision-makers can sell shares they do not own. (They borrow shares from a third party and bet that the stock price is going to decrease; they hope that, when they must buy shares on the market to return the borrowed stock, they will pay a lower price than the one they received from the person to whom they sold the shares, netting the difference.) From a mathematical perspective, this means the amounts invested can be negative, and this wreaks havoc on the optimization model.
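
In plainer arithmetic - a toy sketch with made-up prices, ignoring borrowing fees and margin requirements:

    def short_sale_pnl(sell_price, buyback_price, shares=1):
        # Receive sell_price per share now; pay buyback_price later to
        # return the borrowed shares. Profit if the price fell.
        return (sell_price - buyback_price) * shares

    print(short_sale_pnl(100, 80))   #  20: price fell, the bet paid off
    print(short_sale_pnl(100, 130))  # -30: price rose; losses on a short
                                     #      position are unbounded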

The tractability of robust optimization, which is inherently a max-min model, rests on the modeler's ability to rewrite the max-min model as one big maximization problem using a concept called strong duality, which holds for convex programming problems. (Strong duality means that a minimization problem can be rewritten as an equivalent maximization problem with the same optimal value, and vice-versa. Convex programming problems are a class of "well-behaved" problems for which we know the computer will stop at the right answer, as opposed to stopping prematurely at an answer that looks optimal but is not. To understand non-convexity, think of someone in Africa who has decided to climb the tallest possible mountain and sees Mount Kilimanjaro in the distance. If there's no one around to tell him about Mount Everest, he's going to climb Mount Kilimanjaro and congratulate himself at the top for a job well done, sadly unaware he has not accomplished his life goal after all.)
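
Schematically - for a generic robust linear program with a polyhedral uncertainty set, not our log-robust model, and with notation that is mine - the duality step looks like this:

    % Uncertain return vector c_0 + Cz, where z lives in the polyhedral
    % uncertainty set Z = { z : Az <= b }. The robust problem is a max-min:
    \max_{x \ge 0} \ \min_{z :\, Az \le b} \ (c_0 + Cz)^\top x
    % For fixed x, the inner problem is a linear program in z; assuming it
    % is feasible and bounded, strong LP duality turns it into a maximization:
    \min_{z :\, Az \le b} (C^\top x)^\top z
      \;=\; \max \{\, -b^\top y \ :\ A^\top y = -C^\top x, \ y \ge 0 \,\}
    % so the max-min collapses into one big maximization over (x, y):
    \max_{x \ge 0,\ y \ge 0} \ c_0^\top x - b^\top y
      \quad \text{subject to} \quad A^\top y = -C^\top x

In the log-robust setting the inner problem is non-linear rather than linear, but the same collapse works as long as everything stays convex - which, as the next paragraph explains, is exactly what short sales destroy.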

The easiest convex problem is linear programming, which is why many robust optimization researchers have focused on linear problems until now. When short sales are not allowed, the log-robust optimization problem is non-linear but convex, so we can still derive equivalent tractable reformulations despite the non-linearity. When short sales are allowed, however, we lose convexity, so we need to develop additional techniques, which is the purpose of this second paper. We do achieve exact reformulations when asset classes are independent (gold and real estate, for instance), and resort to heuristics when they are correlated.

We also analyze numerically the advantages related to short-selling. We make the side observation that, while short-selling is a highly valuable technique, it is even more important to model randomness at the true level of the uncertainty drivers: the continuously compounded rates of return, rather than at the level of the returns themselves. In other words, implementing a fancy technique in the wrong model doesn't make up for the fact that the model is wrong. (Lipstick on a pig...) The log-robust optimization approach with short sales combines a technique widely used in industry with a framework that builds upon one of the major insights in the field - the fact that the continuously compounded rates of return drive the uncertainty, but that researchers disagree on their distribution, and whatever it is, it has fatter tails than the Gaussian model implemented in practice. The full paper is available here.
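
For readers who like to see the distinction in symbols, here are the textbook definitions (this is background notation, not the paper's full model):

    % Simple return R_i versus continuously compounded (log) return r_i
    % for asset i over the investment horizon:
    S_{i,T} \;=\; S_{i,0}\,(1 + R_i) \;=\; S_{i,0}\, e^{r_i},
    \qquad r_i \;=\; \ln \frac{S_{i,T}}{S_{i,0}}
    % The log-robust approach puts the uncertainty set on the r_i -- whose
    % empirical distribution has fatter tails than a Gaussian -- rather
    % than on the simple returns R_i themselves.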

In addition, we are thrilled to report that our first paper in log-robust portfolio management is available online on OR Spectrum's website using Springer's "Online First" feature. From the quality of the reviews to the speed of publication, OR Spectrum has impressed me beyond words. (We submitted the paper in December 2007; reviews came back in July 2008; we submitted the revised version in August; the paper was accepted in December; I sent the source files immediately and received the proofs during the break.) The reviewers had many good questions and provided excellent feedback. It has been a pleasure working with the editorial staff as well. I highly recommend that journal.


MOPTA'09

Save the date! The MOPTA conference will be held at Lehigh University this year on August 19-21, 2009, just before the ISMP conference in Chicago. MOPTA stands for Modelling and Optimization: Theory and Applications.

From the conference website: "The conference is planned as an annual event aiming to bring together a diverse group of people from both discrete and continuous optimization, working on both theoretical and applied aspects. The format will consist of a small number of invited talks from distinguished speakers and a set of selected contributed talks, spread over three days. Our target is to present a diverse set of exciting new developments from different optimization areas while at the same time providing a setting which will allow increased interaction among the participants. We aim to bring together researchers from both the theoretical and applied communities who do not usually have the chance to interact in the framework of a medium-scale event."