
November 2007

On Teaching, Prestige and Small Class Size

Teaching has been in the news quite a bit recently. For instance, The Economist published an article on "How to be top" in its October 20th, 2007 issue, which summarizes the findings of a study by McKinsey on how some countries top the (primary and secondary) education rankings again and again. "Those findings raise what ought to be a fruitful question: what do the successful lot have in common? Yet the answer to that has proved surprisingly elusive. Not more money. Singapore spends less per student than most. Nor more study time. Finnish students begin school later, and study fewer hours, than in other rich countries."

An often-proposed remedy to the poor quality of teachers is to increase their salary. But "if money were so important, then countries with the highest teacher salaries - Germany, Spain and Switzerland - would presumably be among the best. They aren't." What the countries with the best educational systems have in common is that they make teacher selection an extremely competitive process. "Singapore [...] accepts only the number [of would-be teachers into teacher training programs] for which there are places. [...] Finland also limits the supply of teacher-training places to demand. In both countries, teaching is a high-status profession (because it is fiercely competitive) and there are generous funds for each trainee teacher (because there are few of them)." 

I particularly enjoyed reading about the possible explanatory variables, which make sense but in the end might have the opposite effect to what was intended - what is wrong with a small class size? The article notes: "Almost every rich country has sought to reduce class size lately. Yet all other things being equal, smaller classes mean more teachers for the same pot of money, producing lower salaries and lower professional status. [...] After primary school, there seems little or no relationship between class size and educational achievement."

Class size might well be a myopic (and wrong) proxy for the school budget. If the school has enough money to hire more well-prepared, well-qualified teachers, it is hard to see how keeping the class small would not benefit students: teachers know their names and can demand more accountability. One thing the article does not mention is that, rather than hiring more teachers at a lower salary, the school district might also require teachers to teach more sections with fewer students in each. Parents are happy - classes are small - but teachers spend more time repeating the same lesson to the same total number of students, which increases fatigue and decreases prep time for the next lecture as well as the time the teacher could spend helping her students after class. And let's not even talk about the likelihood of a teacher knowing a student's name when she sees hundreds of faces day after day, even if they show up in blocks of only fifteen.

I have been thinking about that same issue for some time in the context of introductory college courses, because of the importance in the all-powerful US News rankings of small class size - it seems easy to game the system by having instructors teach more, thus diluting the energy they would spend on a more limited number of sections. Keeping classes small is a well-intentioned policy that, pushed to the extreme (US News counts as small a class that has 19 students or fewer - sorry, 20 won't do), might end up doing more harm than good. A better measure to quantify the time instructors really spend on each student would be the number of students each professor teaches every semester. Maybe in 2009?

Data-Driven Rip-Off: Google and pay-per-click

Last month I wrote on data-driven decision-making as a way to address fast-changing conditions when computers identify discrepancies between their forecasts and the real data. The idea is that you don't have to understand the underlying dynamics of, say, stock prices - what is driving a specific output - to take action and adjust your strategy in light of the observed behavior. (It sounds quite obvious, but analysts have long preferred probabilistic descriptions of uncertainty and until a few years ago, computers could not process large amounts of inputs fast enough for data-driven models to make practical sense.)

In his comment on my post, CPC mentioned the role of data-driven techniques to assess consumer behavior, the most famous of which is, of course, Google's pay-per-click business model. He gave the link to a report commissioned by Google on click fraud; you can read what he has to say about it here. The report arose from the controversy surrounding Lane's Gifts and Collectibles, an Arkansas-based gift shop, which reached a settlement with Google in March 2006 regarding bogus clicks. The company is briefly mentioned in articles about online advertising in a November 2006 issue of The Economist (this one is the Leader and that one the real article), which mention that internet advertising (half of which is due to "pay-per-click" ads) brought in $27 billion in revenue in 2006, and is expected to represent 20% of total advertising expenses in just a few years, as opposed to 5% now.

The Economist describes pay-per-click as follows: Google, Yahoo! and similar providers put the ads on their search pages when users type in queries related to their line of business (the advertiser who bid the most money gets to be on top of the list), and on affiliates' websites, but the advertiser is only charged for clicks on the ad. This way, he is supposed to only pay for real traffic generated by the ad to his site. The articles also explain that "The price per click varies from $0.10 to as much as $30, depending on the keyword, though the average is around $0.50." At that rate, it is not difficult to imagine the damage malicious competitors can inflict on an unsuspecting advertiser; affiliates also get a small profit when someone clicks on an ad on their site, and therefore have an incentive to game the system as well.

I was wondering what led Lane's Gifts to determine it was the victim of click fraud (as opposed to, for instance, a large number of customers being turned away by a poorly designed website or excessively high prices); unfortunately there is no information whatsoever on the web about that. The class-action lawsuit led to a settlement of $90 million, and I couldn't decide whether that was a lot or not (the plaintiffs got no cash - only credits toward more Google ads), but Google's cavalier attitude about it was enough to infuriate many a small business. After all, "Bill Gross, the entrepreneur who pioneered the pay-per-click model back in 1998, was aware of the problem even then," so it is no surprise that Google's CEO "caused an uproar [in 2006] when he seemed to suggest that the 'perfect economic solution' to click fraud was to 'let it happen'."

A Washington Post columnist writing in March 2006 made a similar comment: "Google has repeatedly pooh-poohed click fraud, contending that it is a minor annoyance that it has under control with automated detection technology. At a meeting with analysts two weeks ago, chief executive Eric Schmidt said click fraud "is not a material issue." Co-founder Sergey Brin said such cases amount to "a small fraction" of Google's ad clicks." (This declaration came six days before the settlement with the gift shop in Arkansas.) The Post column also profiled a real-life small company struggling with click fraud, which helped to put the situation in perspective: that company pays Google and Yahoo! $40,000 every month, at the rate of $0.80 to $1.20 per click, and it turns out "35 percent of the referrals that Radiator [think car radiator] paid Google for stemmed from bogus traffic. Likewise, 17 percent of the leads that came from Yahoo search results were illegitimate." It also cited worrying statistics suggesting that click fraud represents a larger chunk of clicks than the 10% figure usually accepted. Even just 10% of $13 billion (The Economist's estimate of pay-per-click revenue) puts the money flowing into the click-fraud business at more than a billion dollars every year.

But the best article I've found on click fraud comes from an October 2006 issue of Businessweek. I liked that article because it described real people on all sides of the scheme and put numbers on the issue. Here are a couple of highlights. (It really is a fascinating article, and I encourage anyone interested in the issue to read it in its entirety. No registration/subscription necessary.) In 2005, an Atlanta entrepreneur paid Google and Yahoo! $2 million in advertising fees (his company has 30 employees and generates revenues of $6.4 million).

  • "Over the past three years, [the entrepreneur] has noticed a growing number of puzzling clicks coming from such places as Botswana, Mongolia, and Syria. This seemed strange, since MostChoice steers customers to insurance and mortgage brokers only in the U.S. [He discovered] that the MostChoice ads being clicked from distant shores had appeared not on pages of Google or Yahoo but on curious Web sites with names like and"
  • "The trouble arises when the Internet giants boost their profits by recycling ads to millions of other sites, ranging from the familiar, such as, to dummy Web addresses like, which display lists of ads and little if anything else."
  • "The search engines divide these proceeds with several players: First, there are intermediaries known as "domain parking" companies, to which the search engines redistribute their ads. Domain parkers host "parked" Web sites, many of which are those dummy sites containing only ads. Cheats who own parked sites obtain search-engine ads from the domain parkers and arrange for the ads to be clicked on, triggering bills to advertisers."
  • "The search engines describe [their] affiliates in glowing terms. A Google "help" page entitled "Where will my ads appear?" mentions such brand names as and the Web site of The New York Times. Left unmentioned are the parked Web sites filled exclusively with ads and sometimes associated with click-fraud rings." (The description of the click-fraud rings is particularly interesting.)

Who in their right mind would build a business model around pay-per-click and then condone domain-parking? The owners of parked websites often count on people misspelling the name of popular sites to stumble on theirs. Does that sound like a fine business practice? At the very least, Google and Yahoo! should limit the resale of ads to websites that actually have a track record of selling a product (as opposed to only listing recycled ads), or have been vetted as legitimate (especially blogs).

Most stories about click fraud are about small businesses saddled with gigantic advertising costs, or seeing their budget depleted without any benefit. For them, the downside of other people's get-rich-quick schemes can mean bankruptcy, and it makes sense then to prefer the "pay-per-action" model described in The Economist, where companies are only charged for clicks that lead to a sale. Google might complain it shouldn't be held responsible for small businesses' poorly designed websites. But given the current state of online advertising, this might be the only option to prevent small-size companies from going back to their old paper ways.

The Wall Street Journal's new online strategy

When it came to accessing its website, the Wall Street Journal held its ground for a long time: only subscribers would read its articles. This was an unusual strategy, because most newspapers and magazines let users browse their website for free in order to attract advertisers (see The Economist's report and its Leaders column, both dated August 24, 2006), but the WSJ felt unique enough in its offering to extract revenue from subscribers instead. The New York Times tried a hybrid approach in making its columns part of TimesSelect, a part of the website reserved to subscribers, while other articles were free for a limited amount of time after publication, but in September the Times announced it was putting an end to the experiment (read my post about it here). Over the summer the WSJ was taken over by Rupert Murdoch, and the new boss "confirmed this week [first week of November] that he intends to make its website free" (The Economist, November 15, 2007).

Does it make sense to reduce the cachet of WSJ's online edition by making it more widely available? After all, Facebook lost a lot of its glitz once it allowed everybody to register. But in this case I think it is a good idea. Any self-respecting person who works in finance already subscribes to WSJ, so the move means opening the website to non-professionals who are keen on staying up-to-date with the latest business and finance developments. Am I the only one to whom this screams "side ad by Harvard Business School" and "if you want to know more about collateralized debt obligations [or whatever term was entered in the search box] here is a book you should read"? Now, of course, if the WSJ wants to run a banner ad on the latest Coen Brothers film (which is what the NY Times is doing tonight), that's its choice, but one can only hope for more targeted advertising in a website that has such a narrow focus. (The Economist tonight has banner ads about Airbus and Credit Suisse; after I typed CDOs in the textbox, the search page had advertisements about Chevron and Microsoft. I repeated my little experiment on, and at least most of the ads on the front page were about finance, but after I typed my query the search page had an ad for and advertisers' links on small business finance, web-based procure-to-pay purchasing software, and business performance management software.  How sad that there wasn't a single link on learning more about CDOs.)

This got me thinking about book advertising: would it be possible for websites to advertise books in the same way that Google displays links to companies' sites using AdWords? (WSJ has published a number of popular guides to investing that could be listed, and since Murdoch owns a media empire, maybe he'll expand into the book publishing industry at some point - there is a sizable market for "lifelong learning", especially when it comes to retirement planning and investing.) The first issue for now is of course that books aren't businesses, and the actual businesses such as, Barnes & Noble and Borders compete with each other to sell the same books. What do you do if you have a deal with B&N and Amazon, and they can both ship the book tomorrow?

The second issue is: which book to choose? A search on "collateralized debt obligations" in the Books section of returned 255 results, but we have all read those reviews on that made our jaw drop to the keyboard: five stars for that piece of junk? Some books please laypeople but make specialists shrug. Some books are too mathematical for beginners but perfect for experienced professionals. I've bought books for the sole reason that staff writers at The Economist had written a rave review of them, but am not particularly swayed by laudatory reviews in the New York Times or the New York Review of Books (I stay away from books they pan, though). Others don't care about what The Economist thinks but buy whatever the New York Review of Books recommends.

Internet services such as LibraryThing allow users to view others' libraries, in particular others with whom they already have many books in common. (LibraryThing operates via donations and $20 membership per year for users who want to catalogue more than 200 books; a membership for life costs $35. There is no link on the website to buy books others have listed and given a high rating to - the site is run by booklovers rather than businesspeople.) My point is: if you want to make good recommendations, you need to identify people who behave like the user, and that means everybody has to register for free. (Actually, my point might well be that when recommends these or those books because other supposedly similar users have bought them, I would like to keep the suggestions of the people whom I agree are like me, and discard the others, but that's a whole other story.) The last issue is: would book advertisements be cost-effective? Is a $20 book, or even a $50 or $100 book (yes, some finance books cost that much) worth the space on the WSJ's website? Or should online ads remain about whole companies rather than the specific products they market?

All in all, I doubt that the Wall Street Journal will ever bother to put ads regarding finance- and business-related books on its website, but such ads certainly would be more relevant to users than banners about Chevron.

College Coach

I recently came across this article, entitled "I can get your kid into an Ivy," in BusinessWeek (October 22, 2007). The article focuses on the very lucrative business of college admissions coaching, where the entrepreneur profiled charges as much as $40,000 to help a student "package" himself for his college of choice, and has even started offering "college-admissions counseling for students in eighth grade." The article gives frightening statistics: "According to the Independent Educational Consultants Assn, 22% of first-year students at private colleges - perhaps as many as 58,000 kids - had worked with some kind of consultant."

With the numbers of high school students bulging, and those of spots in top colleges staying flat (see this old post of mine for details), it should come as no surprise that many parents are frantically trying to give their kids any edge they can think of. The admissions coach in the BusinessWeek article "selects classes for students, reviews their homework, and prods them to make an impression on teachers. She checks on the students' grades, scores, rankings. She tells parents when to hire tutors and then makes sure the kids do the extra work. She vets their vacation schedules. She plans their summers. And through it all, she is always available to contend with the college angst that can consume whole families."

And it works. We'd like to think it doesn't; we'd hope the kids who tell the truth get in and the ones who don't have their little white lies exposed, but the coaching does turn the kids into who they say they are in their application essay. And that frightens me. In the same way that it is always easier to get something right the first time around, rather than doing it wrong first and having to correct one's mistakes later (home improvement, anyone?),  I wonder what all this brain-washing does to kids who didn't know how to put their best foot forward in high school, and who have to figure out when they are in college - away from their parents, the SAT bootcamps and the Habitat for Humanity projects - what they want to be and do for the next couple of years.

Case in point: the student who was admitted as a music major into the college of his choice after, on the advice of his coach, positioning himself as a talented musician, ended up switching to religious studies. As much as I wish the kid all the best, I am guessing that among all college graduates, religion majors aren't the ones top companies are most after. So was the coaching worth the price? The parents can sleep in peace: they did what they could. But by having their child walk with crutches throughout high school, they increase the chance he will falter once he is on his own, eager to enjoy at last the carelessness of his youth, and clueless about what he really cares about.

Master of Debt

It's that time of the year again: seniors are deciding what to do after they get their degree - accept a job offer or go to graduate school? As an academic, I am obviously biased in favor of graduate school, and I've written elsewhere ("Of Many Masters", June 2007) about the current trend, where a growing number of students pursue Master degrees because of the heightened competition at the Bachelor level - getting an advanced degree is an easy, and efficient, way to distinguish oneself.

What I find interesting is the type of students who apply: a mix of stellar students who will receive a President's Scholarship or the equivalent (free tuition for one year) because their undergraduate GPA exceeds a ridiculously high threshold (at my institution, 3.75 out of 4.00 - no small feat, and yet some students do qualify), and of good students who want to show they are better than what their undergraduate GPA suggests, now that they can focus on the areas that really interest them. In other words, few very good students (those who will graduate with high honors but would need to pay for their Master degree) bother to send in an application. They are usually able to line up excellent jobs; they don't need the extra year - not the degree, and certainly not the debt. It is probably no coincidence that more students where I work stayed for a fifth year when the threshold to receive a fellowship was only 3.50: students do want the extra education, but they also want a manageable amount of debt.

Many students attend college on financial aid, and doctoral students receive Teaching and Research Assistantships to pay for their graduate studies, but students in Master programs are expected to pay out of their own pocket. If they are good enough to be admitted in doctoral programs, this leads to situations where students fake interest in a PhD in order to get their Master for free (which is exactly what happened with a large number of the NSF IGERT Fellows in our department - the National Science Foundation had provided fellowship money restricted to Americans who would enroll in the PhD program, without strings attached, and let's just say many of them didn't stick around for very long). Not only are serious PhD applicants rejected in order to admit students who don't plan to get a PhD, but top undergraduate students are scared away by the sheer amount of loans that will pile up on top of those they already have.

Of course it's harder to get politicians and administrators excited about Master-level work - students who enter the program typically don't know how to do research, and they leave at the very moment where their training would pay off for their advisor. Research conducted by Master students rarely ends up in the New York Times - but that's not the point of the degree. Masters of Science and of Engineering train students in the latest developments in a field and prepare them for that buzzword of today's education system, "lifelong learning". You cannot implement a novel technique if you don't know it exists, and even when you're self-taught it's hard to recognize that a model applies to your specific situation if a professor didn't give you pointers beforehand.

Is it a smart choice to have student enrollment driven by parents' financial means? In the long run, American competitiveness depends on the system's ability to teach the best students about cutting-edge approaches. While more and more PhD recipients have joined the ranks of industry, they often end up in R&D departments rather than in top management; in contrast, many managers hold Master degrees (and not just MBAs). To keep American business at the top, Master degrees should be made more accessible to worthy students; piling debt on them for short-term financial gain is not the answer.

On Quantitative Finance

Two weeks ago I found my copies of Technology Review (November/December) and The Economist (October 22/November 2) in my mailbox on the same day, and they happened to have one topic in common: the August melt-down of the quant funds.

In The Economist, Buttonwood's column "Heart of darkness: the peril for markets when computers miscalculate" explains that quantitative finance "uses computer models to find attractive stocks and to identify overpriced shares," a reasonable enough description. In August, the quant funds traded so often that they "set prices for everyone else" but "added to instability" because of their use of leverage. The point that caught my attention was Buttonwood's mention of an "arms race" between financial companies, each firm trying to outdo the others by implementing faster and faster computerized trading systems (for instance, to dump their shares when necessary before competitors have time to make a move). I follow the topic of computerized trading closely because Lehigh has invested heavily in high-performance computing and a Lehigh alum now on the university's board of trustees, George Kledaras, pioneered such systems when he founded FIX Flyer in the 1990s. Buttonwood's column also mentions that "parts of the portfolio that were previously uncorrelated suddenly fell in tandem," a phenomenon that Andrew Lo from MIT's Laboratory for Financial Engineering discussed as early as March at a finance talk organized by the MIT Club of New York. (Saying that this took the financial markets by surprise is a bit disingenuous, but I guess that eases the pain of seeing billions go up in smoke.) Computerized trading has helped financial firms make substantial gains and it is unlikely that they will stop relying on quant funds any time soon. (From "What crisis?" in the September 16 edition of The Economist: "Some quant funds [...], such as Renaissance Technologies, gained back the ground they lost early in the month [of August]" and "even after August's losses, [the industry] is still up 6.2% on the year".) What is the solution, then? In Buttonwood's words: "Quants will adjust their models." The world will go on.

Technology Review's "The Blow-Up" takes another look at "the quants behind Wall Street's summer of scary numbers." It defines quantitative finance as "a wide-ranging discipline that includes, among other things, the pricing of financial instruments, the evaluation of risk, and the search for exploitable patterns in market data." James Simons's hedge fund, Renaissance Technologies, again plays a prominent role in the (this time much longer) article, with a stronger focus on Simons himself, physicist extraordinaire before he left academia. (Renaissance's webpage, with exactly two links: "Job Openings" and "Locations", epitomizes the secrecy surrounding hedge funds.) At several places in the article, the author emphasizes the need for high-performance computing systems: "There are quants at hedge funds, crunching years of market data to develop trading algorithms that computers execute in milliseconds," "With the increasing power of computers, [quants] have developed other, more processing-intensive methods of valuing derivatives; in Monte-Carlo simulations, for instance, powerful computers model the performance of a stock millions of times and then average the results." The author also points to "another developing frontier, high-frequency trading, which is a fantastically exaggerated form of day trading. The computer looks for patterns and inefficiencies over minutes or seconds rather than hours or days." The author, Bryant Urstadt, predicts that "high-frequency trading is likely to become more common as the New York Stock Exchange gets closer and closer to a fully automated system." In the meantime, what happened to the quant funds in August? Urstadt's conclusions match Lo's argument in New York in March: "That the quants were, apparently, long on the same strong stocks and short on the same weak stocks was a result of a number of strategies, pairs trading among them."
He also suggests that "the quants' models simply ceased to reflect reality as market conditions abruptly changed. After all, a trading algorithm is only as good as its model." In that respect he reaches the same conclusion as Buttonwood: quants' models need to be adjusted. Whether they will be changed for the better, and whether the quants will succeed in mitigating coupling between strategies next time, remains an open question.
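The Monte Carlo idea the article alludes to is easy to sketch: simulate a stock's evolution many times under some assumed model, then average the outcomes. Here is a minimal illustration under geometric Brownian motion; all parameter values and names below are my own, not the article's.

```python
import math
import random

def mc_expected_price(s0=100.0, mu=0.05, sigma=0.2, t=1.0,
                      n_paths=100_000, seed=42):
    """Monte Carlo estimate of a stock's expected price at time t,
    assuming geometric Brownian motion: simulate many terminal prices,
    then average them."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        # One standard normal draw gives one simulated terminal price.
        total += s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
    return total / n_paths

# The analytic expectation is s0 * exp(mu * t), about 105.1 here;
# with 100,000 paths the simulated average lands very close to it.
```

The same machinery, with a payoff function applied to each simulated price before averaging, is how Monte Carlo methods price derivatives, which is why the article ties them to raw computing power.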

But I enjoyed most of all "On Quants," a short rebuttal to "The Blow-Up" written by an MIT mathematics professor, Dr. Daniel Stroock, who argues that "[quants'] mission is to blindly keep those stocks moving, not to pass judgment on their value, either to the buyer or to society. Thus, [the author] finds it completely appropriate that quants now prefer the euphemism 'financial engineer.' They are certainly not 'financial architects.' Nor are they responsible for the mess in which the financial world finds itself. Quants may have greased the rails, but others were supposed to man the brakes." While the dilution of responsibility in the financial world is becoming a tad worrisome, I liked Stroock's advocacy for the term "financial engineer." Memoirs of academics who made it big in quantitative finance, such as Emanuel Derman's My Life as a Quant: Reflections on Physics and Finance, have popularized the term "quant," I suspect in great part because of the aura of secrecy surrounding the job title, which echoes quantum physics - and if quantum physics doesn't deserve awe and respect from us common mortals, what does? Well, as it turns out, quants have little in common with quantum physicists, and the academic programs training said quants have overwhelmingly chosen the term "financial engineering" instead to describe what they are doing (see for instance MIT's Laboratory for Financial Engineering, Columbia's Master of Science in Financial Engineering, or Princeton's Department of Operations Research and Financial Engineering - the focus of Lehigh's program is somewhat more computational than those and it is called Master of Science in Analytical Finance; similarly, Carnegie-Mellon's Master program is in Computational Finance). "Financial engineer" sounds a lot less glamorous than "quant," but a lot closer to the mark - and maybe then, when the job title sounds a bit less grandiose, will investors stop believing that quants' models never fail.

Valuing the Good and the Bad

I have already written about the fact that the median captures the general trend of a distribution more accurately than the mean when that distribution has a long right tail, for instance for salaries. (For those of you who haven't done any math in a while, the median is defined so that 50% of the values are to its left and 50% are to its right, while the mean is simply the sum of all the values divided by their number; as a result a very large number is not going to affect the median - we only care about the number being to the right, not about how far to the right - but will distort the mean. The median is also called the 50% quantile because the probability of getting a value lower than or equal to the median is 50%.) The motivation in favor of using the 50% quantile as opposed to the average is that we don't want to be too optimistic in our estimates.

On the other hand, (95%) Conditional Value-at-Risk, which is the average of all the values below or at the 5% quantile, has emerged as an important tool in risk management and is preferred by many academics to (95%) Value-at-Risk, which is the 5% quantile of a distribution and was proposed by JPMorgan in the early 1990s (see "Too clever by half" in The Economist dated January 22, 2004) - as an example, if you have 100 data points, 95% VaR is the value of the fifth lowest data point while 95% CVaR is the average of the five lowest values. VaR is now enshrined in Basel 2, but only tells you that in 95% of the cases, the portfolio's worth will not fall below the VaR - it doesn't tell you anything about how bad things can get in the worst 5% of cases. CVaR, by averaging over the 5% worst scenarios, addresses that issue.
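The 100-data-point example can be computed in a few lines (a minimal sketch; the toy earnings data are my own):

```python
def var_cvar(samples, alpha=0.05):
    """Empirical VaR and CVaR at level alpha: VaR is the alpha-quantile
    (the k-th lowest of the sorted sample, k = alpha * n), and CVaR is
    the average of those k worst values."""
    xs = sorted(samples)
    k = max(1, int(len(xs) * alpha))    # k = 5 when len(xs) == 100
    var = xs[k - 1]                     # the fifth lowest data point
    cvar = sum(xs[:k]) / k              # average of the five lowest
    return var, cvar

earnings = list(range(1, 101))          # toy portfolio outcomes 1..100
var95, cvar95 = var_cvar(earnings)      # var95 = 5, cvar95 = 3.0
```

As the text says, VaR reports only the threshold (5), while CVaR (3.0) also reflects how bad the outcomes below that threshold are.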

CVaR has also been shown to have a desirable property called subadditivity, which means that breaking up a company into several units does not decrease its risk as measured by CVaR (intuitively, diversification should be good for a company, as it is likely that some divisions will perform better than expected and some worse, mitigating the risk for the company as a whole). This is not true of VaR. The classical example, for which I claim no credit whatsoever, is as follows: consider an insurance company with two divisions. Division A - let's call it the hurricane division - pays out 0 with probability 0.96 and 100 million (that's earnings of -100 million) with probability 0.04. The payout of Division B - the earthquake division - obeys an identical distribution, and is independent of the payout of Division A because earthquakes and hurricanes occur independently. Then it is easy to see that the 95% VaR of both divisions, valued as stand-alone companies, is 0. But the payout made by the whole insurance company made of both divisions is 0 with probability (0.96)^2=0.9216, 100 million (earnings of -100 million) with probability 0.0768, and 200 million (earnings of -200 million) with probability (0.04)^2=0.0016. So the 95% VaR for the whole company (5% quantile) is at -100 million. As a result, if you use VaR to quantify how much a financial company should have in its reserves, as mandated by Basel 2, the whole company must have 100 million in its reserves but the stand-alone divisions would have a requirement of 0. This gives an incentive to break the company into two to decrease risk measured by VaR, which flies in the face of diversification.
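The arithmetic in the insurance example can be checked directly (a small sketch; earnings are in millions, and the quantile convention matches the one used above):

```python
def quantile(dist, alpha=0.05):
    """alpha-quantile of a discrete earnings distribution, given as
    (value, probability) pairs: the smallest value v such that
    P(X <= v) >= alpha."""
    cum = 0.0
    for value, prob in sorted(dist):
        cum += prob
        if cum >= alpha:
            return value

# Each division pays 0 with probability 0.96, 100M with probability 0.04.
division = [(-100, 0.04), (0, 0.96)]
# The whole company combines two independent copies of that risk.
company = [(-200, 0.0016), (-100, 0.0768), (0, 0.9216)]

# 95% VaR is 0 for each stand-alone division but -100 for the company:
# merging the divisions *raises* measured risk, the subadditivity failure.
stand_alone_var = quantile(division)    # 0
combined_var = quantile(company)        # -100
```

Since P(earnings <= -100) is only 0.04 for one division, the loss stays invisible to a 5% quantile; for the combined company that probability grows to 0.0784, pushing the quantile down to -100.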

Here is what amuses me about all this. When we have to put a number on distributions with long right tails, like those for salaries, we advocate the use of the median (the 50% quantile) to avoid being overly optimistic. But when it comes to risk management, where we focus on the left of the distribution - low earnings - rather than its right - high salaries - taking the mean (of the worst cases) is suddenly back in fashion, because we want to incorporate the worst-case values in the long left tail and get a good idea of how dire the situation can become. Of course it is impossible to capture the main features of a distribution with just one number, but rather than promoting one statistic over the other in all cases simply because both deal with random variables, we really should use the mean when we are interested in outcomes we are averse to and the median when we investigate desirable situations.
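The contrast is easy to see on simulated data; the sketch below uses made-up lognormal "salaries" purely for illustration:

```python
import random
import statistics

random.seed(0)
# Long right tail, as with salaries: most values moderate, a few very large.
salaries = [random.lognormvariate(11, 0.8) for _ in range(10_000)]

print(statistics.median(salaries))  # the "typical" salary
print(statistics.mean(salaries))    # pulled upward by the long right tail
```

On data like this the mean exceeds the median, which is exactly why the median is the honest summary for salaries - and why, for losses, the tail-sensitivity of the mean (applied to the worst cases, as in CVaR) is precisely what we want.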

Services Science and Hillary Clinton

In my last post, I focused on Presidential Candidate Hillary Clinton's agenda to increase the amount and number of graduate research fellowships awarded by the National Science Foundation. But this represents only a small part of her proposal. As interesting, from my perspective, is Clinton's proposal to create a Services Science Initiative to encourage innovation in the services sector, which "now accounts for approximately 80% of the U.S. economy". The idea is to "promot[e] innovation and productivity in the services sector in the same way that electrical engineering, for example, has led to technological advances in the development of the computer chip." Clinton's program would be modeled after the National Nanotechnology Initiative, which provides funding to researchers as well as educational resources for those interested in nanotechnology. While maintaining America's lead in innovation is not a novel preoccupation for politicians (see for instance President Bush's American Competitiveness Initiative), I believe few of them besides Clinton have recognized the need for a focus on services.

Services science is a new discipline that has gained wider recognition over the last few years after IBM started promoting the concept and developed partnerships with universities (disclosure: I am the recipient of a 2007 IBM Faculty award in Service Science, Management and Engineering). Other companies advocating for service-science research include Oracle, Accenture, Hewlett-Packard, Cisco, Microsoft and Xerox. Of course it is questionable whether services management, or any discipline that addresses customer behavior, will ever qualify as a science, but it also has the potential to increase the practical relevance of quantitative decision-making - many companies find mathematical models difficult to implement because they do not capture the "soft", people-driven side of management. Service science, which combines ideas from operations research, computer science, business strategy and the like, will bring academic research more in line with industry needs by recognizing the changes in the US business world, and in the words of Clinton's press release, "will improve the competitiveness of American business." Or at least we are looking forward to the challenge.

Graduate Research and Hillary Clinton

A couple of days ago I stumbled onto Presidential Candidate Hillary Clinton's innovation agenda, thanks to a post on John Hunter's blog. A notable element of Clinton's proposal is to increase the number and amount of the Pre-doctoral Graduate Research Fellowships administered by the National Science Foundation; specifically, she would triple their number (from 1,000 to 3,000) and increase their annual amount by 33% (from $30,000 to $40,000 per year).

While this is welcome news, I would rather see the NSF budget increased (which Clinton plans to do as well) so that more principal investigators receive funding, than have the number of graduate fellowships tripled. My main complaint with the graduate fellowships is that most students apply in the Fall semester of their senior year in college, before they have been admitted to graduate school, and sometimes during the Fall semester of their first year of graduate studies. Don't get me wrong: the students who apply to the fellowship program are of such a caliber that no one doubts their admission to their graduate school of choice. But how can anyone write a thoughtful plan of research (a required component of the application package) before actually finding an advisor and exploring topics with him or her? It's one thing to like, say, particle physics in college, quite another to do research at the graduate level in the area.

These days no self-respecting doctoral program would offer admission to PhD students without guaranteeing funding for at least one year, and I'd much rather see doctoral students apply for the fellowship in their second or third semester of graduate school, after securing an advisor, performing some preliminary work and developing a detailed research program in conjunction with that person, who after all is much better suited than a college student to identify the "hot areas" in a field. (Also, picking an advisor who is "right" for the student is a tricky business. I'll write more on that some other day; for now I'll just mention that advising styles vary widely, and the main reason why high-potential students sometimes flounder in their academic careers is that their advisor's style doesn't match their own needs.) It would certainly be interesting to see how many of the graduate fellowship recipients ended up pursuing research in the area they had described in their essays. Delaying the application would also solve the problem of the fellowship duration: very few students ever finish a PhD in three years, leaving their advisors scrambling to find them funding for their last one or two years of studies - those students deserve better than teaching assistantships. If the amount of the fellowship is indeed increased by 33%, as indicated in Clinton's press release, and if students keep applying for the fellowship during their senior year, I would much prefer to see the fellowship duration increased by one year, in order to smooth the awardees' path to graduation. At least in my field, finishing in four years is doable; finishing in three is not.

Another issue at stake is of course one of "brand dilution": those fellowships are the most prestigious that can be awarded to Americans and US permanent residents. Is the NSF really turning down 2,000 stellar applicants every year? I couldn't help but wonder when I read about the 3,000-awards target. Maybe the increase could be phased in over ten years, but I found it hard to believe in the value of such a big jump... at least until I found the statistics. It turns out the NSF does keep track of the worthy candidates it could not award fellowships to due to lack of funds, by giving them "honorable mentions". So it seems reasonable to increase the number of awards to the number of current awards plus honorable mentions. Lo and behold, according to the NSF's website, this total number was 3,017 (1,024 awards and 1,993 mentions) in 2005, 2,775 (909 awards and 1,866 mentions) in 2006 and 2,332 (920 awards and 1,412 mentions) in 2007. So tripling the number of awards would indeed be a reasonable decision in light of the quality of the current candidates; sometimes politicians do their homework rather than throwing eye-popping numbers around for the sole purpose of making headlines. Who would have thought.