
October 2007

There is "age" in average

I have already written on how misleading the concept of average is for highly skewed distributions, for instance the distribution of salaries at Goldman Sachs. A recent post by XM Carreira reminded me of another source of distortion: the fact that executives with (often widely) different years of experience are lumped together to produce that magical number, the average salary. The post links to an article in New Civil Engineer Magazine entitled "Elderly graduates make nonsense of salary survey." In particular, there were complaints that a salary survey "quoted the average graduate salary rising 19% over the past two years to in excess of 34,000 British pounds," when in truth "graduate civil engineering salaries fall behind those of other graduates" and some survey respondents were in their 50s and 60s, the eldest being 77. The article points out: "With age comes increasing responsibility and an increased salary, which obviously skews the figure." It had always been obvious that neither the janitors nor the secretaries at Goldman Sachs had received anywhere close to the average bonus publicized in the press, but it is worth remembering that the business analysts and fresh-out-of-B-school associates probably did not either.

The issue of age in statistics is also touched upon in Basic Economics, where Thomas Sowell points out that: "When some people are born, live and die in poverty, while others are born, live and die in luxury, that's a very different situation from one in which young people have not yet reached the income level of older people, such as their parents. [...] Because of the movement of people from one income bracket to another over the years, the degree of income inequality over a lifetime is not the same as the degree of income inequality in a given year." (p.190) Sowell also quotes another, rarely-discussed source of distortion in census numbers: the number of individuals that make one household. "Family income or household income statistics can be especially misleading as compared to individual income statistics. An individual always means the same thing - one person - but the sizes of families and households differ substantially from one time period to another, from one racial or ethnic group to another, and from one income bracket to another. For example, a detailed analysis of U.S. census data showed that there were 39 million people in the bottom 20 percent of households but 64 million people in the top 20 percent of households." (p.191) (There are a lot more single mothers in the bottom bracket than in the top one, and a lot more two-parent families with two or more children in the top bracket than in the bottom one.) As a last example, "The sizes of families and households have differed not only from one income bracket to another at a given time, but also have differed over time. [...] Real income per American household rose only 6 percent over the entire period from 1969 to 1996, but real per capita income rose 51 percent over the same period. 
The discrepancy is due to the fact that the average size of families and households was declining during those years [fewer children], so that smaller households were now earning about the same as larger households had earned a generation earlier." (p.192)
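
Sowell's last point is easy to check with back-of-the-envelope arithmetic. In the sketch below, the 1969 baseline income and household size are invented placeholders; only the two growth rates come from the passage:

```python
# Back-of-the-envelope check of the household-size effect Sowell describes.
# The 1969 baseline figures are invented; only the growth rates
# (+6% per household, +51% per capita, 1969-1996) come from the text.

household_income_1969 = 50_000                        # hypothetical dollars
household_income_1996 = household_income_1969 * 1.06  # +6% (p.192)

people_per_household_1969 = 3.2                       # assumed starting size
per_capita_1969 = household_income_1969 / people_per_household_1969
per_capita_1996 = per_capita_1969 * 1.51              # +51% (p.192)

# Household size implied by the two growth rates taken together:
implied_size_1996 = household_income_1996 / per_capita_1996
print(f"implied 1996 household size: {implied_size_1996:.2f} people")
# 3.2 * 1.06 / 1.51 is about 2.25, i.e. households shrank by roughly 30 percent
```

Whatever baseline you pick, the two growth rates force the average household to shrink by about 30 percent, which is exactly the "fewer children" effect in the quote.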

Sometimes it feels like numbers should come with a little label affixed to them: "trust at your own risk" - at least until a greater part of the population becomes math-savvy enough to challenge the assumptions that went into producing said numbers. It wouldn't take much; understanding alternative measures and asking for them as a comparison would already go a long way: you're giving me the mean for the whole company, but what's the median for people most like me? We can talk at length about attracting more students to science and engineering, but everybody - everybody - should be trained to understand what the numbers they read in the paper really mean.
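
The mean-versus-median comparison takes only a few lines to demonstrate; all the salaries below are invented:

```python
import statistics

# Toy salary list with one heavy outlier (all numbers invented):
# nine ordinary employees and one star trader.
salaries = [45_000] * 5 + [90_000] * 4 + [5_000_000]

mean = statistics.mean(salaries)
median = statistics.median(salaries)
print(f"mean: {mean:,.0f}  median: {median:,.0f}")
# The single outlier drags the mean far above what anyone "typical" earns;
# the median barely notices it.
```

Here the mean is 558,500 while the median is 67,500 - nobody in the toy firm actually earns anything like the average.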

Internet2 and Live Performances

Two weeks ago I drove to Philly (about an hour and fifteen minutes south of where I live) for the all-Beethoven program of the Philadelphia Orchestra. There was simply no way I was going to miss a live performance of Beethoven's Fifth, which happens to be my favorite symphony, within driving distance of my town. What made this concert a bit unusual was that the Philadelphia Orchestra has pursued educational initiatives quite aggressively, and I had learnt just a couple of days earlier that the very concert I was planning on attending would be broadcast at Lehigh's Zoellner Arts Center (Lehigh being the university where I work) over an Internet2 connection. I didn't regret buying the ticket, though, since I really wanted to attend the performance in person.

The concert was a matinee and the trip to the Kimmel Center (a gorgeous building, if you have not seen it) was uneventful; the performance itself was fantastic. I lingered in the gift shop afterwards to buy the Philadelphia Orchestra's recording of Beethoven's Fifth for the drive back home, because I had forgotten my recording by the Wiener Philharmoniker, and retrieved my car from the garage around 4.45pm. And this was a Friday. Sometimes living in a small town makes you forget about traffic jams in the big cities... What do you mean, people leave work early on Fridays? Can't they just wait until I'm gone, huh? I pulled out of the parking lot and, after a few minutes of hope (maybe things will get better after the next traffic light, or the one after that), it finally dawned on me that I wasn't going to get home in anywhere close to one hour and fifteen minutes. One hour later I was still trying to merge onto 76-W, and a live performance of the Philadelphia Orchestra from the comfort of Lehigh University's Zoellner Arts Center didn't seem something to look down upon any more.

An hour and a half into the crawl home, with Beethoven's Fifth on the stereo, staring at the taillights of the car in front of me and nowhere close to 476-N, I started pondering philosophical questions about what makes a performance "live". Before radio even existed, you had to sit in the performance hall with the musicians; there simply was no other choice. Radio stations have broadcast live performances for decades now, and yet nobody pretends the experience comes anywhere close to attending the concert, because of obvious limitations in sound quality. Performances on television go one step further in giving the TV audience the impression of attending the concert, and the view is definitely better than from many seats. At the same time, it all fits into a television screen which, even plasma-sized, doesn't "quite" match the size of the stage. So until now, buying a ticket to the actual performance had seemed the clear winner.

This has begun to change with the advent of high-definition concerts. The Lehigh broadcast (which was completely free) was made possible by the Internet2 consortium, which "provid[es] both leading-edge network capabilities and unique partnership opportunities that together facilitate the development, deployment and use of revolutionary Internet technologies" in areas as diverse as health, science, network research, and the arts. The consortium facilitates the broadcast of performance events such as the Philadelphia Orchestra concert, thus allowing music at its best to reach more households - or classrooms - that might not have the means to buy concert tickets, even at student rates. (That, and the fact that student tickets tend not to be in the best sections of the auditorium.) Some of the arts-related projects realized by the consortium are presented in this brochure. I would love to see more middle and high schools take advantage of these opportunities to introduce their students to classical music and opera.

Speaking of opera - it should come as no surprise that the Metropolitan Opera itself is spearheading a foray into high-definition broadcasts. Patrons buy tickets at participating movie theaters and get to watch the whole performance live with the highest image and sound quality available. Last year (the first year of the program), six performances were broadcast to a total of 325,000 viewers; this year there will be eight performances and, maybe, half a million viewers, generating a new source of revenue for the Met and making the best in opera accessible to people all over the country - more accessible, in fact, than that quintessential American genre, the musical, still limited to New York's Broadway and the few big cities where the actor-singers stop on tour.

Does Business-School Research Matter (Part 2)?

In a previous post I discussed the AACSB's recent report on the impact of research in business schools. The finance industry has adopted many theoretical results that were considered groundbreaking in their time - the Markowitz mean-variance model, for instance, and the Black-Scholes option pricing formula. Operations management research, however, has not received quite the same validation from practitioners. This is partly because operations managers do not share a (more or less) common purpose the way finance managers do; after all, most of finance is about figuring out how much of each risky or riskless asset to buy, so it makes sense that the Markowitz model attracted a large following.

In contrast, companies dealing with operations management issues belong to many different industries: they produce durable goods or perishable items, face long or short lead times, deal with many suppliers or just a few, and so on. It is much harder, if not impossible, to find a common denominator that would make a specific research thrust relevant for everybody, and it seems that the "low-hanging fruit" was plucked a long time ago: linear programming, anyone? (It has become so mainstream that even MBA students are taught it.) I found here a list of the ten Management Science papers voted the most significant by the members of the Institute for Operations Research and the Management Sciences [INFORMS]. (Management Science is one of INFORMS's flagship journals, with a flavor of quantitative models in business; since INFORMS members are mostly academics, however, it is possible that the papers they chose are completely ignored by industry.) There is no doubt that these papers pioneered new areas and changed the way researchers approach network flows, integer programming, or game theory, but among the ten listed I would say optimization-minded, math-savvy businesspeople are vaguely aware of four at most:

  1. Dantzig, GB. 1955. Linear Programming Under Uncertainty. Management Science 1(2)197. (Managers face uncertainty, and they often must make decisions before the value of the uncertain parameters has been revealed.)
  2. Wagner, HM, TM Whitin. 1958. Dynamic Version of the Economic Lot Size Model. Management Science 5(1) 89. (If you face known but time-varying demand over time that you must meet, you can find your optimal sequence of orders using a simple procedure.)
  3. Clark, AJ, H Scarf. 1960. Optimal Policies for a Multi-echelon Inventory Problem. Management Science 6(4) 475. (If goods must transit through a series of suppliers before they reach you, and costs have the right structure, then the network can be decomposed as a series of single-installation inventory management problems, for which the optimal policy is known.)
  4. Lee, HL, V Padmanabhan, SJ Whang, 1997. Information Distortion in a Supply Chain: The Bullwhip Effect. Management Science 43(4) 546. (This is hands-down the paper most likely to be remembered by practitioners. Not incidentally, it is also the only one that found its way, in a modified version, into the general management journals - in that case Sloan Management Review. An introduction to the bullwhip effect is available here; the paper explains that the volatility of orders increases as you move up the chain, away from customer demand, because suppliers' orders fluctuate.)
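
The "simple procedure" in the second paper can be sketched as a small dynamic program. Here is a minimal Python version of the underlying recursion, with demands and costs of my own invention (not the paper's notation):

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Minimum-cost ordering plan for known, time-varying demand
    (the dynamic program behind Wagner and Whitin's procedure).

    Each order incurs setup_cost; carrying one unit for one period costs
    holding_cost. Returns (min_cost, periods_in_which_to_order).
    """
    T = len(demand)
    best = [0] * (T + 1)       # best[t] = min cost to cover periods 1..t
    choice = [0] * (T + 1)     # period of the last order in that plan
    for t in range(1, T + 1):
        options = []
        for j in range(1, t + 1):   # last order placed in period j covers j..t
            hold = sum(holding_cost * (i - j) * demand[i - 1]
                       for i in range(j, t + 1))
            options.append((best[j - 1] + setup_cost + hold, j))
        best[t], choice[t] = min(options)
    orders, t = [], T
    while t > 0:                # walk back through the optimal choices
        orders.append(choice[t])
        t = choice[t] - 1
    return best[T], sorted(orders)

# Illustrative numbers, not from the paper:
cost, order_periods = wagner_whitin([20, 50, 10, 50, 50],
                                    setup_cost=100, holding_cost=1)
print(cost, order_periods)   # 320 [1, 4]
```

With a setup cost of 100 and unit holding cost of 1, it is cheapest to order twice, in periods 1 and 4, rather than pay a setup every period or hold inventory across the whole horizon.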

The reason the fourth paper seems more relevant to real-life management is that its insights can be summarized without formulas: its appeal does not lie in an algorithm or quantitative formulation per se (although, obviously, the insights could not have been established without mathematical models to prove them with). In what can only be seen as judicious timing given the AACSB's report, Management Science recently began requiring that every paper be accompanied by management insights oriented toward business professionals - b-school research is getting better at marketing the relevance of its results. Now let's just hope that practitioners indeed subscribe to that kind of journal and will read the insights written for them.
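
The bullwhip effect itself can be reproduced in a toy simulation. The sketch below uses a standard textbook mechanism - each stage forecasts demand with a moving average and follows an order-up-to policy - rather than Lee et al.'s exact model, and every parameter is invented:

```python
import random
import statistics

def stage_orders(incoming, lead_time=2, window=4):
    """Orders one supply-chain stage places upstream, using a moving-average
    demand forecast and an order-up-to policy (a common textbook model,
    not the exact setup of Lee et al.)."""
    orders, history, prev_level = [], [], None
    for d in incoming:
        history.append(d)
        forecast = statistics.mean(history[-window:])
        level = (lead_time + 1) * forecast        # order-up-to level
        q = d if prev_level is None else max(0.0, level - prev_level + d)
        orders.append(q)
        prev_level = level
    return orders

random.seed(7)
consumer_demand = [100 + random.gauss(0, 10) for _ in range(2000)]
retailer_orders = stage_orders(consumer_demand)    # what the retailer orders
wholesaler_orders = stage_orders(retailer_orders)  # what the wholesaler orders

for name, series in [("consumer", consumer_demand),
                     ("retailer", retailer_orders),
                     ("wholesaler", wholesaler_orders)]:
    print(f"{name:>10}: std dev = {statistics.stdev(series):5.1f}")
# volatility grows at every step away from final consumer demand
```

Even though consumer demand is mild white noise around a constant mean, each stage's forecasting-and-ordering logic amplifies the variability it passes upstream - the bullwhip in miniature.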

Computer-based, data-driven decision-making...

...will rule the world. At least that's what Ian Ayres of Yale University says in his book Super-Crunchers, which The Economist reviewed recently (I haven't read it). Readers' comments have been lukewarm - three and a half stars on average - but don't provide many insights into why the book fails to deliver, and its thesis is indeed controversial: in The Economist's words, "The sheer quantity of data and the computer power now available make it possible for automated processes to surpass human experts," and "[the book] presents a convincing and disturbing vision of a future in which everyday decision-making is increasingly automated, and the role of human judgment restricted to providing input to formulae." Which is obviously exaggerated, but plays on managers' fears of being replaced by machines and will probably make the book's sales soar.

Of course, as managers come to rely more and more on quantitative models, the risk increases of implementing a computer-generated strategy based on erroneous assumptions, because no one really understands what is going on inside the black box; I've written about this topic at length. Industry practitioners, in particular in finance, have made a number of mistakes recently (although it is unlikely that their models were data-driven, given the novelty of that approach). Large-scale, data-driven decision-making is gaining ground as an innovative framework that could decrease the number of modeling mistakes, precisely because it requires fewer assumptions from the decision-maker: the National Science Foundation has made cyberinfrastructure (large-scale computing - see also this post) one of its core thrust areas and established an initiative to fund research on dynamic, data-driven systems. Since my NSF-funded research is on the topic, and data-driven management is touted as more realistic and more relevant to industry than traditional, probability-driven models, I am obviously pleased to see it drawing some attention outside the ivory tower of academia.

Revenue Management at the Opera

On Thursday evening I attended a talk by the artistic director of the Metropolitan Opera in New York. The Met faces an interesting revenue management problem: it prefers to schedule revivals, i.e., performances of an existing production of a given opera, because the costs of building the sets and so on are sunk (they have already been incurred once and for all), but the public is biased in favor of new productions, as no one is particularly interested in seeing the same production twice.

The issue, of course, is that new productions require more rehearsal time for the singers, which means that the Met has to leave more evenings "dark" (their slang for no performance) in the season calendar, in contrast with the good ol' times when it ran seven performances a week no matter what. So there is a trade-off between relinquishing profits by rehearsing rather than performing, and increasing attendance by showing a new production. Interestingly, the revenue management problem faced by the Met has a lot more to do with determining the right mix of old vs. new productions than with pricing tickets correctly.
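
The trade-off can be caricatured in a few lines of code. Every number below is invented, and the only point is the shape of the problem: each premiere draws bigger audiences (with diminishing novelty value as the season fills with them) but costs dark rehearsal nights and a sunk production cost.

```python
# Deliberately stylized model of the old-vs-new production mix; all numbers invented.
SEASON_NIGHTS = 210     # hypothetical performance slots in a season
DARK_PER_NEW = 2        # extra dark (rehearsal) nights per new production
REV_REVIVAL = 0.55      # expected revenue per revival night (full house = 1.0)
NEW_PREMIUM = 0.45      # extra draw, per night, of the first new production
DECAY = 0.8             # each additional premiere excites audiences less
NIGHTS_EACH = 10        # performances a production gets
SUNK_COST_NEW = 1.0     # sets and costumes, in full-house-night units

def season_profit(n_new):
    perf_nights = SEASON_NIGHTS - DARK_PER_NEW * n_new
    base = REV_REVIVAL * perf_nights   # every night draws at least like a revival
    premium = sum(NIGHTS_EACH * NEW_PREMIUM * DECAY ** k for k in range(n_new))
    return base + premium - SUNK_COST_NEW * n_new

best = max(range(11), key=season_profit)
print(f"best mix: {best} new productions, profit {season_profit(best):.1f}")
```

With these made-up parameters the optimum is an interior mix - a few new productions, the rest revivals - which is exactly the kind of answer a mix problem, as opposed to a pricing problem, produces.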

Should doctoral studies be shorter?

An article in the New York Times two weeks ago (October 3) discussed "ways to shorten the ascent to a PhD," and mentioned some frightening statistics: "The average student takes 8.2 years to get a PhD; in education, that figure surpasses 13 years. Fifty percent of students drop out along the way, with dissertations the major stumbling block." But of course the picture is a lot murkier than that - the thought that all these students drop out because they cannot complete their dissertation is downright naive. (The idea, put forward by an education researcher, that the problem lies in professors not "clarify[ing] what they expect in a dissertation" and students "trying for a degree of perfection that is unnecessary and unobtainable" is downright ridiculous.) In truth, many students enter doctoral programs because they do not know what they want to do after college and are eager to postpone their entrance into the real world - many trudge along for a few years while they take classes, then drop out at the first setback in their research; others, especially in engineering, leave the program once they realize there are plenty of job opportunities for people without PhDs (typically, those students were strongly encouraged by their professors to apply to graduate school but didn't develop sufficient self-motivation along the way).

The dissertation only becomes a "stumbling block" on the road to the PhD when the advisor runs out of money (or out of patience with the student) before the student has done enough research to write a thesis; in that case the student might be forced to take a full-time job, hoping to complete the thesis in his or her spare time, which in general proves tricky. (The university where I work, Lehigh, requires students who enter a graduate program to complete their studies within ten years; last year a student who had left to follow her husband to the other end of the country but never managed to complete her thesis - the university where she worked as an adjunct made her teach tons of courses - came back full-time for precisely that reason.) In that respect, universities' desire to provide more funding is well placed, and I do believe that the number of semesters spent on teaching assistantships is positively correlated with the total length of doctoral studies - you always get more research done when it is the only thing you have to do. On the other hand, PhD candidates in education take so long to complete their studies because their research isn't marketable ("fundable") to the same extent that science and engineering research is - you don't need a PhD in education to be an educator - and I can't even say off the top of my head what their job prospects are, although if you want to write books on how best to educate students it certainly helps to have "PhD" next to your name on the title page. (And then you get to call yourself an education researcher and have people listen to you when you put forward silly ideas like the one mentioned above.) So to the extent that the length of doctoral studies in education turns students away from a field with limited prospects, it provides the right incentive, weeding out the faint of heart and keeping only the most motivated students.

Besides, the only A.B.D. people I've heard of who dropped out of their doctoral program did so either because they had a falling-out with their advisor or because their experiment-based project did not succeed and completing a thesis would have required them to start another project and delay graduation by two or three more years. So if universities really want to shorten the time it takes their students to get a PhD, they should implement mechanisms for students to switch advisors without the new advisor fearing retaliation from the old one, for instance, or for students to appeal their advisor's decision that the work isn't yet good enough. Also, many projects in science fail because this or that technique doesn't achieve the expected goal; students (and their advisors) currently see this as time wasted, but such "non-results" should also be considered important, publishable scientific advances, as they can help other teams of scholars by telling them not to go down that road. When all is said and done, the real issue should not be shortening the PhD but reducing the (currently absolute) power an advisor has over his or her students, or at least reducing opportunities for advisors to misuse that power. But that too opens a whole can of worms: if time-to-degree starts being monitored closely, some students might slack off towards the end of their studies because their advisor feels outside pressure to let them graduate no matter what, to keep that statistic down. That debate isn't coming to an end any time soon.

Does business-school research matter?

The period to comment on the AACSB's controversial report "The Impact of Research" (in b-schools) is drawing to an end. AACSB is the Association to Advance Collegiate Schools of Business, which accredits business programs; in August, it released a report that received widespread coverage in the business press (see for instance articles in The Economist and BusinessWeek) because it recommends, in The Economist's words, that "the schools be required to demonstrate the value of their faculties’ research not simply by listing its citations in journals, but by demonstrating the impact it has in the workaday world."

The debate isn't new, and the AACSB website provides a good summary of its progression since the 1950s up to the late 1990s (in the 1950s, "criticism of practical research created insecurity in professional schools, and business schools suffered because they felt they were not fully accepted in the community of scholars. Business schools [...] overcompensated because they wanted to feel like equals to their peers in mathematics and physics, and they forgot they were members of a profession") and of the business models that have, indeed, found their way into the real world. Sadly, most of those, such as the Black-Scholes option pricing model, share the same application domain, namely, finance. Some of the other approaches cited, such as multivariate statistical techniques to estimate house prices, really cannot be called research products - they're just quantitative methods that were used by other disciplines, in particular engineering, before being adopted by business pundits. The fact that a company uses math in the twenty-first century doesn't mean it's cutting-edge. (Or at least, let's hope not.)

It would be interesting to retrace the history of some of the operations research applications mentioned, such as fleet management in logistics - I am not convinced that the theory originated in business schools, as opposed to civil or industrial engineering departments. (At MIT, Cynthia Barnhart and Richard Larson are affiliated with the School of Engineering; at Princeton, Warren Powell has an appointment in the Operations Research and Financial Engineering department.) Engineering faculty members have to convince funding agencies of the relevance of their research too, and the fact that some academic research results are now applied in industry doesn't mean they were developed in business schools.

Interestingly, we owe one major topic of quantitative management research in academia to industry itself: in 1972, an employee of BOAC (which would in time become British Airways), Kenneth Littlewood, devised a simple rule to determine how many seats to set aside for business travelers, who typically buy their tickets after leisure travelers and are less sensitive to price, hence generating more profit for the company. This spawned hundreds if not thousands of academic research papers aimed at extending the rule (to multiple fare classes, network revenue management, censored demand data and the like). Maybe the best guarantee of producing research relevant to industry is to have a practitioner begin the work.

The Web and the News

Newspapers are slowly adjusting to the Internet, developing websites tailored to the medium rather than duplicating their print editions online (read this post and the references therein for my opinion on the latter approach). It should come as no surprise, then, that TV news programs are following the trend too. Or, should I say, one TV program, as we owe this to the one anchor who takes news more seriously than his hairdo, the one and only Charles Gibson at ABC. According to the New York Times in an article dated October 12, 2007, ABC has taken to producing an afternoon webcast, sometimes anchored by Gibson himself, aimed at the younger audience that gets its news online. Rather than putting the TV newscast online as its competitors do, ABC has created a web-only product, which allows for longer features. Of course in-depth analysis remains the turf of magazines, and video (both on TV and the web) will always have a bias for stories with a strong visual element, but it is refreshing to see one of the "big three" taking the web seriously.

On Novelty in Engineering Education

A couple of days ago, in an article dated September 30, 2007, the New York Times spent a considerable number of lines vaunting Olin College's radical approach to engineering education. Olin is an anomaly in the higher education landscape in the sense that it was started from scratch just a few years ago, charges no tuition, and - last but not least - promises nothing less than to revolutionize the way engineering is taught at the college level. According to the Times, Olin's founders assessed the situation as follows: "Engineering schools [they mean Research I institutions] had structured themselves, largely for the convenience of faculty, around a comfortable way of teaching but not the best methods of learning. There was too much note-taking in the classroom and not enough hands-on learning. Institutions stressed research over undergraduate teaching, because that's where the recognition and grant money come from." The founders then came up with "Olin's DNA: project-based learning. [...] Its method of instruction has more in common with a liberal arts college, where the focus is on learning how to learn, than with a standard engineering curriculum." The article goes on gushing about Olin, piling up positive quotes from pundits such as the senior vice president of engineering and research at Google.

So Olin's way of teaching is a good thing, right? "What industry needs"? As much as I enjoy outside-the-box thinking, I can't help but be reminded of the integrated math curriculum that became so popular in high school education in the late 1990s, which, in the words of a Morning Call staff writer ("High School math failing to make college grade", July 5, 2007), "requires vast amounts of reading and writing to identify the math problems, solve them and explain the solutions." This sounds like project-based learning for high school students to me, the idea being that in real life math problems don't walk around with a nice little label saying "here is what you need to know to solve me, please help." And educators must have strongly believed in the potential of integrated math to adopt the curriculum in their schools - no teacher gets up in the morning thinking "hmm, how can I handicap my high school students today and deprive them of the skills they need to succeed in college?" But once a few cohorts had graduated and faced the world of post-baccalaureate education, integrated math lost its appeal - it became apparent that "students [were] heading to college less prepared for math than they were a decade or two ago" because the integrated math programs didn't "emphasize basic skills," and parents complained to school officials. (The Times article mentions that "few of the [Olin College] Class of 2006 are going on to graduate study in engineering or jobs in the field," which is an issue, since the whole curriculum is designed to draw more students into engineering itself, not just into an engineering bachelor's degree.)

So is Olin College's headline-grabbing approach the future of engineering education, or a mistake? Does it matter whether this kind of experiment is run at the high school or college level - could it be wrong for high school students but appropriate for college kids? Or can there be too much of a good thing and should project-based learning be limited to senior-year capstone projects in college?