
October 2011

Lehigh's 2012 Commencement Speaker

Last week Lehigh University announced that His Excellency Ali bin Ibrahim Al-Naimi, Saudi Arabia's Minister of Petroleum and Mineral Resources, will deliver the Commencement address on May 21, 2012. Not only was Al-Naimi named to TIME magazine's list of the 100 most influential people in 2008 ("Al-Naimi is well regarded in the oil industry, a very articulate and thoughtful minister who is as concerned about Saudi Arabia's energy future as he is about the current situation"), but he also serves as Chairman of the Board at King Abdullah University of Science and Technology (KAUST), which has focused on graduate research and education - all in English - since its founding in 2009 and is the first mixed-gender university in Saudi Arabia.

Al-Naimi is a 1962 graduate of Lehigh University (and a 1963 graduate of Stanford University, where he received a Master's in geology), and will also attend the 50th Reunion of his class during Reunion Weekend. A champion of education, he "grew up in the desert, living there with his Bedouin family until the age of 8. At 12, he joined a training program offered by the Arabian American Oil Co. He rose through the ranks, serving as a foreman, assistant superintendent, superintendent, and manager, before moving into the exploration department in 1964 to work as a geologist and hydrologist." (Source: Lehigh press release).

It is wonderful for Lehigh to attract someone of the caliber of Al-Naimi - and an alumnus no less. He will also receive an honorary Doctor of Science degree from Lehigh. He is already the recipient of several honorary degrees, including degrees from Seoul National University (2008), Peking University (2009) and the AGH University of Science and Technology (Akademia Górniczo-Hutnicza) in Poland (2011). A KAUST news release about the honorary degree he received in Poland is available here. "In his acceptance speech, Minister Al-Naimi expressed great pride in receiving the degree. He explained that future relations between Saudi Arabia and Poland are linked by three main elements: education, energy and the environment." Also, "AGH-UST Dean of Drilling, Oil and Gas praised Minister Al-Naimi's keen interest in the academic exchange between KAUST and AGH-UST."

Lehigh already has links with KAUST: President Gast is a member of the KAUST President's International Advisory Council, and a current KAUST faculty member in Electrical and Computer Engineering used to be on the Lehigh faculty. Al-Naimi's visit to Lehigh in May 2012 should be a wonderful opportunity to hear more about two important topics of the twenty-first century: energy and education. I am looking forward to his speech.


On Impact Factors

The International Federation of Operational Research Societies, or IFORS, recently reposted an article by Richard Monastersky that first appeared in The Chronicle of Higher Education. When I looked up the original article, I realized it was now six years old, but many of the issues it raises remain very timely. The article describes the use and overuse of impact factors, and the techniques that some less scrupulous people employ to game the system.

First, my non-academic readers might ask: what are impact factors? They are the brainchild of Eugene Garfield, a Philadelphia-based researcher who in the late 1950s came up with "a grading system for journals, that could help him pick out the most important publications from the ranks of lesser titles." The system relies on "tallying up the number of citations an average article in each journal received." Fair enough. But the tool now plays an important role in tenure and hiring decisions: universities want a simple metric to evaluate the quality of research, and papers accepted by journals with high impact factors seem likely to be of high quality. This, predictably, has led to some abuse.

The calculation of impact factors, by the company ISI (owned by Thomson Corporation), goes as follows: "To calculate the most recent factor for the journal Nature, for example, the company tallied the number of citations in 2004 to all of the articles that Nature published in 2002 and 2003. Those citations were divided by the number of articles the journal published in those two years, yielding an impact factor of 32.182 — the ninth-highest of all journals."
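
In code, the calculation described above boils down to a simple ratio. Here is a minimal sketch (the function name and the numbers in the example are mine, purely for illustration, not ISI's actual figures):

```python
def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Impact factor for a given year, as described in the article:
    citations received that year to the journal's articles from the two
    previous years, divided by the number of articles published in those
    two years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 1,500 citations in 2004 to the 250 articles it
# published in 2002 and 2003 would give an impact factor of 6.0.
print(round(impact_factor(1500, 250), 3))  # 6.0
```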

But trying to draw conclusions about the quality of individual papers from a journal-wide average is bound to generate misconceptions and errors. "For example, a quarter of the articles in Nature last year drew 89 percent of the citations to that journal, so a vast majority of the articles received far fewer than the average of 32 citations reflected in the most recent impact factor." If a paper is accepted by a top-ranked journal, does its impact on the scientific field reflect the importance of the journal, or is it instead one of the 50% of accepted papers whose quality falls below the journal's median?
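
To see how a few heavily cited papers can distort the average, consider a made-up journal with ten articles (the citation counts below are hypothetical, not Nature's):

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers in one journal:
# two highly cited papers dominate, and most papers are barely cited.
citations = [100, 95, 3, 2, 2, 1, 1, 1, 0, 0]

print(mean(citations))    # 20.5 -> the impact-factor-style average
print(median(citations))  # 1.5  -> what a typical paper in the journal gets
```

The average suggests that a typical article draws about 20 citations, when in fact most draw one or two - the same phenomenon the Chronicle article describes for Nature.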

The following caught my attention: "In China, scientists get cash bonuses for publishing in high-impact journals, and graduate students in physics at some universities must place at least two articles in journals with a combined impact factor of 4 to get their Ph.D.’s." Is putting so much pressure on scientists really a good idea? Maybe such high stakes explain some egregious cases of plagiarism over the past few years, such as the one Prof. Alice Agogino of UC Berkeley recently fell victim to. You can find her paper (from 2004) here, and the other paper (from 2007) there.

But what should impact factors really be blamed for? I do find the claims that they are "threatening to skew the course of scientific research" and that "investigators are now more likely to chase after fashionable topics" because they want to get published in journals with high impact factors frankly outlandish. Investigators chase after fashionable topics because they need grants to support their research groups and to get promoted. Journals in my field (operations research) publish contributions on a wide range of topics, from theory to logistics to transportation, and no one needs to know impact factors to be aware that the hot topics right now are health care and energy. But the fact that your research focuses on a hot topic doesn't mean you'll make the sort of meaningful contribution that warrants publication in a top journal.

Another dynamic left unanalyzed in the article is the one between journals and senior researchers. The article spends a lot of time and space discussing the idea that editors might reject a paper they feel won't be cited enough and would thus be detrimental to the journal's impact factor. But if the paper is co-authored by a top researcher and falls within that researcher's core area of expertise, it is hard to imagine it not being cited many, many times.

So a more relevant question would be: among the papers published by those top journals, how many are authored by junior researchers without a senior co-author? How many by junior researchers with other junior researchers? How many by junior researchers with students? If many highly ranked journals publish mostly the work of senior researchers, then it is not reasonable to ask junior researchers - who are supposed to demonstrate their own research potential without the guidance of senior colleagues - to publish in those journals as well in order to benefit from the golden aura of the high impact factor. (Some junior researchers certainly will be able to, but that shouldn't be a widespread standard.) This reminds me of those statistics about average salaries per profession, where journalists forget to mention that the averages are computed over employees ranging in age from 25 to 64, giving the impression that any college graduate can hope to make $80k a year within a few short years of leaving college. Maybe high impact factors are simply due to the predominance of articles by well-established scientists with decades of experience in the profession.

This being said, a lot of the article rang true. For instance: "Journal editors have learned how to manipulate the system, sometimes through legitimate editorial choices and other times through deceptive practices that artificially inflate their own rankings. Several ecology journals, for example, routinely ask authors to add citations to previous articles from that same journal, a policy that pushes up its impact factor."

I had heard of the practice before, although I have never experienced it myself. I have, though, been in a situation where an anonymous reviewer requested that a citation be added to my (and my student's) paper, and when I looked up the reference, it did not seem to have anything in common with the topic of our paper. On the other hand, if the reviewer was one of the authors of that other paper, as seems likely, then having it cited increases his citation count, a metric universities take into consideration at promotion time. I could have complained, but that might have left my paper in limbo for several more weeks or months. I decided I didn't have the time to wage that fight and simply made the citation as inconspicuous as possible.

Self-citations, where previous papers published in a journal are cited in that same journal, are certainly a potential issue. The article mentions the case of a researcher who "sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: “I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal’s themes.”"

The editor who sent the email later said he "never intended the request to be read as a requirement," which is disingenuous: the paper had not been accepted yet, and researchers cannot submit their paper to another journal until it has been withdrawn or rejected, which leaves them at the editor's mercy.

Recently ISI started reporting an alternative metric designed to address some of the impact factor's flaws, the Eigenfactor score, in which citations from high-impact journals are weighted more heavily. To evaluate a scientist's value to his or her field, people should use the H-index rather than draw inferences from journal-level metrics, although the H-index suffers from drawbacks of its own. And, as it becomes more widely used, it is sure to be more widely misused too.
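
For readers unfamiliar with it, the H-index is the largest number h such that the researcher has h papers with at least h citations each. Here is a minimal sketch of the computation, with a hypothetical publication record (my own illustration, not any official implementation):

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: six papers with these citation counts.
print(h_index([10, 8, 5, 4, 3, 0]))  # 4 -> four papers have at least 4 citations each
```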


Cross-Training in Business and Executive Development

I enjoyed reading “Making Yourself Indispensable” by John Zenger, Joseph Folkman and Scott Edinger in the October 2011 issue of Harvard Business Review. (The authors are the CEO, president and executive vice president of Zenger Folkman, a leadership development consultancy; they are also the authors of The Inspiring Leader, published by McGraw-Hill in 2009.) They argue that, to make it to the top, executives must develop skills that complement what they already do best, in the spirit of the cross-training common among world-class athletes.

For instance, “an experienced marathoner won’t get significantly faster merely by running ever longer distances. To reach the next level, he needs to supplement that regimen by building up complementary skills through weight training, swimming, bicycling, interval training, yoga and the like.” The authors refer to this as nonlinear development and apply this idea to leadership competencies. I appreciated how they were able to connect their arguments about nonlinearities to hard numbers gleaned from their database of 360-degree surveys of developing leaders, such as:

  • Only 14% of leaders who scored in the 75th percentile in focusing on results but weren’t as strong in building relationships reached the 90th percentile in overall leadership effectiveness (called the extraordinary leadership level).
  • Only 12% of leaders who scored in the 75th percentile in building relationships but weren’t as strong in focusing on results reached the extraordinary leadership level.
  • 72% of individuals performing well in both categories reached the extraordinary leadership level.

The authors also provide a helpful exhibit, called “What skills will magnify my strengths?” (pp. 88-89 of the magazine), to help executives decide how best to cross-train in the business world using their inventory of 16 key strengths. The article is full of non-trivial insights and well-thought-out arguments. For instance, the authors argue that what makes leaders indispensable is “being uniquely outstanding at a few things” (as opposed to good at many).

  • Executives in their databases who had no skill in the 90th percentile (i.e., no profound strength) scored on average in the 34th percentile in leadership effectiveness.
  • With one outstanding skill, their overall leadership effectiveness score rose to the 64th percentile. (In other words, executives jumped from the bottom third to the top third by simply having one top strength.)
  • With two outstanding skills, their leadership score increased to the 72nd percentile, and to the 81st percentile with three. (Check the article for the numbers corresponding to four or five outstanding skills.)

The authors give some guidelines to select a strength to focus on and identify complementary behaviors.

I found the article to be truly innovative, and I wouldn’t be surprised if it ends up being recognized as one of the best HBR articles of the year. I highly recommend it - a must-read.