Here are links to a few articles on scientific publishing, especially related to problems or misconduct, that appeared in The Economist over the past year or so. This is obviously not an exhaustive discussion of the topic. I also wrote about an ethics scandal at Duke in May 2008; the reader might find a January 2007 article in Nature about recent cases of academic misconduct particularly interesting. One of those cases is described at length in the 2009 book "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World."
Let's start with an August 2010 article about Harvard researcher Marc Hauser ("Monkey business?"), "who made his name probing the evolutionary origins of morality, [and] is suspected of having committed the closest thing academia has to a deadly sin: cheating." The issues are described in a New York Times article as pertaining to "data acquisition, data analysis, data retention, and the reporting of research methodologies and results."
While both The Economist and the NYT took pains to uphold Hauser's presumption of innocence from deliberate wrongdoing at the time, The Chronicle of Higher Education painted a darker portrait: "An internal document... tells the story of how research assistants became convinced that the professor was reporting bogus data and how he aggressively pushed back against those who questioned his findings or asked for verification." Hauser resigned from Harvard in July 2011, about a year after being found solely responsible for eight counts of scientific misconduct.
“An array of errors” (September 10, 2011) focuses on work by Duke University researchers who reported in 2006 that they were able to “predict which chemotherapy would be most effective for an individual patient suffering from lung, breast or ovarian cancer” using a technique based on gene expression, which seemed to hold tremendous potential for the field of personalized medicine.
Unfortunately, a team of researchers at the MD Anderson Cancer Center in Houston quickly ran into trouble as they attempted to replicate the results. They also realized that the paper was riddled with formatting errors, in the tables for instance.
The Duke University team nonetheless decided to launch clinical trials based on their work. A researcher at the National Cancer Institute expressed concerns about the work, and Duke launched an internal investigation, but the review committee, having “access only to material supplied by the researchers themselves,” did not find any problem, and the clinical trials resumed.
But “in July 2010, matters unraveled when the Cancer Letter reported that Dr [Anil] Potti [of Duke University] had lied in numerous documents and grant applications,” for instance by falsely claiming to have been a Rhodes Scholar in Australia. This led to Potti’s resignation from Duke, the end of the clinical trials, and the retraction of several prominent papers. A committee was formed at Duke to investigate what went wrong.
The most interesting part of the article, I found, was the description of the academic journals’ reaction: “journals that had readily published Dr Potti’s papers were reluctant to publish [a scientist’s] letters critical of the work.” The article also touches upon Duke’s slowness in dealing with the researchers’ potential conflicts of interest, and the fact that “peer review… relies on the goodwill of workers in the field, who have jobs of their own and frequently cannot spend the time needed to check other people’s papers in a suitably thorough manner.”
An editorial in Nature Cell Biology was written with Potti's case in mind, but extends to scientific misconduct in general. Sadly, it takes the easy way out by stating that "Although journals can do much to promote integrity and transparency in the practice and presentation of research... it is not our mission to police the research community." This lack of responsiveness unfortunately seems in line with the paragraph I quoted above.
“Scientists behaving badly” (October 9, 2010) is about a Chinese scientist and a blogger who publishes allegations of scientific misconduct on his website, some of which “undoubtedly… shine a light on the often-murky business of Chinese science” while others “are anonymous and lack specifics.” While the article focuses on the specific and long-running feud between a urologist and a self-proclaimed “science cop”, with one casting doubt on the validity of the other's research results, it can also be read as a call for an independent expert committee to investigate allegations of misconduct in China.
Here is the quote that most caught my attention: “Measured by the number of published papers, China is the second most productive scientific nation on Earth. Incidents like this, though, call into question how trustworthy that productivity is.”
This provides a great transition into the third article, “Climbing Mount Publishable” (November 13, 2010), subtitled: “The old scientific powers are starting to lose their grip.” Quick summary: “In 1990 [North America, Europe and Japan] carried out more than 95% of the world’s research and development. By 2007 that figure was 76%.” Elsewhere in the article, we learn that “America’s share of world publications, at 28% in 2007, is slipping. In 2002 it was 31%.” The article also discusses metrics regarding gross domestic expenditure on R&D, share of national wealth spent on R&D, number of researchers and share of world patents.
Because countries view R&D output as a measure of their intellectual capital and prowess, the risk of plagiarism from researchers pressured for ever-more output is real. I wrote about this in a recent post on impact factors. In particular: "The following [excerpt of an article in The Chronicle of Higher Ed] caught my attention: 'In China, scientists get cash bonuses for publishing in high-impact journals, and graduate students in physics at some universities must place at least two articles in journals with a combined impact factor of 4 to get their Ph.D.'s.' Is putting so much pressure on scientists really a good idea? Maybe such high stakes explain some egregious cases of plagiarism over the past few years, such as the one Prof Alice Agogino of UC Berkeley was recently a victim of. You can find her paper (from 2004) here, and the other paper (from 2007) there."
I do not know why those two researchers decided to do this. But as developing countries pressure their scientists to excel on the international stage, as a way of demonstrating the value of their education systems and the brainpower of their citizens, it has to be tempting for a scientist struggling to come up with ideas to simply take someone else’s paper and publish it elsewhere under his own name, hoping not to be found out. This also discredits the hard-working researchers of the same country who actually put in the time and effort to come up with innovative ideas.
It is therefore very important for journals to take an aggressive stance toward plagiarism. Any article identified as plagiarizing another paper (in Agogino’s case the situation is very clear-cut: the paper is hers word for word from beginning to end, except that other people’s names appear on top) should have a highly visible notice added online stating that it has been found to plagiarize an earlier paper (leaving aside the question of which authors did the plagiarizing and which merely marveled at a co-author’s sudden perfect mastery of English). Maybe that would make would-be plagiarists think twice before they act.
Finally, “Of goats and headaches” (May 28, 2011) discusses the very lucrative field of academic publishing. The focus is on academic journals, whose business model involves getting university libraries to pay for very expensive subscriptions with little information as to whether these journals are useful or often consulted, since the librarians are not the primary consumers. As a reminder, academic journals get their articles for nothing (authors receive no royalties) and reviewers review for free. A few journals, such as the Journal of Economic Dynamics and Control, actually charge a submission fee of $100 to authors hoping to be published in their pages.
Personally, I feel that my institution’s subscriptions to the journals I consult online have been useful (my favorite journal is Interfaces), but I wonder how many journals are rarely browsed; besides, there are only a few journals I consult regularly, and many pre-prints are available for free download on researchers' websites. A mechanism to rein in subscription prices would certainly be useful. A first step would be for librarians to share subscription prices with researchers, so that researchers can help decide which subscriptions are simply not worth it and which ones should be kept. In addition, librarians should track how many times a given journal is accessed online to gauge its usefulness.
But what I dislike most about academic publishing is the business of textbooks. Professors, whose royalty rates, according to one of my former colleagues, “aren’t going to make anyone rich”, write the whole textbook by themselves. There is none of the editorial help that a book of fiction or nonfiction would receive from an editor with a keen eye. The marketing around textbooks is minimal; frankly, the emails and newsletters I receive about new textbooks vaguely related to my area of expertise feel a lot like spam. And for that, publishing houses charge upward of $100 (often closer to $200) and change editions every few years to limit the market for used textbooks.
In France, we paid the cost of the photocopies for a course packet put together by the professor. Showing up in the United States and learning how much money we were expected to spend on books definitely came as a culture shock. (Campus bookstores also add a sizable premium for the privilege of putting all the books in one convenient location.) It should come as no surprise that I do not require textbooks for my courses. But, in contrast with the cases mentioned earlier in this post, the practices of the academic publishing business are completely legal.