
October 17, 2011

Comments

Impact factors are influenced by things like how many journals there are in an area (more journals -> more papers -> more citations) and how extensive bibliographies are (the average lit review in my org behavior colleagues' opuses is longer than some of my papers).

I seem to recall an article (in ORMS Today?) a year or so ago mentioning a professor inordinately proud of his citation index -- many of the citations being corrections. So might a journal with lax review standards "earn" a high impact factor by publishing easily refutable results?

Great points, Paul! It's a pity that some people come to cherish their citation index when many of those citations come from others correcting their papers.

I tend to view students' placement and job records as a more important metric of success, but it seems that the urge to game any quantitative measure is deeply ingrained in a significant part of the population. Students want As because that is a sign they are good, hence grade inflation; journals want a high impact factor because that is a sign they are good, hence... (Interestingly, some universities do put journals in tiers and refer to them as "A-journals", "B-journals", etc. We never stop grading everything.)

One idea would be to classify journals by impact percentile within their field rather than by raw impact factor. That way, journals would have a clear incentive to fight "gaming" by others at their expense, since a rival's inflated count pushes everyone else's percentile down.
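The percentile idea is simple enough to sketch in a few lines. The function below is a hypothetical illustration (the journal names and impact factors are made up, not real data): each journal's score becomes the percentage of journals in its field that it matches or outranks.

```python
# Sketch of the percentile idea: report each journal's standing relative
# to its field instead of its raw impact factor. All data here is invented.
def impact_percentiles(impact_factors):
    """Map each journal to the percentage of journals it ties or outranks."""
    n = len(impact_factors)
    return {
        name: 100.0 * sum(other <= score for other in impact_factors.values()) / n
        for name, score in impact_factors.items()
    }

journals = {"Journal A": 3.2, "Journal B": 1.1, "Journal C": 2.4, "Journal D": 0.7}
for name, pct in sorted(impact_percentiles(journals).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.0f}th percentile")
```

Under this scheme, if one journal inflates its citations, every other journal in the field sees its own percentile drop, which gives them a stake in policing the manipulation.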

