Could Ranking Journals Rank You?

Should the journal your article appears in be a factor in assessing the quality of the article itself? A number of European institutions are apparently pushing in that direction, and that mindset may be coming to a campus near you.

This week’s Chronicle of Higher Education discusses recent European efforts to rank humanities journals as a means of measuring an article’s value in tenure and funding decisions. The new program, the European Reference Index for the Humanities (ERIH), is apparently a sincere effort to promote European scholarship, but as a practical matter it seems quite dubious for journals in the humanities.

Science and social science journals have been subject to similar measures for some time now, but those fields can rely on comprehensive citation indexes to provide some hard data. The humanities lack comparable indexes, so such efforts in our fields tend to be highly distorting. As a case in point, a 1993 National Science Foundation survey of doctorate programs tried to use citation indexes of history journals as a measure (http://www.historians.org/perspectives/issues/2002/0209/0209aha1.cfm). But since the larger database consists disproportionately of science journals, the measurements unsurprisingly produced results heavily skewed toward scholars in the history of science and medicine.

The European system tries to mitigate these deficiencies of quantification by supplementing the citation data with advice from panels “of four to six experts.” This seems a poor way to measure the significance of particular articles to the discipline (even if, as the Chronicle reports, the American Historical Review comes out at the top). Our database of history journals currently contains 379 peer-reviewed English-language journals in the discipline, so it is difficult to imagine skewed citation indexes and a handful of specialists delivering a fair or equitable ranking system, one that does justice to the parts as well as the whole.

Sadly, the problem is not likely to remain an ocean away. Over the past month the deans’ offices at two universities have contacted me, asking where they might look for domestic rankings of history journals. Given the urge to quantify and assess all aspects of higher education by precise statistical measures, it is hardly surprising that similar tests and measures are making their way into the faculty ranks. But scholars in the humanities have good reason to be wary of this impulse, as there is every reason to think these rankings would be a poor measure of the full range and diversity of our journal scholarship.

History faculty and department chairs should be on the lookout for such efforts on their own campuses, and reach out to their colleagues in other humanities fields to develop a common response. As administrators try to impose standards of assessment in all areas of their institutions, it may soon be incumbent on departments and the profession to offer new methods for articulating the value of particular journal articles to those outside the field.

Update: The European Science Foundation has posted the full list of ERIH rankings for history journals, along with a description of its methodology for selection and ranking.


  1. Kelly Woestman

    Some additional context: this is part of the larger web phenomenon of popularity ranking tied to marketing, and it is also related to crowdsourcing [http://en.wikipedia.org/wiki/Crowdsourcing]. It can also be seen as providing data on the actual viewing and usage of publications, and it is not that dissimilar from scientists evaluating their studies by how many others cite them. We don’t keep exact data on historiographical references, even though the most authoritative sources emerge within various schools of thought.

    It’s definitely something we should explore and help shape, or it will be decided for us with measures we play no role in evaluating or influencing. Thanks for bringing our attention to this important development, which will ultimately affect all historical scholarship, especially what the public and our students see as most influential, as our entire world becomes increasingly focused on evaluating any type of source by how many other people regard it as reputable.

  2. Sherman Dorn

    As someone who works in a professional school, I’ve had to deal with this. In addition to having supportive associate deans when I’ve come up for promotion, I’ve used Harzing’s Publish or Perish software, which pulls from Google Scholar. I think someone has compared so-called “bibliometrics” drawn from Google Scholar, Web of Science, and SCOPUS; they overlap somewhat but still differ in their measures.

    I suspect that a standardized approach to citation counting, in combination with the changing economic structure of publishing, will essentially kill the monograph. If you can get more citations from journal articles (especially if your friends create new journals!), why go through the pain of writing a book? History will go the way of many other disciplines, where only senior scholars have the luxury of devoting time to a book.

  3. Larry Cebula

    This sort of thing might make a world of sense in the sciences, but would be pernicious indeed if it were adopted in history.

    The problem for us is that our discipline moves slowly. Sure, there are a few breakout books each year that get everyone excited, but for most of us the establishment of a scholarly reputation is the work of years, even decades. The clock by which our impact is measured runs far slower than the clock of the sciences, or that of tenure committees.

    I would like to see the AHA and OAH take a stand against any attempt to rank humanities journals.
