The link above to The Scientific Activist is to a discussion about journal rankings. The idea behind ranking journals is to assess the impact of a researcher's work. This is done by association, by assessing journals' impact. Presumably, publication in a highly ranked journal (one that has a great impact) means that the researcher's work has greater impact, because the journal editors are selecting for that. Right now, the most common metric used is the ISI impact factor, which is the average number of times that an article published in a journal is cited by other articles (in practice, computed over a two-year window of recent articles).
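The calculation behind the impact factor really is that simple. Here's a minimal sketch of the simplified version described above (the real ISI metric restricts citations and articles to a two-year window, but the arithmetic is the same; the journal and its citation counts below are made up for illustration):

```python
def impact_factor(citation_counts):
    """Simplified impact factor: mean citations per article.

    citation_counts: one citation count per article published in the
    journal. (The actual ISI impact factor counts only citations in a
    given year to articles from the previous two years, but it is
    still just this average.)
    """
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

# Hypothetical journal with four articles cited 10, 3, 0, and 7 times:
print(impact_factor([10, 3, 0, 7]))  # → 5.0
```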
So, now people are looking at other systems. One possibility is to use something like Google's PageRank system, in which a journal's rank would be determined not only by the number of citations it receives but also by the number of times that the citing articles are themselves cited. So, a journal is more highly rated if its articles are cited by other highly cited articles. There seems to be a certain circularity to this that makes me skeptical. Another proposal is to use the product of the ISI impact factor and the PageRank, the justification being that it more closely reproduces the general qualitative ideas that researchers have regarding which journals are "best".
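For what it's worth, the "circular" definition does have a well-defined answer: it's the fixed point of an iteration over the citation graph. Here's a minimal power-iteration sketch on a toy three-journal citation graph (the journals "A", "B", and "C" are hypothetical):

```python
def page_rank(links, damping=0.85, iterations=100):
    """PageRank over a citation graph via power iteration.

    links: dict mapping each journal to the list of journals it cites.
    Returns rank scores summing to 1. A journal scores highly when it
    is cited by journals that are themselves highly ranked; repeated
    iteration converges to the fixed point of that circular definition.
    """
    journals = list(links)
    n = len(journals)
    ranks = {j: 1.0 / n for j in journals}
    for _ in range(iterations):
        new_ranks = {}
        for j in journals:
            # Each citing journal k passes its rank along, split
            # evenly among everything k cites.
            incoming = sum(ranks[k] / len(links[k])
                           for k in journals if j in links[k])
            new_ranks[j] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

# Hypothetical citation graph: B and C both cite A; A cites B; B cites C.
citations = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
ranks = page_rank(citations)
```

The proposed composite metric would then just be `impact_factor * ranks[journal]` for each journal, which is exactly the kind of unmotivated product the next paragraph complains about.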
I want to know the following: Why are people pulling these metrics out of their asses in the first place? Clearly, none of these metrics has a deep theoretical basis behind it. Instead, they are relatively arbitrary choices that seem to produce numerical results that match certain a priori biases. It may just be me, but this doesn't seem to be a good way to analyze data for decision-making purposes. Shouldn't we figure out what we want to do with this data (what decision we want to make) and then develop an approach that has some fundamental justification? This approach to journal ranking seems to be the kind of method used by someone who is looking for numbers to support their biases, rather than numbers that actually convey useful information. At least the ISI impact factor has the following good points:
- It is easy to understand exactly what it is saying (the average number of times an article in the journal is cited).
- It is obviously imperfect, and so it is taken with a grain of salt -- which makes it less likely to be misused.