Friday, February 17, 2006

Ranking journals

The link above to The Scientific Activist is to a discussion about journal rankings. The idea behind ranking journals is to assess the impact of a researcher's work. This is done by association, by assessing journals' impact. Presumably, publication in a highly ranked journal (one that has great impact) means that the researcher's work has greater impact, because the journal editors are selecting for that. Right now, the most common metric is the ISI impact factor, which is the average number of times that an article published in a journal is cited by other articles.
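To make the definition concrete, here is a toy calculation of an impact factor in the usual two-year form. All the numbers are invented for illustration:

```python
# Toy impact-factor calculation (all numbers invented).
# IF(2006) = citations received in 2006 by articles published in 2004-2005,
#            divided by the number of articles published in 2004-2005.
articles_2004_2005 = 120   # citable items the journal published in the window
citations_in_2006 = 300    # 2006 citations to those items
impact_factor = citations_in_2006 / articles_2004_2005
print(impact_factor)       # 2.5
```

So a journal whose articles are cited, on average, two or three times in the measurement window ends up with an impact factor in the low single digits.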

So, now people are looking at other systems. One possibility is to use something like Google's PageRank, in which a journal's rank would be determined not only by the number of citations it receives but also by how often the citing articles are themselves cited. So, a journal is rated more highly if its articles are cited by other highly cited articles. There seems to be a certain circularity to this that makes me skeptical. Another proposal is to use the product of the ISI impact factor and the PageRank, the justification being that this more closely reproduces researchers' general qualitative sense of which journals are "best".
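For the curious, here is a minimal sketch of how a PageRank-style journal ranking works. The journal names and citation counts are made up, and this is only the basic iteration, not any published journal-ranking method: each journal repeatedly passes a share of its own rank to the journals it cites, so citations from highly ranked journals count for more.

```python
# Toy PageRank-style journal ranking (journals and counts invented).
# citations[a][b] = number of times journal a's articles cite journal b's.
citations = {
    "A": {"B": 30, "C": 10},
    "B": {"A": 20, "C": 40},
    "C": {"A": 5,  "B": 5},
}
journals = list(citations)

damping = 0.85                                  # standard PageRank damping
rank = {j: 1.0 / len(journals) for j in journals}

for _ in range(50):                             # iterate toward a fixed point
    new_rank = {}
    for j in journals:
        # Each citing journal passes on a share of its current rank,
        # proportional to the fraction of its citations that go to j.
        incoming = sum(
            rank[src] * out.get(j, 0) / sum(out.values())
            for src, out in citations.items()
        )
        new_rank[j] = (1 - damping) / len(journals) + damping * incoming
    rank = new_rank

for j, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(j, round(r, 3))
```

The circularity I mentioned is visible in the code: a journal's rank depends on the ranks of its citers, which is exactly why the computation has to be run iteratively until it settles.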

I want to know the following: Why are people pulling these metrics out of their asses in the first place? Clearly, none of these metrics has a deep theoretical basis behind it. Instead, they are relatively arbitrary choices that seem to produce numerical results that match certain a priori biases. It may just be me, but this doesn't seem to be a good way to analyze data for decision-making purposes. Shouldn't we figure out what we want to do with this data (what decision we want to make) and then develop an approach that has some fundamental justification? This approach to journal ranking seems to be the kind of method used by someone who is looking for numbers to support their biases, rather than numbers that actually convey useful information. At least the ISI impact factor has the following good points:

  1. It is easy to understand exactly what it is saying (the average number of times an article in the journal is cited).
  2. It is crude enough that people take it with a grain of salt -- so it is less likely to be misused.
The last thing we need is some opaque, poorly justified, complex formula that is open to blanket application as an "objective measure" of research quality. And, yes, I'm biased, as I tend to publish in journals that have relatively low impact because their target audiences are specialized and small.


2 comments:

  1. I did some work to determine what are the good places to publish (for my own purposes):

    http://www.daniel-lemire.com/blog/conferences/

    As you can see, I use *several* metrics and not just one.

    I think that using biased metrics to determine where to publish is probably better than pure random choice, especially given the growing number of crappy conferences and journals. Indeed, at some point, some conferences become so bad that by all metrics, they are bad. It is ok to participate in those, but if that's all you have over a 10-year period, maybe you should get in trouble!

    Also, I'm a scientist, I like to measure things!

    Besides serving as a decision support tool for my own decisions, I also use these statistics to help people judge me. I include impact factors in my CVs and funding applications, as well as publication counts and other statistics. Even though I don't necessarily shine, I can use these numbers to prove that I do *exist* as a researcher.

    Here's my reasoning:

    - guy number 1 has a good pub. list, funding, lots of projects, but it is kind of hard to tell how much actual impact he is having.

    - guy number 2 has a possibly weaker c.v. (less content), but he offers verifiable measures of the impact he has, and you can see it is nonzero.

    Which one do you rank highest? You are probably debating the issue in your head. And if you are, this proves my theory that offering metrics to reviewers is a good strategy.

    (Disclaimer: I think that in some instances, possibly most instances, for political reasons, you are better off hoping to get the "benefit of the doubt", but as a serious scientist, I prefer full disclosure.)

    ReplyDelete
  2. "I think that using biased metrics to determine where to publish is probably better than pure random choice, especially given the growing number of crappy conferences and journals. Indeed, at some point, some conferences become so bad that by all metrics, they are bad. It is ok to participate in those, but if that's all you have over a 10-year period, maybe you should get in trouble!"

    I think it's perfectly reasonable to use whatever metrics one wants to decide where to publish. What I object to is coming up with a supposed "good" metric to evaluate other people's publishing venues. I would suggest that other considerations may be more important than those metrics -- for example, the nature of the journal's or conference's audience and, for conferences, the interactions one has there. And, yes, you really have to know the journal or conference to make these decisions, but that's feasible for one's own publications (unless your publication volume is so high that you can't be bothered to spend much time thinking about where to publish).

    The problem with these metrics is that there is really no way to interpret them without knowing something about the conference or journal. For example, from your web site, the highest-impact journal has an impact factor around 2. This is not too different from many of the journals in my field. Compare that to, for example, Nature Neuroscience (impact factor: 16.98) or Neuron (impact factor: 14.439). Likely, these large impact factors are a result of the size of the journals' audiences, not just the rigor of their reviewing. But someone coming from a field with journals that tend to have impact factors like those would think a 2 is small.

    "Beside a decision support tool for my own decisions, I also use these statistics to help people judging me. I include impact factors in my c.v.s and funding applications, as well as publication counts and other statistics. Even though I don't necessarily shine, I can use these numbers to prove that I do *exist* as a researcher."

    Here I think there are better metrics. One thing that you can do is to find out how many times each of your papers has been cited; this information can be gleaned from Web of Science. In essence, you can compute your own impact factor (or that for a faculty candidate). But is that personal impact factor high or low? You'd need to know the personal impact factors for a fair number of colleagues before you would be able to evaluate these numbers...
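    In the spirit of the above, the "personal impact factor" is just average citations per paper. A sketch with invented citation counts (in practice you'd pull these from Web of Science):

    ```python
    # Sketch of a "personal impact factor": average citations per paper.
    # Citation counts below are invented; real ones would come from
    # Web of Science or a similar database.
    my_citations = [12, 3, 0, 7, 25, 1]   # citations to each of my papers
    personal_if = sum(my_citations) / len(my_citations)
    print(round(personal_if, 2))          # 8.0
    ```

    And, as noted above, an 8.0 means nothing on its own -- you'd need the same number for a pool of comparable colleagues before it becomes interpretable.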

    ReplyDelete