
Indices again

Via Nanopolitan: Current Science carried a letter by Diptiman Sen and S. Ramasesha, two physicists at the IISc, pointing out that the h-index is not a good 'scientometric' index. Unfortunately, the arguments they use to establish this are not all equally good. In particular, they suggest that the Nobel Laureate V. Ramakrishnan has a low h-index. This was picked up on by two other scientists, who point out that the actual figures are not particularly low. And between the poor arguments and the rejoinders, many other important arguments against using the h-index got lost.

Some of these other arguments were given here and here, and many more can be found all over cyberspace and the blogosphere. My arguments against scientometric indices in general, and the h-index (and the journal impact factor) in particular, are similar to those given in these links, but also try to take into account the conditions peculiar to doing science in India. They are, in no particular order, the following.

1. Most (or all?) such indices are based only on the number of citations (total or per year), so no matter how a metric is designed, it is only the number of actual citations that enters it; any of these metrics is ultimately calculated from a single parameter (see the sketch after this list). Or perhaps two, since the rate of growth of citations may be included. Some metrics include the number of papers (total or per year) as well. That is another parameter, but the number of citations is not independent of the number of papers, and the number of papers is usually not a good measure of their quality, so including it does not improve the index either.

2. Even if different ways of using the citation count (and paper count) lead to qualitatively different indices, the standard indices are still numbers associated with only one individual (or one institution), the one being evaluated. This cannot make sense, since high or low values may be systemic. For example, mathematics produces fewer papers than medicine, fewer even than specialized branches of medicine such as oncology, and consequently fewer citations as well. So any index that is to be applied to both mathematics and medicine will have to take into account its behavior in each specific field, and thus require some sort of comparison within the field. This is never done as far as I can gather, either in the construction or in the use of these indices.

3. Even if we can construct a comparative index, for example by taking ratios or percentiles within a field, it is not likely to hold up against historical data. That is, given some index (h, g, or whatever) for some string theorist, we can come up with a 'normalized' one by taking its ratio with the same index for Witten, but the same normalization is not likely to make any sense for Born or Einstein, say. Of course they were not string theorists, and neither is 'normal' a word one should use for any index related to Witten. Still, the explosion of citations is a relatively recent phenomenon, related to the explosion of papers, so the variation of any index with time, for individuals as well as within fields, needs to be taken into account.

4. Many scientists work across disciplines, many more work across subfields. It makes no sense for such people’s work to be evaluated by a single index, as the index may have different ranges in the different fields or subfields. For example, someone working mostly in mathematics and occasionally in string theory may end up with an index which is low compared to string theorists and high compared to mathematicians. How should something like that be used?

5. Indices are used for different purposes at different career stages: hiring, promotion, awards. A cumulative citation count necessarily favors a longer career, and the question being asked of a fresh recruit is not the question being asked of a senior candidate. So it does not make sense to use the same index for people at different stages of their careers.

6. There may be 'social' factors in the rise of the citation count of specific papers or individuals. Some are obvious and 'nearly academic', like the popularity or currency of a field or a problem: the bandwagon effect. Then there is the 'club' effect (I cite you, you cite me, friends cite friends), which can work wonders in small subfields. There may also be less academic, more career-oriented reasons: it is very likely that papers cite probable referees more often, so that a paper does not come back for revisions simply because 'relevant literature was not cited'. I would not be surprised if this mechanism gets reinforced for people with many collaborators; a paper might be rejected if it did not cite the papers of the referee's collaborators.
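
To make points 1 and 3 concrete, here is a minimal sketch in Python. The citation counts and the names in it are invented; all it is meant to show is that the h-index, and any 'normalized' version of it obtained by taking a ratio, is computed from nothing but lists of per-paper citation counts.

# A minimal sketch with invented numbers: the h-index is a function of
# per-paper citation counts alone, and a 'normalized' index of the kind
# mentioned in point 3 is just a ratio of two such numbers.

def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation records (not real data).
some_string_theorist = [120, 45, 30, 12, 8, 5, 2]
reference_record = [900, 700, 650, 400, 250, 180, 150, 90, 60, 40]

print(h_index(some_string_theorist))   # 5
print(h_index(reference_record))       # 10
print(h_index(some_string_theorist) / h_index(reference_record))   # 0.5, a crude 'normalized' index

Nothing about the field, the age of the papers, or the nature of the work enters anywhere in this calculation, which is essentially the complaint in points 1 and 2 above.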

There are also several issues special to Indian science, which have to do with how appointments and promotions are usually made in India. As G. Desiraju noted in his letter to Current Science,

it was possible, in the days before we had scientometric indicators, for committees of wise men to simply declare an incompetent as an outstanding scientist.

Unfortunately, it is still possible. But that is another discussion.
