Posts Tagged ‘science journals’

Indices again

April 23, 2011 1 comment

Via Nanopolitan: Current Science carried a letter by Diptiman Sen and S. Ramasesha, two physicists at the IISc, pointing out that the h-index is not a good `scientometric' index. Unfortunately, the arguments they use to establish this are not all equally good. In particular, they suggest that the Nobel Laureate V. Ramakrishnan has a low h-index. This was picked up on by two other scientists, who point out that the actual figures are not particularly low. And between the poor arguments and the rejoinders, many other important arguments against using the h-index got lost.

Some of these other arguments were given here and here, and many more can be found all over cyberspace and the blogosphere. My arguments against scientometric indices in general, and the h-index (and journal impact factor) in particular, are similar to those given in these links, but also try to take into account the conditions special to doing science in India. They are, in no particular order, the following.

1. Most (or all?) such indices are based only on the number of citations (total or per year), so no matter how a metric is designed, it is ultimately calculated from just one parameter: the actual citation count. Or perhaps two, since the rate of growth of citations may be included. Some metrics include the number of papers (total or per year) as well. That is another parameter, but the number of citations is not independent of the number of papers, and the number of papers is usually not a good measure of their quality, so including it does not improve the quality of the index either.
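To see how little goes into such an index, consider the h-index itself: a researcher has index h if h of their papers have at least h citations each. A minimal sketch (the function name and the sample records are my own illustrations):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still has >= rank citations
        else:
            break
    return h

# Two very different publication records collapse to the same index:
print(h_index([50, 40, 3, 3, 2]))  # 3
print(h_index([4, 4, 3, 3, 3]))    # 3
```

Note that the two sample records differ enormously in total citations (98 versus 17), yet the index cannot tell them apart; only the citation counts enter, and most of their structure is discarded.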

2. Even if different ways of using the citation count (and paper count) lead to qualitatively different indices, the standard indices are still numbers associated with only one individual (or one institution), the one being evaluated. This cannot make sense, since high or low values may be systemic. For example, mathematics has fewer papers than medicine, or even than specialized branches of medicine like oncology, and consequently fewer citations as well. So any index meant to apply to both mathematics and medicine will have to take into account its field-specific behavior, and thus require some sort of comparison within the field. As far as I can gather, this is never done, either in the construction or in the usage of these indices.

3. Even if we can make a comparative index, for example by taking ratios or percentiles within a field, it is not likely to hold up against historical data. That is, given some index (h, g, or whatever) for some string theorist, we can come up with a `normalized' one by taking its ratio with the same index for Witten, but the same normalization is not likely to make any sense for Born or Einstein, say. Of course they were not string theorists, and neither is `normal' a word one should use for any index related to Witten. Still, the explosion of citations is a relatively recent phenomenon, related to the explosion of papers, so the variation of any index with time, for individuals as well as within fields, needs to be taken into account.
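The percentile idea above can be made concrete, which also shows why the same raw index means different things in different fields. A sketch, with entirely hypothetical field samples of my own invention:

```python
def percentile_in_field(value, field_values):
    """Percentage of researchers in the field sample whose index
    is at or below the given value."""
    if not field_values:
        raise ValueError("need a non-empty field sample")
    at_or_below = sum(1 for v in field_values if v <= value)
    return 100.0 * at_or_below / len(field_values)

# Hypothetical h-index samples for two fields:
math_h = [3, 5, 6, 8, 10, 12, 15]
oncology_h = [10, 18, 25, 30, 40, 55, 70]

# The same raw index of 12 sits at very different percentiles:
print(percentile_in_field(12, math_h))      # about 85.7
print(percentile_in_field(12, oncology_h))  # about 14.3
```

Even this normalization inherits all the problems of the raw index, and, as argued above, the reference sample itself drifts with time, so a percentile computed against today's cohort says little about researchers of an earlier era.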

4. Many scientists work across disciplines, many more work across subfields. It makes no sense for such people’s work to be evaluated by a single index, as the index may have different ranges in the different fields or subfields. For example, someone working mostly in mathematics and occasionally in string theory may end up with an index which is low compared to string theorists and high compared to mathematicians. How should something like that be used?

5. Indices are used for different purposes at different career stages. So it does not make sense to use the same index for people at different stages of their career.

6. There may be `social’ factors in the rise of citation count of specific papers or individuals — some are obvious and `nearly academic’ ones, like the popularity or currency of a field or a problem — the bandwagon effect. Then there is the `club’ effect — I cite you, you cite me, friends cite friends — which can work wonders in small subfields. There may also be less academic and more career-oriented reasons — it is very likely that papers cite probable referees more often, so that a paper does not come back for revisions simply because `relevant literature was not cited.’ I would not be surprised if this mechanism gets reinforced for people with many collaborators — a paper might be rejected if it did not cite the papers of the referee’s collaborators.

There are also several issues special to Indian science, which have to do with how appointments and promotions are usually made in India. As G. Desiraju noted in his letter to Current Science,

it was possible, in the days before we had scientometric indicators, for committees of wise men to simply declare an incompetent as an outstanding scientist.

Unfortunately, it is still possible. But that is another discussion.

Chinese competition

August 13, 2010 Leave a comment

Apparently the Dept. of Science and Technology has sent a question to all the research institutes under it: how can India be more competitive with China and Japan in science and technology? The question, it seems, was originally asked in Parliament.

To many, a big part of the answer is obvious: dedicate more resources to school and college education, open more universities, and do not concentrate funding in research institutes. But the research institutes will not say this.

A smaller part of the solution is to support Indian science journals.

Indian Science Journals

November 19, 2009 2 comments

Why are science journals published from India so bad? Here is a list of impact factors of Indian science journals. That list is from the year 2000, but the situation has improved only marginally, as this report about 2008 shows. Of course, impact factor may not be a very good measure of the importance of a journal, since it has more to do with immediacy than lasting impact, among other things. But even by other measures, Indian journals are ranked very low.

One could of course say `Who cares?' So what if Indian journals are not very good, as long as the work produced by Indian scientists, published in foreign journals, is rated highly? Is the place of publication relevant if the contributions are good? I do not have a good answer to that, although I would like to pose the counter-question: since science progresses through good research, why do we care about who did that research? In other words, why do we care if Indian science languishes, as long as good science is being done somewhere in the world?

In any case, the quality and importance of research done in a country seems to have some sort of positive correlation with the quality of journals published from there — we should count Western Europe (minus the UK) as one country for this purpose. The evolution of such quality with time also seems to be correlated with the publication of better journals, as in the case of Singapore or China in recent times.

So if we agree (as many do, including myself) that India needs to publish better quality research journals, we should find some way of getting better quality research into the journals published from India. Our famous scientists, many of whom often shed crocodile tears for Indian science, refuse to publish any of their good quality work (in many cases, any work at all) in Indian journals. It is not clear to me why, since electronic archives such as this ensure that a paper (in almost all fields of science except medicine) is seen all over the world even before publication. Papers would be read from the archives and cited regardless of where they are subsequently published.

But these scientists cannot be coaxed into publishing in Indian journals that easily. And if they do not publish there, these journals are not going to grow in stature. So I have a suggestion, one similar to what used to be the rule in post-WW2 Europe:

Any paper written using a grant from an Agency of the Indian Government must be published in a journal published in India. If a research grant is given jointly by an Indian and a foreign agency, a predetermined fraction of the papers written using it should be published in Indian journals.

If the Indian funding agencies follow this rule, Indian journals will start looking up again within a few years, become competitive in the world, and attract good research from abroad.