Evaluation of scientific work is most often done by measuring scientific productivity and its impact through citation analysis. Citation analysis includes measuring the number of citations, the types of citations, self-citations (among authors, co-authors, institutions, countries or journals) and "independent" citations. In evaluating the status of an author, institution or country, it matters in which journals the research results were published, to what degree they were noticed, and who noticed them and formalised that by citing. The status of the journal in which the research is published, as well as the status of the citing journal, are among the frequently used indicators for evaluating individual scientists and institutions, and both are expressed through the impact factor (IF). However, to use the IF in general, and the so-called standard or Garfield IF in particular, as a basic indicator for evaluating the work of an institution or individual is to misunderstand its real meaning. The journal IF is a measure of the frequency with which an "average" article in that journal was cited during a certain period of time. The IF helps in evaluating a journal's quality; it is not meant for evaluating a single article or a single scientist. A journal's IF can potentially serve as an indirect measure of an article's value, because we can suppose the article has passed a strict review procedure, but the real value is established a posteriori, through citation counts and the article's own influence on the IF.
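For readers unfamiliar with how the standard (Garfield) IF is computed over its two-year citation window, the following minimal sketch in Python may help; the figures and the function name are invented for illustration and do not come from any of the cited sources.

```python
def impact_factor(citations, citable_items):
    """Standard two-year (Garfield) IF for year Y: citations received in
    year Y to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Invented figures: 1,200 citations received in 2007 to a journal's
# 2005-2006 output of 400 citable items give an IF of 3.0 for 2007.
print(impact_factor(1200, 400))  # 3.0
```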
Some standard scientific productivity indicators include the number of publications, impact measured through the total number of citations received, the average number of citations per paper, the number of papers with an above-average citation count, and the potential value of articles inferred from the IF of the journals that published them. J. E. Hirsch (1), a physicist, was well aware of the shortcomings of these indicators and proposed a new one intended to show the recognisable impact of a single scientist, although it can be used for journals as well. He proposed a single number, the "h-index," as a particularly simple and useful way to characterise the scientific output of a researcher. A scientist has index h if h of his Np papers have at least h citations each and the other (Np - h) papers have no more than h citations each. In practice, this means that if an author has an h-index of 10, then he has 10 published papers with a minimum of 10 citations each. The minimal possible total citation count in this case is 100.
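The definition translates directly into a simple computation: sort the per-paper citation counts in descending order and find the largest rank h at which the h-th paper still has at least h citations. The sketch below, with invented citation counts, is our illustration rather than anything given by Hirsch.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Sort in descending order, then find the largest rank h such that
    # the h-th most cited paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: ten papers with at least 10 citations each -> h = 10;
# the three least cited papers do not affect the index.
print(h_index([48, 33, 30, 22, 17, 15, 12, 12, 11, 10, 6, 3, 1]))  # 10
```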
As a scientometric indicator, the h-index mainly serves for comparing scientists from the same discipline and with similar lengths of work experience; the same holds when it is used for journals. Namely, two individuals with similar h-indices are comparable in terms of their overall scientific impact, even if their total numbers of papers or citations are very different. Conversely, when comparing two individuals (preferably with similar lengths of work experience in science) who have a similar total number of papers, or a similar total citation count, but very different h-values, the one with the higher h is likely to be the more accomplished scientist (1). According to Braun et al. (2), the h-index combines the effect of "quantity" (number of publications) and "quality" (citation rate) in a rather specific, balanced way.
Batista et al. (3) consider the h-index to have several advantages: it combines impact and productivity, it is not sensitive to extreme values such as articles with no citations or hyper-cited articles, and it allows the direct identification of the most relevant works in terms of citations. On the other hand, the indicator reveals nothing about highly cited or hyper-cited articles, nor about articles whose citation counts fall below the index. One frequently sees situations where a scientist has published some significant papers with extremely high citation counts, yet his or her h-index is not especially high. A frequent case among scientists with a high h-index is that they work in teams and cite each other. This is the case, for example, in high-energy physics, where the number of authors per paper is often higher than 50. Batista et al. (3) and van Raan (4) warn that, when using the h-index, it is important to investigate the impact of the number of authors on the total citation count. These authors have shown that the higher the number of authors, the higher the number of self-citations, which, unless they are excluded from the final count, may inflate the h-index. On the other hand, it is important to note that for some scientific fields, especially those still developing, self-citation is a logical and expected phenomenon.
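To make the inflation effect concrete, the sketch below recomputes the h-index after stripping self-citations; all figures are invented for illustration and are not taken from references (3) or (4).

```python
def h_index(citations):
    # Largest rank h (counting papers in descending order of citations)
    # at which the h-th paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

# Each paper: (total citations, of which self-citations). Invented data.
papers = [(14, 6), (12, 5), (11, 4), (10, 4), (9, 3), (8, 3), (7, 2), (5, 1)]

h_all = h_index([total for total, _ in papers])                  # h = 7
h_independent = h_index([total - own for total, own in papers])  # h = 5
print(h_all, h_independent)
```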
Taking all of the above into consideration, the h-index mainly reflects the recognisability and consistency of an individual scientist or journal within a specific discipline. Here, "recognisability" means that a recognisable scientist has a relatively high number of papers, each of which has a significant independent citation count. Independent citations are citations an author receives from colleagues unknown to him or her, from other institutions or, in the case of small countries, from other countries.
As with other indicators for evaluating scientific work, when considering the h-index it is important to be aware of the scientific discipline, its branches, and the topicality of the work (5). For the promotion of physicists at leading research universities, Hirsch (1) suggests, as guidelines based on his calculations, h ≈ 12 for an associate professor, h ≈ 18 for a full professor, and an average h ≈ 45 for membership in the National Academy of Sciences of the United States of America, although he allows for exceptions. He suggests that the h-index of a successful physicist with 20 years of scientific activity should be 20, while an h-index of 40 indicates an "outstanding scientist in a major top research laboratory". He gives the example of physicists who won the Nobel Prize, whose h-indices range from 70 to 90. The average h-index of a physicist who was a candidate for the Nobel Prize in the 20-year period 1985-2005 was 35.
According to Hirsch, the top 10 most cited scientists in the life sciences in the period 1983-2002 had a median h-index of 57, which is significantly higher than for physicists. However, the life sciences are too broad an area for us to lightly compare the h-indices of molecular biologists with those of environmental or biodiversity biologists, or biologists in the fields of floristics or zoology.
Cronin and Meho (6) compared h-indices and total citation counts in the information sciences. They analysed the 31 most cited authors (the most cited information science scholars according to SSCI) from faculties of information science in the US for the period 1999-2005. After excluding self-citations, the h-index values ranged from 5 to 20. They showed a positive correlation between citation count and h-index, which suggests that the total citation count is a reliable indicator of the impact and influence of an individual scientist's work. The average h-index for the information sciences was 11. Oppenheim (7) analysed British scientists in the library and information science discipline and obtained an average h-index of 7.
Jokić and Šuljok (8) analysed the h-indices of PhD holders in the natural and social sciences in Croatia for the period 1996-2005. For the social sciences as a whole, h-indices ranged from 1 to 6; 57.9% of authors had an h-index of 1, while only 9% had an h-index of 4 or greater. In the natural sciences, 94.6% of h-index values fell in the range 1-20. An h-index of 4 or greater was found for 61% of physicists, 56.3% of chemists, 41.3% of biologists and 19.8% of mathematicians. For technical reasons, this study did not differentiate independent citations from self-citations.
For this short overview, we analysed the h-index values of papers covered by WoS (Web of Science) in clinical chemistry and medicinal chemistry from 1995 to 2008. The total number of papers was 8,675. The ten most productive authors had between 21 and 33 papers each and h-indices ranging from 8 to 19. For comparison, and to illustrate the importance of the time span analysed, we also analysed the papers covered by WoS from 1985 to 2005. In this period, the ten most productive authors had between 20 and 27 papers, with h-indices ranging from 6 to 17. The study did not exclude self-citations. We were also interested in the h-indices of Croatian scientists and experts in this field. The most productive author in the period 1995-2008 had 21 papers covered by WoS and an h-index of 3. We should mention that almost 50% of these papers were published in 2007 and 2008 and had not yet had time to receive a significant number of citations. This makes the current h-index value relatively low in comparison with those of the most productive scientists and experts in the researched fields in this period.
The scientific community has shown great interest in the h-index as a scientometric indicator. The Scopus citation database was the first to offer automatic h-index calculation alongside the already available paper counts, citation counts and average citations per paper; WoS soon followed.
Besides authors, the h-index is increasingly used as an indicator for the evaluation of journals. Braun et al. (2) compared a set of journals by both h-index and IF. The results showed that the journals Physical Review Letters, Astrophysical Journal and Journal of the American Chemical Society were in the top 20 by h-index, yet all three ranked outside the top 100 by impact factor, which demonstrates that the two indicators correlate only partially.
For this text, we examined the most prestigious journals (those with the highest IF) in the clinical chemistry and medicinal chemistry subject fields, as indexed by WoS for the time span 1995-2007. The results were as follows: Clinical Chemistry, with an IF of 4.803 for 2007, had an h-index of 80; Current Medicinal Chemistry had an IF of 4.944 and an h-index of 33; Journal of Medicinal Chemistry had an IF of 4.895 and an h-index of 30; Clinical Chemistry and Laboratory Medicine had an IF of 2.618 and an h-index of 16; and Clinica Chimica Acta had an IF of 2.601 and an h-index of 15. Even from these data alone, it is clear that a journal's h-index, when used alongside the IF, can help in forming a clearer picture of the journal's status within a specific discipline.
Taking into account the facts mentioned above, the h-index is one of the indicators that can help in evaluating the scientific work of an individual scientist, institution, discipline or journal. However, it should not be considered independently of the scientific discipline, the length of the scientist's work experience, scientific productivity, co-authorships, the citation count, the types of citations and other relevant parameters.
Notes
Potential conflict of interest
None declared.
References
1. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569-72.
2. Braun T, Glänzel W, Schubert A. A Hirsch-type index for journals. The Scientist. 2005;19:8.
3. Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68:179-89.
4. van Raan AFJ. Comparison of the Hirsch index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics. 2006;67:491-502.
5. Egghe L. Dynamic h-index: The Hirsch index in function of time. JASIST. 2007;58:452-4.
6. Cronin B, Meho L. Using the h-index to rank influential information scientists. JASIST. 2006;57:1275-8.
7. Oppenheim C. Using the h-index to rank influential British researchers in information science and librarianship. JASIST. 2007;58:297-301.
8. Jokić M, Šuljok A. [Productivity and its impact according to the ISI and Scopus citation databases for the period 1996-2005]. In: Prpić K, ed. [Beyond the myths about the natural and social sciences: a sociological view]. Zagreb: Institut za društvena istraživanja u Zagrebu; 2008. p. 133-58. (in Croatian)