Quality Assurance and Impact Measurement


The key takeaways from this article are:

1. Open Peer Review is a more transparent alternative to conventional peer review.

2. Conventional bibliometric indicators such as the impact factor do not allow any conclusions about the quality of individual articles.

3. Alternative metrics (altmetrics) include usage figures, mentions, and discussions in various networks.

(Open) Peer Review

Peer Review is the standard quality assurance process, especially in the case of scholarly journals. Researchers working in the same field as the authors (“peers”) review the manuscripts submitted to a journal, point out errors and ambiguities, and recommend to the editors that a contribution be accepted, rejected, or revised and resubmitted. In the case of the traditional single-blind peer-review process, the reviewers know the names of the authors, but not vice versa; in the case of double-blind peer review, the authors and the reviewers remain anonymous to each other.

One criticism of classical peer review is its lack of transparency. Readers of an article have no way of knowing whether the review was based on appropriate scientific criteria or whether it was conducted in a merely superficial way. Moreover, conflicts of interest – for example because the reviewers are colleagues or competitors of the authors – are not apparent to outsiders.

In order to make the peer-review process more transparent and, ideally, to increase the quality of the reviews and thus also the quality of the articles, more open procedures are being discussed and have, in part, already been implemented. However, open peer review, as it is called, is not clearly defined, and it may refer to different procedures – or to combinations thereof (Ross-Hellauer, 2017; Wolfram et al., 2020).

When the reviews, the authors’ responses, and the editors’ decision are published alongside the article, the readers can see how rigorous the peer-review process was and how the decision of the editors came about. Errors or gaps in the review process can be more easily noticed by fellow researchers. Examples of journals with this type of open peer review include PeerJ (optional if all participants agree), the PLOS journals, and the EMBO Press journals (optional).

Another type of open peer review is also known as post-publication peer review or interactive peer review. In a first step, the submitted manuscript is immediately published online after initial checking. It can then be publicly discussed by invited reviewers or all interested fellow researchers. The reviewers may be anonymous or named. The manuscript is revised by the authors on the basis of the comments, and, in a second step, the final version is published by the journal. Previous versions and the comments remain accessible. Examples of journals with this type of open peer review include Atmospheric Chemistry and Physics, F1000Research and Informationspraxis.

However, open peer review can also take place completely informally without corresponding infrastructure, namely when published preprints are publicly commented on by fellow researchers, and the comments are taken into consideration when improving the manuscript (Frick, 2020).

The Declaration on Research Assessment (DORA) recommends:

"Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions." (DORA)

Metrics

Whereas peer review constitutes basic quality assurance before publication of a scholarly work, various methods have been developed to quantitatively describe the (actual or presumed) importance of journals, publications, and researchers for the scientific community.

The best-known and most influential metric is the Journal Impact Factor (JIF), which nowadays is calculated and marketed by Clarivate Analytics. The JIF indicates how often, on average, the articles that a particular journal published in the two years preceding the reference year were cited in that year. It was originally developed as a tool for comparing the impact of different journals and as an aid for librarians’ acquisition decisions. Although the JIF refers to a journal as a whole and says nothing about the impact or indeed the scientific quality of individual articles, it is often used to assess scientists’ research performance. This misuse is only one of the reasons why the JIF is frequently criticised. The JIF is also highly discipline-dependent and thus not comparable across disciplines. Moreover, it takes only the two years preceding the reference year into consideration. Finally, its calculation is not very transparent and is often not independently reproducible (Larivière & Sugimoto, 2019).
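In simplified terms (leaving aside Clarivate’s detailed rules on which document types count as citable items), the calculation for a reference year Y can be written as follows:

    \mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ by items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}

For example, a journal whose articles from 2021 and 2022 were cited 250 times in 2023, and which published 100 citable items in those two years, would have a 2023 JIF of 2.5.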

The San Francisco Declaration on Research Assessment (DORA), which has been signed by numerous research-performing organisations, research funding agencies, and other institutions, advocates inter alia that journal-based metrics such as the JIF should not be taken into account when assessing job applications, funding proposals, and the like, and that the outputs of scientific research should be evaluated based on their content rather than on the journal in which they were published.

Whereas conventional bibliometric indicators such as the JIF are based on citations in the scientific literature, alternative approaches – known as Altmetrics – try to take other types of communication and reactions into consideration, for example usage figures, mentions on news portals, and discussions on social networks. Examples include the Altmetric Attention Score provided by Altmetric.com and PlumX Metrics provided by Plum Analytics. Alternative metrics have been evaluated inter alia in the project *metrics at Göttingen State and University Library (SUB).
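As a rough illustration of how such data can be retrieved, the following Python sketch queries the public REST API of Altmetric.com for a single article identified by its DOI. The endpoint, the placeholder DOI, and the response fields used here are assumptions based on the publicly documented v1 API and may differ in practice:

    # Minimal sketch: fetch the altmetrics record for one article from Altmetric.com.
    # Endpoint and field names are assumptions based on the documented public v1 API.
    import requests

    def fetch_altmetrics(doi):
        """Return the Altmetric.com record for a DOI, or None if the DOI is not tracked."""
        url = f"https://api.altmetric.com/v1/doi/{doi}"
        response = requests.get(url, timeout=10)
        if response.status_code == 404:  # DOI not (yet) tracked by Altmetric.com
            return None
        response.raise_for_status()
        return response.json()

    record = fetch_altmetrics("10.1234/example.doi")  # placeholder DOI for illustration
    if record:
        # "score" holds the Altmetric Attention Score; the count fields cover news and social media posts.
        print(record.get("score"), record.get("cited_by_msm_count"), record.get("cited_by_posts_count"))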

Conventional and alternative metrics alike are often produced and made available for a fee by commercial providers. Reliable citation data are available mainly via commercial databases such as Web of Science or Scopus. These databases are not only fee-based; they also do not cover many smaller publishers and journals or much non-English-language literature. To remedy this, initiatives such as the Initiative for Open Citations and the Initiative for Open Abstracts have emerged within the scientific community. They aim to make these important metadata openly accessible, thereby making it possible to find publications and their content in different locations, to link them, to perform bibliometric analyses, and to examine citations and other relationships (Peroni & Shotton, 2020).
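To give an idea of what openly available citation data look like in practice, the following Python sketch queries the OpenCitations COCI REST API for the citations received by a given DOI. The endpoint, the placeholder DOI, and the field names are assumptions based on the public API documentation:

    # Minimal sketch: list open citation links for a DOI via the OpenCitations COCI REST API.
    # Endpoint and field names are assumptions based on the public documentation.
    import requests

    def fetch_citations(doi):
        """Return the open citation records in which the given DOI is the cited work."""
        url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()

    for citation in fetch_citations("10.1234/example.doi")[:5]:  # placeholder DOI, first five records
        # Each record links a citing DOI to the cited DOI and gives the citation's creation date.
        print(citation.get("citing"), "->", citation.get("cited"), citation.get("creation"))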

References

Further Reading
