The evaluation of research quality is a task attracting increasing attention as the world turns more and more to evidence-based decision making. The work of scientists and historians is regularly reviewed by institutional administrators to ensure a high quality of scholarship and to determine where to deploy scarce resources. One of the most relied-upon components of research assessment is the review of publications authored by a particular scholar. And although publications are difficult to evaluate objectively, the standard method for many years was the journal impact factor, which counts the citations a journal's recent articles receive from other publications and divides by the number of articles published, yielding a numeric score for the journal. It soon became prestigious for scholars to have their papers published in a journal with a high impact factor.
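As a rough sketch of the arithmetic behind that score, the standard two-year calculation divides the citations a journal's articles from the previous two years receive this year by the number of citable items it published in those two years. The figures below are invented purely for illustration.

# Minimal sketch of the standard two-year impact factor calculation.
# All numbers are made up for illustration.
def impact_factor(citations_to_prior_two_years: int,
                  items_published_prior_two_years: int) -> float:
    """Citations received this year to articles from the previous two years,
    divided by the number of citable items published in those two years."""
    return citations_to_prior_two_years / items_published_prior_two_years

# e.g. a journal whose 2010-2011 articles drew 450 citations in 2012,
# having published 150 citable items in 2010-2011, scores 3.0
print(impact_factor(450, 150))  # 3.0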
But an increasing number of academics now recognize that the impact factor may be measuring something other than the quality of a particular paper, much less of a particular researcher. Digital publishing, and the possibility of monitoring online usage and readership, means that evaluating publications can now draw on a wider measure of merit. Information technology now allows the individual elements of the scholarly communication ecosystem (generally articles published in research journals) to be tracked more accurately and the activity around them measured.
This has led to what is commonly called the alt-metrics movement. The term describes a set of indicators that falls outside formal citation of one publication by another and instead includes things like the number of times a paper is downloaded or viewed; the number of times a paper is added to a link-sharing or bookmarking service such as bit.ly; the number of times a paper is “liked” on certain social networks, some of which now cater to communities of scientists; and whether a paper is mentioned or cited in a Wikipedia article or tweeted by a Twitter user.
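As a rough sketch of how such indicators might be gathered for a single paper, the following tallies a handful of hypothetical counts into one record. The field names, numbers, and unweighted sum are invented for illustration and do not correspond to any particular alt-metrics service.

# Hypothetical record of alt-metrics indicators for one paper.
# Field names and counts are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AltMetrics:
    downloads: int = 0
    views: int = 0
    bookmarks: int = 0
    likes: int = 0
    wikipedia_mentions: int = 0
    tweets: int = 0

    def total_events(self) -> int:
        # Simple unweighted tally; real services weight sources differently.
        return (self.downloads + self.views + self.bookmarks
                + self.likes + self.wikipedia_mentions + self.tweets)

paper = AltMetrics(downloads=120, views=340, bookmarks=15, likes=42,
                   wikipedia_mentions=1, tweets=27)
print(paper.total_events())  # 545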
All of these indicators demonstrate interest in a particular research paper without waiting the years it can take for the paper to be formally cited in subsequent work. One could easily imagine that the higher the alt-metrics indicators, the greater the interest in the work, and hence the stronger the indication of the quality of the research it represents. Of course, the alt-metrics movement is still very young, and the professional evaluation of a scientist still considers a wide range of activities and feedback mechanisms. But the availability of data on usage, views, and links to scientific research outputs (articles) ought not to be ignored, particularly when the shortcomings of the traditional impact factor are considered.
Research evaluation through the publication record is increasingly important to librarians because the publication habits of scholars ultimately influence increasingly limited library budgets. It is widely recognized in scholarly communication studies that academic journals serve two primary audiences: reader-subscribers and scholar-authors, most of whom are required to publish to maintain professional status. Taken together with the transition from subscription-based journals to open access journals, this means the quality assessment of research output is something librarians have been cultivating an interest in, and alt-metrics offers an entirely different perspective on the evaluation of scholarly publications.
One Comment
The ability to track interest in scientific papers before formal citations in the peer-reviewed literature appear is an important development. Scholars can now gauge the level of interest in their research topics and, if it is sufficiently high, use the collected metrics to persuade administrators that continued or increased departmental funding is warranted. And though Facebook “likes” can never replace formal citation or in-depth academic review, it’s always good to know the extent to which your ideas are reaching fellow scholars.