Bibliometrics is the application of statistical methods to bibliographic data, often focussing on how many times research outputs and publications are cited. There are a number of limitations to using bibliometrics to assess research, so they are generally most useful in conjunction with other information such as funding received and peer review of research.
Bibliometrics can help with activities like:
- Demonstrating the impact of research.
- Looking at highly cited journals in a subject area - which can be helpful for deciding where to publish.
- Identifying top researchers in a subject area - for identifying potential collaborators etc.
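Many of the indicators used in these activities are simple to compute once per-publication citation counts have been gathered. As an illustrative sketch (the citation counts below are invented, not drawn from any real author), the widely used h-index can be derived from a list of citation counts like this:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one author's publications
citations = [25, 8, 5, 3, 3, 1, 0]
print(h_index(citations))  # -> 3 (three papers with at least 3 citations each)
```

The same caveats listed below apply to any such calculation: the result depends entirely on which database the citation counts came from and how completely the author's outputs were matched.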
Issues
Be aware that there are complexities to assessing impact using bibliometric data, including:
- Different search tools will have different coverage.
- Authors may be indexed under different variations of names (e.g. Thomson, John vs Thomson, J.T.) or indeed have the same name as another author.
- If you narrow a search by institution, it may be indexed under different names (e.g. Napier University vs Edinburgh Napier University).
Using bibliometric data in the REF
The Research Excellence Framework or REF aims to assess the quality of research in UK higher education institutions, and the results determine how government research funding is allocated. Some of the REF panels may use citation data as part of their assessment of research outputs.
After the first Research Excellence Framework (REF) exercise in 2014, an independent review was set up to look at the role of metrics in the assessment of research. See the July 2015 report The Metric Tide. The main findings of the review included:
- "There is considerable scepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management.
- Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. However, a significant minority are enthusiastic about greater use of metrics, provided appropriate care is taken.
- Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required.
- There is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples.
- The data infrastructure that underpins the use of metrics and information about research remains fragmented, with insufficient interoperability between systems.
- Analysis concluded that no metric can currently provide a like-for-like replacement for REF peer review.
- In assessing outputs in the REF, it is not currently feasible to assess the quality of research outputs using quantitative indicators alone.
- In assessing impact in the REF, it is not currently feasible to use quantitative indicators in place of narrative case studies. However, there is scope to enhance the use of data in assessing research environments".