Cardiff University Statement on Responsible Research Assessment

This is Cardiff University’s Policy Statement on Responsible Research Assessment. It provides a set of principles on the appropriate use of quantitative research metrics. As a University community, we are committed to freedom of enquiry through peer review in its broadest sense. Freedom of enquiry is one of the seven core values underpinning the University’s strategy, The Way Forward 2018-2023 Recast COVID-19.

This Policy Statement builds upon our commitment to the San Francisco Declaration on Research Assessment (DORA), which we signed in November 2019, and demonstrates our alignment with other prominent initiatives in this area, including the Leiden Manifesto for Research Metrics and The Metric Tide report, commissioned by HEFCE (now Research England).

The Metric Tide urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment. It recommends that metrics be considered in terms of:

  • robustness (using the best available data);
  • humility (recognising that quantitative evaluation can complement, but does not replace, expert assessment);
  • transparency (keeping the collection of data and its analysis open to scrutiny);
  • diversity (reflecting a multitude of research and researcher career paths); and
  • reflexivity (updating our use of metrics to take account of the effects that such measures have had).

These initiatives and the development of institutional policies are also supported or required by research funders in the UK (e.g., UK Research and Innovation (UKRI), the Wellcome Trust and the Research Excellence Framework (REF)).

The principles outlined below reflect our commitment to responsible research assessment. To help put this into practice, we will provide additional guidance material on specific research metrics, which will be frequently revisited and informed by consultation with University staff and best practice in the field.


We are committed to applying the following guiding principles where applicable (e.g., in recruitment, promotion and funding decisions):

1. Qualitative peer review remains the primary method of research evaluation at Cardiff University. Carefully selected research metrics, such as article-level metrics (citation-based and alternative metrics (altmetrics)), can be useful indicators of scholarly influence if they are used to support (not supplant) qualitative expert judgement.

2. The quality, influence and impact of research are abstract concepts that cannot be measured directly. There is no simple way to measure research quality, and quantitative approaches can serve only as indirect proxies for it.

3. Disciplinary fields have distinct perspectives on what characterises research quality and different approaches to determining what constitutes a significant research output (for example, the relative importance of book chapters vs journal articles). As a signatory to DORA, we value a diverse range of research outputs, and all outputs will be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes.

4. Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with the distinct publication practices and citation norms seen across disciplines.

5. When making qualitative assessments, we should avoid making judgements based on external factors such as the reputation of authors, or the journal or publisher of the work; the work itself is more important and must be considered on its own merits.

6. Not all indicators are useful or informative, and none will suit all needs; moreover, metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts may help estimate elements of usage, but in others they are not useful at all.

7. Avoid applying metrics to individual researchers, particularly those that do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because the number of papers and citations differs dramatically among fields and at different points in a career.

8. Ensure that appropriate metrics are chosen for the question being investigated, and do not apply aggregate level metrics to individual subjects, or vice versa (e.g., do not assess the quality of an individual paper based on the journal impact factor or the journal in which it was published).

9. Any use of indicators must take account of potential sources of bias and aim to reduce them: such a consideration applies, for example, to differences between disciplines, career stages and full-time equivalent (FTE) status of the individual being assessed, and equality, diversity and inclusion considerations. Any biases must be explicitly acknowledged in analyses and indicators should be normalised by discipline, where relevant.

10. Quantitative indicators should be selected from those that are widely used and easily understood, to ensure that the process is transparent and that they are applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.

11. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the principles in this Policy Statement. Additionally, we commit to considering the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.

12. Metrics (in particular bibliometrics) are available from a variety of services, with differing levels of coverage, quality and accuracy. These aspects should be considered when selecting a source for data or metrics.

This statement is licensed under a Creative Commons Attribution 4.0 International Licence. Please attribute as ‘Developed from the UCL Statement on the Responsible Use of Metrics’.