Opinion: evaluating research impact is complicated by the fact that impact means different things to different people

Publicly funded research has been identified as a key mechanism for enhancing economic growth, competitiveness and innovation at local, national and international levels. The research impact agenda has gained considerable attention across academic and policymaking circles in recent years. The discussion surrounding research impact has shifted from demonstrating scientific excellence towards generating wider societal impacts that address grand challenges. As such, researchers are increasingly required to demonstrate accountability and value for money when applying for public funding.

The emergence of the research impact agenda has been accelerated by the introduction of the Research Excellence Framework (REF) in the United Kingdom. The REF is an evaluation tool used to measure the quality of research in UK higher education institutions. In 2006, the UK government announced its decision to replace the previous evaluation framework, the Research Assessment Exercise, with the REF. This step represented a shift away from research evaluations based solely on scientific quality towards a more metrics-based approach.

The REF assesses research across three areas: the quality of outputs, the impact of the research and the research environment of the unit submitted for assessment. Each component is assessed separately and then combined into an overall score. In REF 2014, societal impact was given a 20 percent weighting, and this figure has increased to 25 percent of the overall score for REF 2021. This suggests that demonstrating societal impact will gain even greater importance in future funding decisions.

However, the emphasis on demonstrating research impacts is currently outpacing the development of robust, widely accepted tools to measure research impacts. As such, one has to ask if policymakers have put the cart before the horse in their efforts to align research funding with demonstrating research impacts.

Measuring and evaluating research impact is complicated by a lack of clarity around what constitutes an impact, as impact can mean different things to different people. For example, what constitutes impact for an academic will differ from what constitutes impact for an engineer or a health practitioner. Impact may be generated simultaneously across economic, health, environmental, public policy, human capacity, technological, societal, academic and cultural dimensions. Policymakers and practitioners must be cautious when defining research impact, as narrow definitions may overlook potentially important unintended impacts.

The lack of consensus is not surprising given that the research impact agenda is a relatively recent phenomenon, complicated by conceptual and methodological issues which make its measurement difficult. Furthermore, robust impact assessments are undermined by the tendency of researchers and funding bodies to "count what can be easily measured", rather than measuring what "counts".

The science and innovation literature is awash with definitions and conceptualisations of research impact, yet the term is surprisingly often used without being defined at all. The REF provides the most widely used definition of impact: "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia".

Have funding bodies become too focused on commercially-driven research at the expense of fundamental and experimental research?

Similarly, Science Foundation Ireland (SFI) defines impact as: "the direct and indirect 'influence' of research or its 'effect on' an individual, a community, or society as a whole, including benefits to our economic, social, human and natural capital".

Typical definitions of research impact tend to highlight positive effects associated with research activities while excluding negative outcomes. However, this underplays an important feature of the innovation process related to the idea of creative destruction. For example, technological advancements in ICT have been linked to increased automation of jobs. Recent work has found that increased automation may put as much as 47 percent of current employment at risk. The advancement of technology in this instance will have adverse effects on a sizeable portion of the population, yet these effects are unlikely to be captured using current metrics-based measurement tools.

Recently, questions have been raised as to whether funding bodies have become too focused on commercially-driven research at the expense of fundamental and experimental research. Fianna Fáil’s science spokesperson James Lawless has criticised funding priorities, questioning whether SFI has become too focused on commercially attractive research at the expense of basic research traditionally conducted at universities.

In a similar vein, commentators have argued that research evaluation has become "complicated, burdensome and intrusive", representing a genuine threat to academic autonomy. Furthermore, it has been argued that demonstrating wider societal impacts is much more difficult in certain disciplines, such as the social sciences, the humanities and sustainability research.

These issues must be given serious consideration when developing future research impact assessment tools. Future frameworks must consider the intended and unintended, positive and negative, and long- and short-term impacts of research. This means there is a greater need to design effective indicators of research impact that are sufficiently robust to allow comparison across disciplines and institutional structures. Yet these indicators must also be sufficiently flexible to facilitate measurement of research across different levels of technological intensity.


The views expressed here are those of the author and do not represent or reflect the views of RTÉ