European Research Executive Agency
News article · 26 April 2022

New insights into the success and failure of competitive research proposals

A recent study used machine learning and qualitative text analysis to investigate the evaluation of grant proposals, thereby generating new insights into grant peer reviewing.


The new study by Darko Hren, David G. Pina (REA), Christopher R. Norman and Ana Marušić, published in the Journal of Informetrics, analysed 3667 grant applications to the Initial Training Networks (ITN) of the European Commission’s Marie Curie Actions under the Seventh Framework Programme (FP7). The researchers sought to better understand the grant peer review process. They found that the content of review reports matched the predefined evaluation criteria, suggesting that the European Commission briefs its independent expert evaluators consistently. They also found that detected shortcomings or weaknesses affect a proposal’s evaluation outcome more than its strengths do.

Method:

The research project combined machine learning and qualitative analysis methods to investigate reviewers’ comments in grant proposals’ Evaluation Summary Reports (ESRs). The 29 most common themes in reviewers’ comments were identified in relation to the four Initial Training Networks (ITN) criteria applicable under FP7: C1-Science & Technology, C2-Training, C3-Implementation, and C4-Impact. The frequency of specific themes, e.g. involvement of the private sector, within reviewers’ comments under the four criteria was then analysed. In this way, the authors were able to establish and evaluate the relationship between the textual content of the ESRs and the outcome of the grant proposals’ evaluation.
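The counting step described above can be sketched in a few lines. This is an illustrative example only: the comment records, theme labels and criterion strings below are invented for demonstration and do not come from the study’s data.

```python
# Illustrative sketch of theme-frequency counting over reviewer comments.
# All records and theme names below are invented for demonstration.
from collections import Counter

# Each hypothetical coded comment: the FP7 ITN criterion it falls under,
# the theme assigned by the coders, and whether it was raised as a weakness.
comments = [
    {"criterion": "C1-Science & Technology", "theme": "Novelty", "weakness": False},
    {"criterion": "C3-Implementation", "theme": "Involvement of the private sector", "weakness": True},
    {"criterion": "C3-Implementation", "theme": "Involvement of the private sector", "weakness": False},
    {"criterion": "C2-Training", "theme": "Supervision arrangements", "weakness": True},
]

def theme_frequencies(comments, criterion):
    """Count how often each theme appears under one evaluation criterion."""
    return Counter(c["theme"] for c in comments if c["criterion"] == criterion)

freq = theme_frequencies(comments, "C3-Implementation")
print(freq)  # Counter({'Involvement of the private sector': 2})
```

Frequencies computed this way, per criterion and per proposal, can then be related to the evaluation outcome, which is the relationship the authors investigated.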

Findings:

The authors found that the European Commission’s grant evaluators work cohesively, adhering to the pre-determined assessment criteria. The themes identified in the evaluators’ comments matched the evaluation criteria for the granting scheme. This suggests that the grant peer review process is well understood and followed by expert reviewers, with a low risk of “grading heterogeneity”, i.e. diverging interpretations of the grading scales, which should be avoided to ensure a fair and constructive grant evaluation procedure.

In addition, the study showed that identified weaknesses are the strongest factor determining whether a grant proposal is successful: the absence of weaknesses, not the number of strengths, made the critical difference. The researchers attribute this observation to the “negativity bias”, i.e. the human tendency to give more attention and weight to negative than to positive information. However, their findings may also reflect the high quality and competitiveness of ITN grant proposals.
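The pattern described above can be illustrated with a minimal sketch on invented numbers: funded and rejected proposals differ mainly in how many weaknesses reviewers flagged, not in how many strengths they listed. The figures below are synthetic and purely illustrative, not the study’s results.

```python
# Illustrative comparison of strength vs. weakness counts by outcome.
# All numbers are synthetic; they only mimic the reported pattern.
proposals = [
    {"strengths": 6, "weaknesses": 0, "funded": True},
    {"strengths": 7, "weaknesses": 1, "funded": True},
    {"strengths": 6, "weaknesses": 4, "funded": False},
    {"strengths": 7, "weaknesses": 5, "funded": False},
]

def mean(key, funded):
    """Average count of `key` across proposals with the given outcome."""
    group = [p[key] for p in proposals if p["funded"] == funded]
    return sum(group) / len(group)

# Strength counts are similar across outcomes; weakness counts are not.
print(mean("strengths", True), mean("strengths", False))    # 6.5 6.5
print(mean("weaknesses", True), mean("weaknesses", False))  # 0.5 4.5
```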

A look into the future:

The study paves the way for further research. For example, it would be interesting to investigate how strongly a single comment in an individual report can sway the overall outcome of a proposal’s evaluation. The authors also encourage additional machine learning approaches to grant peer reviewing. In this way, the processes and policy changes concerning grant assessment will be better understood, benefiting researchers, evaluators and policy-makers.


Hren D, Pina DG, Norman CR and Marušić A. What makes or breaks competitive research proposals? A mixed-methods analysis of research grant evaluation reports. Journal of Informetrics, Vol. 16, No. 2, May 2022, 101289.

 

Disclaimer: All views expressed in this article are strictly those of the authors and may under no circumstances be regarded as an official position of the European Research Executive Agency or the European Commission.
