Alex Hulkes is Strategic Lead for Insights at the ESRC, and is responsible for developing our ability to evaluate and carry out data-informed analysis of ESRC investments, policy and operation.
Here he highlights why we publish our application and award data, and what conclusions we might be able to draw from it.
You may have noticed that we’ve just published a new set of application and award data showing the number of applications and awards from each research organisation (RO) that has applied for ESRC funding in the last five financial years. The set also contains similar data on numbers of applications and awards based on the research disciplines used to classify grants. Between them, these two views of who is applying and what they’re getting give a good impression of the ESRC funding environment.
Why publish this data?
First, as a public organisation we feel that we should share whatever data can usefully be shared, even if (or perhaps because) we can’t foresee all the ways in which it could prove valuable for others.
And second, we’d like to encourage ROs to use this information to help them understand their own application behaviour. That may well call for further analysis from outside the ESRC, and if anything of interest crops up as a result, we’d be very interested to hear about it.
To set that particular ball rolling we have used the data to carry out some basic analyses and present what we think it all tells us. Since much of the discussion among applicants and ROs centres on application success rates, these have been the main focus of our analysis.
Some of the conclusions may be slightly surprising, some much less so. One of the less surprising conclusions is that success rates have dropped recently, and that this is down to an increase in demand for funding. Our budget has been pretty much flat for the past few years so any increase in demand quickly leads to a decrease in success rates.
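The arithmetic behind that relationship is straightforward: a flat budget fixes the number of awards we can make, so the success rate depends almost entirely on how many applications come in. A minimal sketch, using purely illustrative numbers rather than actual ESRC figures:

```python
def success_rate(awards, applications):
    """Success rate is simply awards divided by applications."""
    return awards / applications

# Hypothetical fixed annual award count under a flat budget.
awards = 150

# As demand rises, the success rate falls even though nothing else changes.
for applications in (600, 750, 900):
    rate = success_rate(awards, applications)
    print(f"{applications} applications -> {rate:.0%} success rate")
```

Running this shows the rate sliding from 25% to 17% with no change at all in the funder's behaviour, only in demand.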
On the more interesting side, we find little evidence that many ROs (possibly none at all) have a success rate which differs meaningfully from the overall average.
Yes, there may be a few over- or under-performers emerging from the data (and of course this is performance solely in terms of success rates, which wouldn’t be our first-choice measure of anything) but in general the picture that we see is best understood through the lens of normal variation. Suggestions of anyone cornering the market aren’t supported by the data.
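One way to make the "normal variation" idea concrete is a funnel-plot style check: if every RO really shared the overall success rate, an RO making n applications would still, through chance alone, see a rate within roughly two standard errors of the average about 95% of the time. The sketch below uses hypothetical figures, not ESRC data, and the two-sigma band is an assumption chosen for illustration:

```python
import math

def within_normal_variation(awards, applications, overall_rate):
    """Is this RO's success rate inside a two-sigma binomial band
    around the overall rate? If so, chance alone explains it."""
    rate = awards / applications
    # Standard error of a proportion, doubled for a ~95% band.
    half_width = 2 * math.sqrt(overall_rate * (1 - overall_rate) / applications)
    return abs(rate - overall_rate) <= half_width

overall = 0.25  # hypothetical overall success rate

# (name, awards, applications) for three hypothetical ROs
for name, won, made in [("RO A", 5, 20), ("RO B", 12, 30), ("RO C", 1, 25)]:
    verdict = "consistent with" if within_normal_variation(won, made, overall) else "outside"
    print(f"{name}: {won}/{made} is {verdict} normal variation around {overall:.0%}")
```

Note how wide the band is at typical application volumes: even an RO winning 12 of 30 (a 40% rate against a 25% average) sits inside it, which is why apparent over- or under-performers rarely survive this kind of check.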
Success rates by discipline paint a slightly different picture, though in the end they too tend to show a range of outcomes which is consistent with normal variability. Of the areas we support, ‘Education’ seems to have an unusually low success rate overall, while ‘Political science and international studies’ appears to have a high success rate. But there is a dependency on the routes to funding chosen which makes the picture more complicated than this headline suggests.
We expect to update the data annually, and the analysis too. If you have any suggestions or comments on either, please email email@example.com.
Visit the ESRC website for further details on our performance data, including demand management and grant processing.