Here, Caroline Fiennes from Giving Evidence shares findings from a new study into evidence use in non-profit services for mental health.
UK non-profits delivering mental health services are not great at producing or using scientific evidence. This is the main finding of a new study by Giving Evidence. We interviewed 12 such organisations to understand their ‘evidence system’, i.e., how evidence is produced, used, stored, and shared – both ‘outbound’ from them and ‘inbound’ to them.
These non-profits talked of their growing interest in being evidence-based and focusing on impact (and we don’t doubt them), but in practice it’s not happening consistently. Some charities said that they struggle to find and use external research about what is effective in treating or preventing mental health conditions when designing their programmes.
One reason given is the difficulty of accessing, interpreting and applying academic or independent research. For example, much academic research is behind paywalls, so charity staff sometimes resort to sneaking into their former universities to read it, and much of it is fairly unintelligible to non-researchers anyway. Another reason is the claim that there isn’t much relevant research, although some experts and researchers dispute that claim.
However, charities delivering mental health services seem laudably interested in the views of their service users. Three-quarters of the charities we interviewed regularly collect user feedback, and over half have done so on a large scale.
About half of these organisations are producing (or funding production of) impact evaluations, i.e., investigations of the causal effects of their interventions, and many of these seem to be simple pre/post studies, which are open to considerable errors. It may be just as well that not all of them are producing such evaluations, because doing unbiased evaluation research is a specialism which most service delivery organisations don’t have. Instead, they should (we would argue) be using reliable research from elsewhere, which few are.
One charity said that:
“Evidence for us is what our users say works… that is enough for us.”
This concerns us, because the human mind is often misled about what works, and only rigorous research can reveal the reality. Happily, some of the non-profits which are involved in producing evaluation research are doing so in partnership with reputable research institutions.
Undervalued and underfunded?
Sadly, some charities we spoke to seem to be forced into producing low-quality research. Several told us that funders and commissioners require ‘evaluations’ of services but allocate budgets too small to allow for reliable research (e.g., with adequate sample sizes). Most such budgets were only £5–10k, and a few were £20–30k.
For example, one charity said that it has dozens of such budgets each year, which is very frustrating: individually, those budgets only allow for research that is essentially pointless, but collectively they could enable something insightful.
Adding to the knowledge pool?
About half of these charities are producing the kind of research or impact evaluations which could be useful to other organisations. We didn’t have the resources to look at the quality of that research, i.e., to see whether it is reliable and useful. However, dissemination of it is weak, and that’s not really the charities’ fault: there is no incentive for them to share it, and few channels anyway. Moreover, reportedly, “every contract specifies different outcomes, which makes it a nightmare to aggregate” (charity interviewee), and this also prevents comparisons.
One charity said that part of the reason they don’t publish much is that:
“We don’t want competitors to pick this [our intervention] up.”
We have encountered this in other sectors too; it is a major problem (though not of the charities’ making).
On the upside, amongst the charities that do produce this kind of material, we found no evidence of selective publication. We had thought we might find that flattering material is published more often than unflattering material (which creates publication bias), but we did not.
Brutal under-funding of mental health
This is all in a context of brutal under-funding. Mental health accounts for 23% of the UK disease burden, but gets only 13% of the NHS budget and 5% of the UK health research budget. Moreover, charitable giving to mental health is very low: it’s only £714 for every adult with mental health problems whereas donkeys get £2,047 each.
Using evidence-based mental health research to find out what works
Giving Evidence has long said that most charities should not produce causal research, which requires expertise that they don’t have and don’t need. Rather, they should get good at hearing from their target users about what they want and what they think of what they’re getting, and then at finding and using causal research about what works in addressing it. That seems to be the case for charities delivering mental health services. Some organisations help with this, such as the Centre for Mental Health and the Mental Elf.
We recommend that mental health charities work towards (and are funded and incentivised to work towards) finding and applying the relevant rigorous research, and that they work with specialist researchers to produce research where none already exists. We expect to work with some mental health charities on this.
Caroline Fiennes Biography
Caroline founded and directs Giving Evidence. She is one of the few people whose work has appeared in both OK! Magazine and The Lancet. She is on the boards of the US Center for Effective Philanthropy, of the world’s largest charity rating agency Charity Navigator, and of The Cochrane Collaboration (specifically Evidence Aid). She is the Corporation of London’s City Philanthropy Coach, and writes a monthly column in Third Sector magazine. Caroline was named a Philanthropy Advisor of the Year by Spears Wealth Management. More information about Giving Evidence is at http://www.giving-evidence.com/about