Measurable outcomes in the voluntary sector -- what are they for?
The quest for measurable and predictable outcomes has now become ubiquitous in the voluntary sector. But why has it become so popular? And what does it achieve?
The present emphasis on measuring outcomes is seen by many people as a way of liberating voluntary sector organisations that win government contracts from being micromanaged by those contracts. Rather than specifying exactly how a contract is to be delivered, government simply identifies the outcomes it is looking for, and it is up to the organisation to find its own way of meeting them. This has an obvious attraction, but it perhaps means that people have not looked closely enough at the consequences and difficulties of trying to measure outcomes. We are now finding that the measurement of outcomes is extending beyond the world of government contracts. A good example of this is New Philanthropy Capital, which states its mission as:
New Philanthropy Capital is passionate about ensuring that the charities with the best results attract the most funding. Our independent research and advice helps donors direct their support for maximum impact.
Again this has an obvious attraction, but it does seem to assume that discerning who produces the 'best results' is straightforward. Martin Brookes, NPC's director of research, has argued that charities have not faced the same level of scrutiny as other sectors, and has highlighted how increased external scrutiny and measurement contributed to improved performance in other areas such as health and education. Similar mechanisms, Brookes argues, could work in the charity sector.
Many people, of course, would doubt that increased external scrutiny and measurement have improved performance in health and education, and it is disturbing that Brookes makes this claim in such an unquestioning way. There is at the very least a debate to be had about these issues, and it is to this debate that I hope my paper will contribute.
One of the main engines in the British voluntary sector for driving forward the outcomes approach is the Charities Evaluation Service, which supplied me with the following information:
In January 2000, the United Way of America carried out a survey of 391 projects, each operated by a different agency, in a systematic effort to determine the extent to which programmes had profited from outcome measurement and the use of the results.
Respondents agreed that implementing programme outcome measurement was helpful, particularly in:
· communicating programme results (88%)
· focusing staff effort on common goals and purposes (88%)
· clarifying the purpose of the programme (86%)
· identifying effective practices (84%)
· successfully competing for resources (83%).
What is interesting is that this research does not say that outcome measurement produces better outcomes for the people these agencies are working with. Neither does it appear to look at the problems that 'programme outcome measurement' creates, probably because it is based on questionnaires filled in by project managers -- not a form of research which is going to tell you very much about what is really going on inside an organisation.
Perhaps the real answer to why an outcomes approach has become popular is that it enables people to manufacture a demonstration that their work is effective. A method of measuring outcomes may or may not improve effectiveness, but what it does do is provide a language with which the voluntary sector can justify its work and so receive funding. This, however, creates its own problems, and it is these that I will go on to explore.
What always strikes me about attempts to measure outcomes in the voluntary sector is the unreliability of the research on which we base our measurements. This becomes clear when we assess our research in the light of standard research methods.
First of all there is the Randomised Controlled Trial (RCT), the contemporary gold standard for research. All medicines which are licensed have to undergo RCTs before they are let loose on the public. An RCT takes two groups of similar people: one group is treated with the real thing, the other (the control group) with a placebo. The results for the two groups are then compared, and the treatment is considered effective if the treated group has better outcomes than the control group. No voluntary group, of course, is ever able to conduct an RCT; they simply don't have the resources, and in many cases it is not possible to create the kind of controlled environment in which an RCT needs to operate. I often get the impression that what people are trying to do in researching outcomes is to get as close to an RCT as possible, but this always leaves things feeling unsatisfactory and half-baked, because there is never a control group with which to compare results.
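For readers who want the logic of an RCT spelt out, it can be sketched in a few lines of code. This is purely illustrative -- the function name, group sizes and effect size are invented for the example -- but it shows why the control group is the heart of the method:

```python
import random

def run_trial(effect, n=200, seed=1):
    """Simulate a minimal RCT: two similar groups, one treated and one
    given a placebo; return the difference in their mean outcome scores."""
    rng = random.Random(seed)
    # Each participant's outcome is a noisy score; the treatment shifts
    # the treated group's scores by `effect`.
    treated = [rng.gauss(0, 1) + effect for _ in range(n)]
    control = [rng.gauss(0, 1) for _ in range(n)]
    mean = lambda xs: sum(xs) / len(xs)
    # The treatment is judged effective only if the treated group does
    # measurably better than the control group.
    return mean(treated) - mean(control)
```

The whole inference rests on having `control` to subtract: remove the control group and the mean outcome of the treated group tells you nothing about what would have happened anyway -- which is precisely the position most voluntary organisations find themselves in.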
The second alternative is ethnographic research. This strikes me as a much more satisfactory model for researching the work of voluntary organisations, but it creates equally difficult problems. Ethnographic research is based on what is called participant observation: the researcher talks to people about what they are doing, but also spends time with them, so that he or she can see what they actually do rather than merely what they say they do. This is an extremely useful research tool and provides a very fine-grained and complex picture of the research subject, but it is incredibly intensive and requires someone with very particular skills and training. Again, no voluntary sector organisation is ever able to do anything like this. We tend to rely on what people say to us, but most of us are aware that this is problematic: people who have spent time with us are likely to give a positive assessment of the programme irrespective of what real difference it has made to their lives. Often we can engage in a kind of participant observation ourselves, as we see how people's lives have changed -- but our perspective is far from objective, especially if a major concern is to justify our own work.
So it is clear that these two methods of research, which are reckoned to produce the highest-quality data, are not available to voluntary organisations, and this leaves us in a quandary: how do we justify the work we do? Nowadays this means 'How do we measure outcomes?' I have been involved in trying to develop indicators for a community development project and it has proved a difficult and unsatisfactory process, but this is not an uncommon experience. As Taylor and Bury say in their report on the Expert Patients Programme, there are 'problems involved in proving what is the real cause of outcomes in the complex fields of human thought and behaviour'. People's behaviour may change, but it is a different matter saying why it changed -- in particular because what people say caused the change in their behaviour is not reliable.

But these kinds of considerations are all swept under the methodological carpet when it comes to measuring voluntary sector outcomes, because what people are looking for is not truth but justifiable evidence. They are looking for a language with which to communicate to funders -- and that increasingly means government. Unfortunately, especially in the light of the Gershon efficiency reviews, the only language the government seems to understand is that of so-called value for money. And it is the money side of this which is important rather than the value, because the money side is relatively easy to measure whilst the value side is far more problematic. In my experience, no matter what government says about wanting to maintain quality of service, when it comes down to the real business it is the numbers which count. This is what the tools for producing measurable outcomes are about: turning the work we do as voluntary sector organisations into numbers. It often seems that it doesn't really matter how accurate these numbers are, or how closely they relate to real lives; what matters is that you can produce the figures in black and white.
I believe that virtually none of the figures produced by voluntary sector organisations as indicators of their outcomes would pass the test of being included in peer-reviewed journals. But this does not seem to concern government; they are not going to send out teams of ethnographers to assess the reliability of the figures they have been given -- that certainly wouldn't be cost-effective! This means that front-line workers are actually expected to act as researchers into complex human behaviour. This is a job they generally lack the skills to do, and they are also about as far from objective researchers as you could hope to get, since their livelihood depends on the research they produce. Systems are therefore produced which seek to control the research process, but the process is inevitably flawed because the people who are being judged on the basis of the research are the ones who are required to produce the data. What has been lost is trust -- and, as Onora O'Neill pointed out in her Reith Lectures, this loss has a corrosive effect on society. In talking about these issues with people, I find that they are very aware of the unreliability of the data they are producing, as well as being frustrated and disillusioned at the amount of paperwork they are doing.
As the New Economics Foundation points out, we are so fixated on monetary efficiency that we have forgotten to look for real effectiveness. Provided some kind of service is delivered at as low a cost as possible then, at least in the short term, nothing else seems to matter -- and, as we all know, it is the short term which counts in politics. The problem is exacerbated by the fact that in public life only one kind of knowledge counts: the kind of logical, deductive knowledge which measurable outcomes seem to produce but, as we have seen, probably don't. This is really the core of the matter.
The issue, therefore, comes down to epistemology, i.e. how we know things. How do I know that the work I am doing is effective? This is a very real question which faces each of us in our work. Increasingly we are expected to answer it using measurable indicators, but is this really the best way for us to know how effective we are being? Certainly not if we look at how we live our day-to-day lives. What measurable indicator might there be, for instance, to inform me about whether my wife loves me? The number of kisses she gives me? The number of other men she sleeps with? The issue quickly becomes ludicrous. We understand love not through measurable indicators but through a different kind of knowledge -- a knowledge which emerges out of relationships and which relies on our intuition, alongside other more obvious facts. It is not that intuition is the only way of knowing things, but it is intuition which enables me to interpret and understand my observations. Through my human faculties I bring together all the many different experiences I have had with my wife, and I am able to come to the conclusion that she truly does love me. I cannot prove this; we know from the example of Othello that trying to prove love is likely to destroy it. Trying to prove love turns it into a distant object rather than a real relationship. This is the case in all but the most specialised of human endeavours -- yes, we use observable, measurable facts, but this goes alongside our intuitions and the knowledge we gain from our relationships and our feelings. Using the full range of human knowledge is the best way to assess the effectiveness of our work. Measurable facts can provide us with a skeleton, but it is a skeleton which tends to be dry and meaningless unless it is fleshed out with real human experience.
This kind of questioning is not encouraged in today's voluntary sector; it tends to be seen as a waste of time when we could be getting on with the 'real work' of delivery. But I believe that the serious quest for the truth, rather than the merely convenient, will ultimately create the truly groundbreaking and innovative initiatives which bring real change. We need to recognise that voluntary organisations are complex living systems rather than simple machines producing easily quantifiable outcomes.
How might we then begin to make better sense of the genuine desire that the work of the voluntary sector produce real, tangible outcomes? Here are a few awkward comments and uncomfortable questions which might begin to mark out the ground.
· Questions need to be asked about the status of the research on which the measurement of outcomes is based. How reliable are the questionnaires which we hand out -- are their findings confirmed by any other research? Is the research we do aimed at finding truth, or merely at the convenient accumulation of data?
· We also need to recognise that just because an outcome is difficult to measure it doesn't mean that it isn't real. What will be the consequences of only funding people to produce outcomes that are easily measurable?
· We need to recognise that predicting outcomes is highly questionable, particularly if we are genuinely concerned for 'empowering' people rather than squeezing them into our predetermined targets. It seems to me that the strength of voluntary organisations is to be with people and respond quickly and flexibly to what they need, rather than be corralled by outcomes set years previously when the funding bid was made.
· If we are going to measure outcomes then we must also measure the negative consequences of spending time and effort on the measurement. We also need to do some serious research on the long-term consequences for organisations that focus on the measurement of outcomes. Does it, for instance, lead to lying and dishonesty in order to produce good figures? Quantitative measurements need to go hand-in-hand with ethnographic research which gets under the skin of how organisations actually function.
· We need to consider the impact that trying to control work through setting targets has on people -- especially through the loss of reliance on trust as the fundamental building block of community.
· Finally we need to look at ensuring that measuring (and particularly predicting) outcomes is, overall, for the benefit of users, rather than just making organisations more competitive in a culture which is determined by the ability to demonstrate measurable outcomes. Otherwise there will be no benefit in the whole process. We need to consider the whole system rather than just how one organisation succeeds in the system.
If we are genuinely concerned for evidence-based practice then we must ask these questions. If we are not able, or not prepared, to ask these questions and consider these issues, then I believe our credibility as charities is called into question.
Outcomes and empowerment -- a guide to outcomes and funding applications
Successful funding applications are increasingly dependent on the ability to demonstrate the outcomes of your work. But what does this actually mean in practice? Below I outline the implications for your work and funding applications.
The first mistake which many people make is doing the wrong
kind of work. In order to be successful
in an outcome orientated funding application it is important to choose the
right kind of work to do. This is the
key to successfully funding your work.
You need to be driven not by what is needed in your community but by what you can get money for.
Secondly we need to think about outcomes. Of course everyone wants their work to have a
good outcome but that is not what is being talked about in this instance. You can have the best outcomes in the world
but this is irrelevant if those outcomes are not predictable. You need to choose a piece of work which has
outcomes which can be predicted in advance.
The third point is a necessary corollary of the previous. Your outcomes need to be short-term. It is a waste of time doing a piece of work which is only going to have outcomes years in the future no matter how impressive they may be. These long term outcomes are irrelevant not least because they are highly unpredictable. Funders are only interested in short-term gains even if the short term gains turn out to have disappointing long-term outcomes. You need to learn to think no more than a year ahead.
The fourth point is perhaps the most important. We can often be deceived into thinking that it is what we achieve that is important -- this is dangerous thinking which can all too easily mislead us. What is important is what we are seen to achieve. In our postmodern world we have been liberated from a concern with the real, and need to learn to focus on the surface of things. You are not going to be a successful fundraiser if you keep going on about achievements which can't be clearly and simply demonstrated. Preferably, all your achievements should be able to be turned easily into numbers.
Finally, there is a special skill which will almost guarantee your success as an outcome orientated fundraiser: the ability to piggyback on hidden inputs. Schools provide a good example.
A successful school is often one which makes use of these hidden inputs, i.e. the support committed parents give to their children. This may be simply a home environment which is conducive to learning, or it may even involve direct input by well-educated parents into assessed coursework. The more that a school can make use of these hidden inputs, the more successful it will be. Governments have, of course, got wise to some of these hidden inputs and have started to factor them into the way they judge the outcomes of different schools.
This demonstrates that we need to be creative with hidden inputs. To be truly successful they need to remain hidden from funders, but successful use of them will enable you to appear much more successful than you actually are. And this is the key to successful outcome orientated fundraising.
Well. Is this really the necessary consequence of outcome orientated funding? Are we being forced into unethical fundraising which is more concerned for the appearance of things than real sustainable long-term change? We need to think more carefully about what outcomes are really about in complex communities rather than the simplified world of profit generating businesses and economic models.
How does change really come about in real communities? This is the fundamental question with which we need to wrestle. It is not an easy one to answer, and we would be mistaken to think anyone can answer it with confidence, but we can perhaps put down some useful markers:
· Change is much easier to see in retrospect than it is to predict.
· Change is created by a multiplicity of factors rather than one or two discrete interventions.
· Outside, often global, factors have considerable impact on what happens in a locality.
 Charities Evaluation Service, 'The case for an outcomes focus'.
 David Taylor and Michael Bury (2007), 'Chronic illness, expert patients and care transition', Sociology of Health & Illness 29(1), 27–45. doi:10.1111/j.1467-9566.2007.00516.x
 New Economics Foundation (2008), Unintended Consequences: How the efficiency agenda erodes local public services and a new public benefit model to restore them.
 As a senior figure in a large charity recently said 'we are no longer able to produce a Rolls-Royce service because the councils won't pay for it'