Hopefully I've whetted your appetite with the SROI audit of Quality Checkers. Now it's worth explaining a little about the SROI audit process (you can compare this with other ways of auditing social good on the next page).
SROI takes two forms: it can be either an evaluation or a forecast.
An evaluation is an audit of an existing service: it tells you what the service is doing, and makes recommendations where improvements are needed. It helps to answer the question
"which services help our population, and to what extent?"
A forecast looks at a proposed project and helps to make the case for the investment, or advises that the investment would not be wisely spent. As far as possible, an SROI forecast will allow a quick calculation – "is it worth investing this much money in this project/initiative? Or should we invest it elsewhere?"
The first point to make is that it is a framework, not a straitjacket. It's based on seven principles, which I described in the SROI audit of Quality Checkers. The usual process follows six stages, though with experience some of these stages can be bundled together. Assigning a financial equivalence value is not a requirement, though again with experience it is possible to assign robust financial equivalence values to some of the benefits identified, and this helps enormously when using the evaluation to decide whether resources are being properly spent, or the forecast to decide whether to go ahead with the business case.
Perhaps the most important principle is that ‘value’ depends on what the stakeholders think, instead of what the person proposing to provide the service thinks. This could sensibly be applied in all other projects – "it's not what you value, it's what the recipient values".
SROI has credibility – it uses internationally recognised principles, and there is internationally recognised training and accreditation. The best SROI audits are conducted by an independent consultant rather than by someone internal to the organisation, and within the UK a number of bodies – including the Office for Civil Society, the Centre for Social Justice, nef (the new economics foundation), the SROI Network, and the Scottish Government – are promoting it as the standard way for statutory commissioners (local government and the NHS commissioning board) to justify the decisions they made.
For this reason, it is also valuable to charity trustees – because it is a standard, well-recognised framework, they can compare funding options using equivalent information from different proposals.
There is still room for personal bias and a personal approach to assigning values; the framework and guidance are likely to limit this, and the results are far less likely to be subject to ‘optimism bias’.
For example, with the Quality Checkers SROI report, the 'most likely' values were all confirmed by organisations that spend real cash, and save real cash. The 'worst-case' and 'best-case' figures were also confirmed by these groups. Groups that did not spend real cash (government regulator, friends and family) had their benefits recorded qualitatively, but did not have financial equivalence values assigned.
I spoke with the Quality Checker team themselves – the experts by experience – and together we identified a set of benefits. This was only a preliminary round: we used it to identify who the main stakeholders were, how they fitted into groups of stakeholders, and then whom we needed to speak to.
Then Skills for People introduced me to everyone we had identified, and I called each one to explain what I was doing and what SROI is. We followed this with a face-to-face interview or a phone call to discuss the benefits and their value; I used my experience to coach them through identifying benefits.
We used the standard SROI impact map format to name the outputs, outcomes and strategic goals they had intended to achieve by using the Quality Checkers. When people got stuck, I talked to them about what other people in the same stakeholder group had identified, which both triggered ideas, and prompted discussion.
An important principle was applied here – I only recorded benefits people claimed for themselves. If someone was not available, I could not simply apply benefits to that organisation "pro rata across the group of stakeholders".
From these discussions we ended up with hundreds of separate benefits. I mapped them and looked for similarities, and managed to identify where benefits were actually very similar: different groups of stakeholders were getting the same results, which raised a dilemma – to avoid double counting a benefit, which stakeholder group should have the value assigned?
Many of the people I spoke to at the various organisations were involved in direct care and were number-phobic – they weren't good at counting how much, how many, or what value. They often put me in touch with senior managers, accountants, supervisors, and people collecting data, so that I was able to get tangible numbers. Sharing experiences between different organisations helped – one organisation might explain where they looked to find the numbers that we could then combine to form an impact, and I could ask another organisation for the same set of numbers.
People often didn't know how much they knew, or how valuable it was. Again my experience helped – I was able to coach them through. Of course some values were not available from the stakeholder, and had to be found from literature searches and other publications – a particularly valuable resource here is the PSSRU annual report "Unit Costs of Health and Social Care".
We assigned a financial equivalent value to some impacts and not others:
Each stakeholder helped me to work out a baseline (what they had been getting before the Quality Checker audit), an amount of change, and what else might have influenced that change (i.e. what proportion of the change should be attributed to the Quality Checkers).
Where different stakeholder groups claimed the same benefit, rather than try to give a percentage attribution, I assigned the whole of the benefit to the stakeholder group which identified the highest value in interview. We were concerned with the overall value rather than the value by stakeholder group, so this was more for convenience and transparency than anything else.
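The baseline–change–attribution arithmetic above can be sketched in a few lines. All figures here are hypothetical, chosen purely for illustration – none come from the report:

```python
# Hypothetical figures, for illustration only (not taken from the report).
baseline = 20_000.0    # annual cost to the stakeholder before the audit (£)
after = 14_000.0       # annual cost after the Quality Checker audit (£)
attribution = 0.6      # share of the change credited to the Quality Checkers

change = baseline - after          # gross change: £6,000
impact = change * attribution      # attributed impact: £3,600
print(f"Attributed impact: £{impact:,.0f}")
```

The point of the attribution factor is exactly the question each stakeholder helped answer: of the change you saw, how much was really down to the Quality Checkers rather than everything else going on?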
To make sure we were only counting what was affected by this particular project (the Quality Checkers), we needed to ask a number of questions:
For many projects and services, the impact on the recipient only lasts as long as the service is available, or for a short time afterwards. With the Quality Checkers, many of the changes went on building after the Quality Checkers had left – there was a real and permanent cultural change. The inspiration to go out and find work because you've seen "someone like you" in a respected professional job goes on building and may spread to other people; whereas a bit of extra money to spend on additional staff support will cease to have an effect once the money is no longer available.
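The standard SROI impact map handles this fading with a "drop-off" rate per year. A minimal sketch with hypothetical numbers – a benefit worth £1,000 a year that fades by 50% each year after the first, totalled over three years:

```python
# Hypothetical drop-off sketch (illustrative numbers only).
annual_value = 1_000.0   # financial-equivalence value of the benefit per year (£)
drop_off = 0.5           # the benefit fades by 50% each year after the first
years = 3

# Year 1 counts in full; each later year is reduced by the drop-off rate.
total = sum(annual_value * (1 - drop_off) ** y for y in range(years))
# 1000 + 500 + 250 = 1750
```

A benefit that keeps building, like the cultural change described above, would have zero (or even negative) drop-off, which is why it is worth so much more over time than a one-off cash injection.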
Transparency and verification are very important when calculating the impact of the Quality Checkers. Stakeholders examined two separate drafts of the report. The first came once I'd gathered all the benefits and before I started assigning amounts and financial equivalences; it allowed them to check whether I'd understood their words at interview, to change their minds about something they'd said, and to disagree with someone else's assessment ("I don't think that's very likely").
Once I had gathered all the numbers and started to assign financial values, I sent out a second draft of the report, so that people could say "that number looks ridiculous, I don't believe it" or whatever was necessary to end up with impact values and financial equivalences that were acceptable to the stakeholders. Some benefits had their values reduced at this stage and others increased; the overall SROI ratio ended up about the same.
For the purpose of the Quality Checkers' report, we defined a time period of 2½ years, so that all calculations were restricted to this period. The 2½ years corresponded to about £100,000 invested by the purchasing organisations in the Quality Checker services.
We started by discounting figures to Net Present Value (NPV) at the standard 3.5% rate, but over 2½ years it made almost no difference (within the bounds of accuracy of the financial equivalences assigned), so for simplicity this calculation was removed.
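For readers who want to check that claim: at 3.5% a year, even a benefit falling due at the very end of the 2½-year period is discounted by only about 8%, and benefits spread across the period by less than that – well within the accuracy of the financial equivalences. A quick check:

```python
# Discount factor at 3.5% per year for a benefit 2.5 years away.
rate = 0.035
years = 2.5
factor = 1 / (1 + rate) ** years
print(f"Discount factor: {factor:.4f}")   # roughly 0.92, i.e. an ~8% reduction
```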
A number of assumptions were made when calculating the values of the savings.
Costs were easier to calculate: they represented both the total amount spent with Skills for People and the amount spent by organisations developing their own teams of Quality Checkers – transport, support, payments made, and so on.
The savings divided by the costs gave us the SROI ratio. A second draft of the report was then shared across all the stakeholders who could discuss/confirm/challenge, and I followed up with another round of telephone interviews to ensure that all feedback was included.
The SROI ratio determines the payback period. For a ratio of 10 or more, the payback is so fast that an exact calculation is probably not relevant. For an SROI ratio of between 1 and 3.5, however, it can be more important to calculate the payback period and compare it with the opportunity cost of alternatives (this is relevant where the resource could be used on another service, e.g. for service portfolio balancing).
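The ratio and payback arithmetic can be sketched as follows. The figures are hypothetical, chosen only to match the scale described above – they are not the report's actual results:

```python
# Hypothetical figures (illustrative only, not the report's results).
period_years = 2.5
total_costs = 100_000.0      # investment over the period (£)
total_savings = 1_000_000.0  # financial-equivalence value of the benefits (£)

sroi_ratio = total_savings / total_costs   # 10.0: every £1 invested returns £10

# If benefits accrue evenly over the period, a rough payback period is:
payback_years = period_years / sroi_ratio  # 0.25 years, i.e. about 3 months
```

With a ratio this high the investment pays for itself within months, which is why an exact payback calculation adds little; at ratios nearer 1, the same arithmetic becomes worth doing carefully.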
In effect, I was evaluating the evaluators, auditing the auditors. The Quality Checkers themselves are auditors of the user experience.
Everyone I spoke to said it was very useful – both to have the final report, and to go through the process of working out what difference it made to them. Many organisations trying to develop their own Quality Checker teams were very glad to see which parts of Quality Checker work made the most difference, and to understand how much they should sensibly invest in developing their own teams. It means that the teams can go to the commissioners of services and justify the investment; commissioners are able to see what they'll get for their money, and make decisions.
Some places use the report as a forecast for investment in other similar services, e.g. health Quality Checkers, user experience for elderly care, etc. It identified just how important it is to have a single standard national "way": standards against which services are assessed, and a method for calculating performance against those standards, which makes it possible to put the standards into national and local contracts and expect them to be met. Putting local user experience standards into contracts raises lots of questions, such as: how do you measure? And if there is a dispute, whose version of performance do you use?
The SROI report goes in with submissions to relevant organisations explaining why they should use the Quality Checkers. Any organisation with an SROI report should use it as part of the "selling" process – explaining why they represent value for money (this assumes that the SROI report showed that they do represent value for money! Otherwise it should indicate what they need to change). The report uses the standard recognised methodology and is independent, so it is highly credible.
What it demonstrates is that Quality Checkers aren't something that you "ought to do" – they're something that makes your resources go further.
What this also illustrates is that the SROI framework can be used to support decisions in all areas where the return isn't directly financial.