Qualitative evaluations: methods, data and analysis

[Image: reports on a shelf]

Evaluating programmes and projects is an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure there are defined and measurable outcomes.


While most evaluations include some numerical analysis, qualitative data is often used alongside the quantitative to show the richness of a project’s impact and to put a human voice into the process. Especially when a project doesn’t meet its targets, or have the desired level of impact, comments from project managers and service users usually give the best insight into what went wrong (or right) and why.


For smaller pilot and feasibility projects, qualitative data is often the mainstay of the evaluation, either because the numbers are too small for meaningful statistical analysis, or because it is too early in a programme to measure the intended impact. For example, a programme aiming to reduce obesity might not be able to demonstrate a lower number of diabetes referrals at first, but qualitative insight in the first few months or year of the project might show how well its messages are being received, or whether the target groups are talking about changing their behaviour. When goals like this are long term (and in public health and community interventions they often are), it’s important to continuously assess the precursors to impact, above all engagement, and this is usually best done qualitatively.


So, what is best practice for qualitative evaluations? Fortunately, there are some really good guides and overviews that can help teams choose the right qualitative approach. Vaterlaus and Higginbotham give a great overview of qualitative evaluation methods, while Professor Frank Vanclay writes at a broader level about qualitative evaluations and innovative ways to capture stories. There was also a nice ‘tick-box’ style guide produced by the old Public Health Resource Unit, which can still be found at this link. Essentially, the tool suggests 10 questions that can be used to assess the quality of a qualitative evaluation – really useful when looking at evaluations that come from other fields or departments.


But my contention is that the appraisal tool above is best used as a guide for producing qualitative evaluations. If you start by considering the best approach, how you are going to demonstrate rigour, and how to choose appropriate methods and recruitment, you’ll get a better report at the end of it. I’d like to discuss and expand on some of the questions used to assess the rigour of qualitative work, because rigour is the thing that most often worries people about qualitative research, and these steps help demystify good practice.


  1. The process: plan the whole evaluation from the outset. What do you plan to do? All the rest will then fall into place.
     
  2. The research questions: what are they, and why were they chosen? Will the questions give the evaluation the data it needs, and will the methods capture it correctly?
     
  3. Recruitment: who did you choose, and why? Who didn’t take part, and how did you find people? What gaps are there likely to be in representing the target group, and how can you compensate for them? Were there any ethical considerations? How was consent gained, and what was the relationship between the participants and the researcher(s)? Did they have any reason to be biased or less than truthful?
     
  4. The data: how did you know that enough had been collected? (Usually when you start hearing the same things over and over – saturation; there’s a toy sketch of this after the list.) How was it recorded and transcribed, and was it of good quality? Were people willing to give detailed answers?
     
  5. Analysis: make sure you describe how it was done and what techniques were used (such as discourse or thematic analysis). How were the quotes that appear in the report chosen, and are any contradictions in the data reported? What was the role of the researcher – should they declare a bias, and were multiple views sought in the interpretation of the data?
     
  6. Findings: do they meet the aims and research questions? If not, what needs to be done next time? Are there clear findings and action points, appropriate to improving the project?
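
To make ‘saturation’ in point 4 a touch more concrete, here is a toy sketch in Python. The interviews and codes are entirely invented, and the stopping rule (two interviews in a row with no new codes) is just a crude stand-in for the judgement a researcher would actually make – but the shape of the check is the same: log the codes applied to each successive interview and watch for when new ones stop appearing.

```python
# Toy sketch: tracking code saturation across successive interviews.
# The interviews and codes below are invented for illustration.
from collections import Counter

# Codes applied to each interview transcript, in the order collected
interviews = [
    ["access", "cost", "staff attitude"],
    ["cost", "waiting times", "access"],
    ["waiting times", "staff attitude", "transport"],
    ["cost", "transport"],            # nothing new here...
    ["access", "waiting times"],      # ...or here
    ["cost", "staff attitude"],
]

seen = set()
theme_counts = Counter()
runs_without_new = 0

for i, codes in enumerate(interviews, start=1):
    new = set(codes) - seen
    seen |= new
    theme_counts.update(codes)
    runs_without_new = 0 if new else runs_without_new + 1
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")
    # Crude stopping rule: two consecutive interviews with no new codes
    if runs_without_new >= 2:
        print("No new codes for 2 interviews in a row - approaching saturation?")
        break

print("Most common themes:", theme_counts.most_common(3))
```

In practice the codes come from careful reading of transcripts rather than a script, but even a simple tally like this makes it easier to show a reader why data collection stopped where it did – and the theme counts feed naturally into the analysis step in point 5.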


Then the final step, for me, is the most important of all: SHARE! Don't let it end up on a dusty shelf! Evaluations are usually seen as a tedious but necessary internal process, but they can be so useful as case studies and learning tools in organisations and groups you might never have thought of. This is especially true if there are things that went wrong: help someone in another local authority avoid making the same mistakes!


At the moment the best UK repositories of evaluations are based around health and economic benefits, but that doesn’t stop you putting the report on your organisation’s website – if someone is looking for a similar project, search engines will do the legwork for you. That evaluation might save someone a lot of time and money. And it goes without saying: look for any similar work before you start a project – you might get some good ideas, and stop yourself falling into the same pitfalls!


What is a qualitative approach?

The benefit of having tastier satsumas is difficult to quantify: to turn into a numerical, comparable value. This is essentially what qualitative work does: it measures the unquantifiable quality of something. Just don’t ask what we mean by ‘quality’ – that opens a whole other can of worms.

In this context, though, ‘quality’ also covers the lack of a quality, or even a negative property such as pain. Pain is a great example of something that people want to quantify: to put on a linear scale out of 10 so that clinicians can prioritise patients and choose appropriate treatments.

But pain is, in a sense, too multidimensional to be rated as a single factor. Firstly, intensity doesn’t always seem to scale linearly, and muscle pain, bone pain and headaches seem to feel very different. People report different tolerances for pain, and how can someone accurately report different levels of pain if they have never experienced the upper bounds? How do you know your patient’s headache is the worst pain imaginable if they’ve never given birth?
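
As a rough illustration of what gets lost when all of this is collapsed into one number, here is a small sketch; the fields are my own invention for the sake of the example, not taken from any validated clinical pain instrument.

```python
from dataclasses import dataclass

# The quantitative approach: one number, easy to compare and chart
pain_score = 7  # "on a scale of 0 to 10..."

# A (still very simplified) qualitative picture of the same patient.
# These fields are invented for illustration only.
@dataclass
class PainAccount:
    site: str             # e.g. "lower back, radiating to left leg"
    character: str        # dull ache? sharp? burning?
    worst_ever: bool      # have they felt anything stronger before?
    can_ignore: bool      # can they carry on with daily life?
    what_has_helped: str  # in the patient's own words

account = PainAccount(
    site="lower back, radiating to left leg",
    character="dull ache, sharp when bending",
    worst_ever=False,
    can_ignore=False,
    what_has_helped="heat packs, lying flat",
)
```

Both records describe the same patient, but only the second tells the clinician what to ask about next.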

A typical qualitative approach would open a dialogue with the patient, partly to get to the bottom of some of these quandaries: Have you ever had pain this strong before? Is it a dull ache, or a sharp pain? Can you ignore it? What has worked well for you in the past? A clinician who gains this deeper level of understanding, coupled with their own knowledge and experience, might make a better treatment decision for the patient.

This is great, although it is probably more time-intensive for the patient and clinician than using a pain thermometer. It’s also something that we typically social humans are really good at doing: talking to others to find out about their experience. When your partner comes home from work, you don’t ask them to rate their day out of 10 and leave the conversation at that. I’d argue that most people are extremely attuned to qualitative discussion: we use it every day, in verbal and non-verbal ways, to understand each other.

Two of the main problems with this approach are repeatability, and the sharing of this understanding.

Can we have a conversation with different patients such that every time we get comparable answers? Perhaps not: in fact, this is probably not even desirable. If we don’t tailor the conversation to the individual patient’s situation, we might miss something. If we have a script and the stock question is ‘Where is your pain?’, the answer ‘In my foot’ calls for the follow-up ‘Which foot?’ – a follow-up that isn’t needed if the answer is ‘my nose’.
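
As a toy sketch of that last point (the questions and branching rules here are invented for illustration):

```python
def follow_up(answer):
    """Choose a follow-up question based on the answer - something a
    fixed script cannot do."""
    if answer == "in my foot":
        return "Which foot?"   # ambiguous: needs disambiguating
    if answer == "my nose":
        return None            # unambiguous: no follow-up needed
    return "Can you tell me more about that?"

for answer in ["in my foot", "my nose"]:
    print(f"{answer!r} -> {follow_up(answer)}")
```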

Secondly, if we have spent all day talking to patients about their post-operative pain, how can we share these insights with others? Is that new pain medication working? Are those fancy dissolvable stitches causing problems for people? A nurse who has been asking these questions all day might have what we can crudely call an intuitive sense of whether people’s pain in general is worse today, but this is difficult to prove and communicate. Another nurse might talk to the same patients and come away with a different perspective. The question is: how can we make use of this qualitative data?

We’ll look at this more in the next post. But there are of course many other descriptions and definitions of qualitative approaches; in fact, Guest et al. (2013) note, somewhat disturbingly, that

There are about as many definitions of qualitative research as there are books on the subject…

This is from Collecting Qualitative Data: A Field Manual for Applied Research (2013), and since the chapter is freely available online, it’s as good a place to start as any.