Qualitative analysis software for monitoring and evaluation


 

Developing systems for the monitoring and evaluation of services, interventions and programmes (or programs, in American English spelling) is a particular skill that requires great flexibility. Since every intervention to be investigated is different, and the aims of projects, funders and service users vary, evaluations have to draw on a diverse toolkit of methods.


Qualitative methods are often an important part of this approach. While many evaluations (and service delivery partners) would prefer to demonstrate a quantitative impact such as cost-benefit, outcomes like user satisfaction, behaviour change and expected long-term impacts can be difficult or costly to put a quantitative figure on. Investigating smaller demographic subsets can also be challenging, especially when key groups are represented in numbers too small to realistically sample to statistical significance. There are also situations where qualitative methods give a depth of detail that is invaluable, especially when gathering detailed suggestions for improvements.


Too often, monitoring and evaluation is an overlooked part of service delivery, tacked on at the end with little budget or time to deliver in. Yet a short qualitative evaluation can often provide useful insight without the resources needed for a detailed assessment with a full quantitative sample and modelling.

 

But managing qualitative data comes with its own set of challenges. Monitoring will often require looking at data over a long time frame, and the end consumers of evaluations can be sceptical of the validity of qualitative data, so they need to be shown how it fits their deliverable criteria. Qualitative analysis software can help on both these points (and more). It can help 'show your working' by demonstrating how particular statements in the qualitative data support conclusions, manage large amounts of longitudinal data of different types, and ultimately ensure that evaluators can focus on what they do best – choosing the right approach to collecting monitoring data, and interpreting it for the end users.

 


Let's look at the first aspect here – managing and collating qualitative data. Whatever the methods chosen – focus groups, interviews, open-ended surveys – qualitative software can keep all the different data together in one place, allowing for cross-referencing across sources as well as looking at results from a single method. It also makes it much easier to draw in other 'qualitative' documents to provide context, such as project specifications or policy documents, and to collate informal sources of data, such as comments and feedback from service users collected outside the formal discovery process.


But my favourite thing that qualitative software facilitates is the development and re-application of assessment criteria. There will usually be fairly standard aspects to evaluate, such as impact, uptake and cost-effectiveness. But funders and commissioners of M&E may have their own special interests (such as engagement with hard-to-reach populations) which need to be demonstrated.


In qualitative software these criteria become the framework for coding the qualitative data: assigning supportive statements or evidence to each aspect. In our qualitative analysis software, Quirkos, these are represented as bubbles you add data to, sometimes called nodes or themes in other software. Once you have developed and refined a framework that matches a set of evaluation criteria, you can reuse it in other projects, tweaking it slightly to match each project's specifications.
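Quirkos handles all of this visually, but the underlying idea of a reusable coding framework can be sketched in a few lines of plain Python. This is purely illustrative – the function names and example statements below are invented, not Quirkos's internals:

```python
# A coding framework maps each evaluation criterion (theme) to the
# statements from the data that support it. Names here are hypothetical.

def make_framework(themes):
    """Start an empty coding framework from a list of theme names."""
    return {theme: [] for theme in themes}

def code_statement(framework, theme, statement):
    """Assign a supporting statement from the data to a theme."""
    framework.setdefault(theme, []).append(statement)

# A standard evaluation framework...
standard = ["impact", "uptake", "cost-effectiveness"]

# ...reused for a new project, tweaked to add a funder's special interest.
project = make_framework(standard + ["hard-to-reach engagement"])
code_statement(project, "impact",
               "The drop-in sessions changed how I shop for food.")
code_statement(project, "hard-to-reach engagement",
               "Outreach workers visited the hostel weekly.")

for theme, statements in project.items():
    print(f"{theme}: {len(statements)} supporting statement(s)")
```

The point of the sketch is the reuse: the `standard` list plays the role of a saved framework that gets copied and tweaked per project, rather than rebuilt from scratch each time.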


That way, it is easy to show comments from users, service delivery staff, or other documentation that supports each area. This helps not only the researcher in their work, but also in communicating the results to end users. You can show all (or some) of the data supporting the conclusions in each area, as well as contrasts and differences between subsets. In Quirkos, you would use the Query view and the side-by-side comparison view to show how impact differed between groups such as genders or age brackets. The visual overviews that software like Quirkos creates can give funders and budget holders quick insight into qualitative data that is usually time-consuming to digest in full (it also makes for great presentation slides or figures in reports).

 


Of course, all this can be done with more 'traditional' qualitative analysis approaches, such as paper and highlighters, or a huge Excel spreadsheet. But dedicated qualitative software makes sorting and storing data easier, and can save time in the long run by creating reports that help communicate your findings.


I know a lot of evaluators, especially for small projects, feel that learning qualitative analysis tools or setting up a project in qualitative software is not worth the time investment. But if it has been a while since you have tried qualitative software, especially 'new-kids-on-the-block' like Quirkos, it might be worth looking again. Since Quirkos was initially designed for participatory analysis, it can be learnt very quickly, with a visual interface that keeps you close to the data you are investigating.

 

It's also worth noting its limitations: Quirkos is still text only, so exploring multimedia data is not possible, and it takes a very pure-qual philosophy, so there are few built-in tools for quantitative analysis (although it does support exploring mixed-method and discrete data). If you need these extra features, look at some of the more traditional packages such as NVivo and ATLAS.ti, bearing in mind the extra learning curve that comes with more powerful tools.


We think that for most qualitative evaluations Quirkos will have more than enough functionality, with a good trade-off between power and ease of use. There's a free trial you can download, and our licences are some of the cheapest (and most flexible) around. If you have any specific questions about using Quirkos for monitoring and evaluation, we'd love to hear from you (support@quirkos.com) and are always happy to help you learn and use the software. For more on using Quirkos in this field, check out our M&E feature overview.

 

 

References and Resources:

 

The AEA (American Evaluation Association) has a rather outdated list of qualitative software packages:
http://www.eval.org/p/cm/ld/fid=81

 

The British CAQDAS Network has independent reviews of qualitative software and training courses:
https://www.surrey.ac.uk/computer-assisted-qualitative-data-analysis/support/choosing

 

Better Evaluation has just one link, to a chapter giving a very general overview of choosing qualitative software:
http://www.betterevaluation.org/en/resources/choosing_qual_software

 

 

Evaluating feedback

We all know the score: you attend a conference, business event, or training workshop, and at the end of the day you get a little form asking you to evaluate your experience. You can rate the speakers, venue, lunch and parking on a scale from one-to-five, and tick to say whether you would recommend the event to a friend or colleague.

But what about the other part of the evaluation: the open comments box? What was your favourite part of the day? What could we improve for next time? Any other comments? Hopefully someone is going to spend time typing up all these comments, and see if there are common themes or good suggestions they can use to improve the event next year. Even if you are using a nifty online survey system like SurveyMonkey, does someone read and act on the suggestions you spent all that time writing?

And what about feedback on a product, or on service in a hotel or restaurant? Does something actually happen to all those comments, or as one conference attendee once suggested to me, do they all end up on the floor?

In fact, this is a common problem in research. Even when written up, reports often just sit on the shelf, with little influence on practice or procedure. If you want decision makers to pay attention to participant feedback and evaluations, then you need to present them in a clear and engaging way.

 

For the numerical or discrete part of surveys, this is not usually too hard. You can put these values into Excel (or SPSS if you are statistically minded) and explore the data in pivot tables and bar graphs. Then you can see that the happiest attendees were the ones who ranked lunch as excellent, or that 76% of people would recommend the day to others.
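Summaries of that kind really are a one-liner in a spreadsheet or script. As a rough sketch in Python, with a handful of made-up responses standing in for real evaluation forms:

```python
# Hypothetical evaluation forms: each dict is one respondent.
responses = [
    {"lunch": "excellent", "recommend": True},
    {"lunch": "good",      "recommend": True},
    {"lunch": "poor",      "recommend": False},
    {"lunch": "excellent", "recommend": True},
]

# Percentage of respondents who would recommend the event.
pct = 100 * sum(r["recommend"] for r in responses) / len(responses)
print(f"{pct:.0f}% would recommend")  # prints "75% would recommend"
```

The discrete answers reduce to a single number instantly – which is exactly why the free-text comments, which don't, tend to get left behind.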

Simple statistics and visualisations like this are a standard part of our language: we hear and see them in the news, at board meetings, even in football league tables. They communicate clearly and quickly.

But what about those written comments? In Excel you can't really see all the comments made by people who ranked the conference poorly, or see if the same suggestions are being made about workshop themes for next year.

That's what Quirkos aims to do: become the 'Excel of text'. It's software that everyone can use to explore, summarise and present text data in an intuitive way.

If you put all of your conference evaluations or customer feedback in Quirkos, you can quickly see all the comments made by people who didn't like your product. Or everything that women aged 24-35 said about your service compared with men aged 45-64. By combining the numerical, discrete and text data, you have the power to explore the relationships between themes and the differences between respondents. Then you can share these findings as graphs, bubble maps or just the quotes themselves: quick and easy to understand.
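Quirkos does this through its Query view, but the underlying idea – slicing free-text comments by discrete properties – can be sketched in plain Python. All the field names and feedback below are invented for illustration:

```python
# Each row pairs discrete properties with a free-text comment.
feedback = [
    {"gender": "female", "age": 28, "rating": 2,
     "comment": "Signup form was confusing."},
    {"gender": "male",   "age": 52, "rating": 5,
     "comment": "Loved the new layout."},
    {"gender": "female", "age": 31, "rating": 1,
     "comment": "App kept crashing on login."},
    {"gender": "male",   "age": 47, "rating": 4,
     "comment": "Delivery was quick."},
]

def comments_where(rows, predicate):
    """All comments from rows matching a condition."""
    return [r["comment"] for r in rows if predicate(r)]

# Everything said by unhappy respondents (rating below 3).
unhappy = comments_where(feedback, lambda r: r["rating"] < 3)

# Women aged 24-35 versus men aged 45-64.
women_24_35 = comments_where(
    feedback, lambda r: r["gender"] == "female" and 24 <= r["age"] <= 35)
men_45_64 = comments_where(
    feedback, lambda r: r["gender"] == "male" and 45 <= r["age"] <= 64)

print(unhappy)
```

The value of a dedicated tool is that these slices come from clicks rather than code, and the same filters apply to coded themes as well as raw comments.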

This unlocks the power of comments from all your customers, because Quirkos allows you to see why they liked a particular product. And it gives you the chance to be a better listener: if your consumers have an idea for improving your product, you can make it pop out as clear as day.

Hopefully it also breaks a vicious circle: people don't bother leaving comments because they assume they aren't being read, and so organisers stop asking for them, because those sections are left blank or only get generic responses.

 

So hopefully next time you fill out a customer feedback form or event evaluation, your comments will lead to direct improvements, rather than just being lost in translation.