Participatory Qualitative Analysis


 

Engaging participants in the research process can be a valuable and insightful endeavour, leading researchers to address the right issues and ask the right questions. Many funding boards in the UK (especially in health) make engagement with members of the public, or with the targets of the research, a requirement of publicly funded research.

 

While there are similar obligations to produce dissemination materials and research outputs targeted at ‘lay’ members of the public, the engagement process usually ends at the planning stage. It is rare for researchers to have participants, or even major organisational stakeholders, become part of the analysis process and use their interpretations to translate the data into meaningful findings.

 

With surprisingly little training, I believe that anyone can do qualitative analysis, and engage in tasks like coding and topic discovery in qualitative data sets.

 

I’ve written about this before, but earlier this year we actually had a chance to try this out with Quirkos. It was one of the main reasons we wanted to design new qualitative analysis software: existing solutions were too difficult to learn for non-expert researchers (and for quite a lot of experienced researchers too).

 

So when we ran our research project on the Scottish Referendum, we invited all of the participants to come along to a series of workshops and try analysing the data themselves. Out of 12, only 3 actually came along, and none of them had any experience of doing qualitative research before.

 

And they were great at it!

 

In a two-hour session, respondents were given a quick overview of how to do coding in Quirkos (in just 15 minutes), and a basic framework of codes they could use to analyse the text. They were free to use these topics, or create their own as they wished – all 3 participants chose to add codes to the existing framework.

 

They were each given transcripts from someone else’s anonymised interview: as these were group sessions, we didn’t want people to be identified while coding their own transcript. Each was a 30-minute interview, around 5000 words in length. In the two-hour session, all participants had coded one interview completely, and done most (or all) of the second. One participant was so engrossed in the process that he had to be sent home before he missed his dinner, but he took a copy of Quirkos and the data home to keep working on his own computer.

 

The graph below shows how quickly the participants learnt to code. The y axis shows the number of seconds between each ‘coding event’ (every time someone coded a new piece of text), with the events numbered sequentially along the x axis. The time taken to code starts off high, with questions and missteps meaning each event takes a minute or more, but the time between events quickly decreases: on average, respondents added a new code every 20 seconds. This is after any gaps longer than 3 minutes have been removed – these are assumed to be breaks for tea or debate! Each user made at least 140 tags, assigning text to one or more categories.

[Graph: seconds between successive coding events, plotted in sequence for each participant]
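If coding events are timestamped, the measure behind the graph is simple to compute. Below is a minimal Python sketch, using an invented list of timestamps rather than the actual workshop data: it takes the gaps between consecutive coding events, discards any pause longer than 3 minutes, and averages the rest.

```python
from datetime import datetime

# Hypothetical timestamps for one participant's coding events: invented
# for illustration, not taken from the actual workshop data.
events = [
    "2014-09-10 14:02:15",
    "2014-09-10 14:03:20",
    "2014-09-10 14:03:41",
    "2014-09-10 14:09:30",  # long pause: assumed tea break or debate
    "2014-09-10 14:09:48",
]

GAP_CUTOFF = 3 * 60  # discard pauses longer than 3 minutes (in seconds)

times = [datetime.strptime(e, "%Y-%m-%d %H:%M:%S") for e in events]

# Seconds between consecutive coding events, with long breaks removed.
intervals = [
    (b - a).total_seconds()
    for a, b in zip(times, times[1:])
    if (b - a).total_seconds() <= GAP_CUTOFF
]

print(f"Mean time between coding events: {sum(intervals) / len(intervals):.1f}s")
```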

So participants can simply be used as cheap labour to speed up or triangulate the coding process? Well, it can be much more than that. The topics they chose to add to the framework (‘love of Scotland’, ‘anti-English feelings’, ‘Scottish Difference’) highlighted their own interpretations of the data, showing their own opinions and variations. It also prompted discussion with the other coders about what they thought of the views of people in the dataset, and how they had interpreted the data:


“Suspicion, oh yeah, that’s negative trust. Love of Scotland, oh! I put anti-English feelings which is the opposite! Ours are like inverse pictures of each other’s!”

 

Yes: obviously we recorded and transcribed the discussions and reflections, and analysed them in Quirkos! These revealed that people raised familiar issues of reflexivity, reliability and process, in comments that could have come from experienced qualitative researchers:


“My view on what the categories mean or what the person is saying might change before the end, so I could have actually read the whole thing through before doing the comments”


“I started adding in categories, and then thinking, ooh, if I’d added that in earlier I could actually have tied it up to such-and-such comment”


“I thought that bit revealed a lot about her political beliefs, and I could feel my emotions entering into my judgement”


“I also didn’t want to leave any comment unclassified, but we could do, couldn’t we? That to me is about the mechanics of using the computer, ticky box thing.”

 

This is probably the most useful part of the project for a researcher: the input of participants can be used as a stimulus for additional discussion and data collection, or to challenge the way researchers do their own coding. I found myself being challenged about how I had assigned codes to controversial topics, and researchers could use a more formal triangulation process to compare coding between researchers and participants, verifying themes or identifying and challenging significant differences.
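For researchers who want to formalise that triangulation, inter-coder agreement is often summarised with a statistic such as Cohen’s kappa. The Python sketch below is not from the original study (the yes/no coding decisions are invented), but it shows how agreement between a researcher and a participant on a single code could be quantified.

```python
from collections import Counter

# Invented yes/no decisions: for each of ten text segments, did the coder
# apply the code 'Love of Scotland'? Purely illustrative data.
researcher  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
participant = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

def cohens_kappa(a, b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[k] * freq_b[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

raw = sum(x == y for x, y in zip(researcher, participant)) / len(researcher)
print(f"Raw agreement: {raw:.0%}")                                    # 70%
print(f"Cohen's kappa: {cohens_kappa(researcher, participant):.2f}")  # 0.40
```

A middling kappa like this is not a failure: it is exactly the kind of measured disagreement that can open up the discussions quoted above.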

 

Obviously, this is a tiny experimental project, and the experience of 3 well-educated, middle-class Scots should not be taken to mean that anyone can (or would want to) do this kind of analysis. But I believe we should try this kind of approach whenever it is appropriate. For most social research, the real experts are the people who are always in the field – the participants who are living these lives every day.

 

You can download the full report, as well as the transcripts and coded data as a Quirkos file from http://www.quirkos.com/workshops/referendum/

 

 

Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, attacking the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people: the researchers get to decide what questions to ask, how to interpret and present the answers, and even what topics are worthy of study in the first place. In participatory research, the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this seems to make sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: deciding which studies are most important, setting the focus, choosing the methods, and collecting and contributing to the data.

 

But then what? There seems to be a step missing: what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often this process is a ‘black box’ operated by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did the researchers come to their particular findings and conclusions from all the different issues that the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Joe.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like NVivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to try to use it, with all the training and licensing implications, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics, analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like, and the results can be combined, with the team knowing who did what (if desired).
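As a rough illustration of that combining step, the Python sketch below pools separately coded work and groups it by code while keeping track of who coded what. The record format is invented for the example; it is not Quirkos’s actual project format.

```python
# Invented record format (coder, code, highlighted text): not Quirkos's
# project file format, just an illustration of pooling coded work while
# preserving coder attribution.
alice = [
    ("Alice", "Love of Scotland", "I just love the hills and the weather..."),
    ("Alice", "Suspicion", "I don't really trust those promises..."),
]
bob = [
    ("Bob", "Scottish Difference", "We just do things differently up here..."),
    ("Bob", "Love of Scotland", "It's my home, simple as that..."),
]

# Pool everyone's work, then group by code, keeping coder attribution.
by_code = {}
for coder, code, text in alice + bob:
    by_code.setdefault(code, []).append((coder, text))

for code, entries in sorted(by_code.items()):
    print(code)
    for coder, text in entries:
        print(f"  [{coder}] {text}")
```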

 

It can also become a dynamic group exercise: with a tablet, large touch surface or projector, everyone can be ‘hands on’ at once. Rather than doing analysis on flip charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to sharing the findings, rather than long spreadsheets of results you get a picture – the bubbles tell the story and the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.