Workshop exercises for participatory qualitative analysis


I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data, and this kind of involvement is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in this blog post from last year I describe how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and participants don’t really get a chance to change how the data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel with the researcher’s full-time, paid job!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example, Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for our Scottish Referendum Project, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to learn that it could be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that such a software package could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.


Blank Sheet


The most basic, and most scary scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework and coding the data. This is probably the most time-consuming and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be given the research questions), and are free to make their own interpretations.


Framework Creation


Here, I envisage a series of possible exercises where the focus is not on coding the data explicitly, but on considering the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process is similar to grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily use the developed framework for coding later. This could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific sub-categories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which direction the topics should be explored in detail (say Expensive food, or Lack of open space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.


Coding Exercises


In these three exercises, I envisage a scenario where some coding has already been completed, and the focus of the session is to look at coded transcripts (on screen or in printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data.
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes.
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach is used, and how far the participatory process is taken, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way might challenge the assumptions of the research team, lead to better results, and produce more relevant and impactful outputs.

 

Here are the slides from my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project are here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!


Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, attacking the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people, as the researchers get to decide what questions to ask, how to interpret and present the answers, and even which topics are worthy of study in the first place. In participatory research the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this seems to make sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: choosing which studies are the most important, deciding the focus, choosing the methods and collecting and contributing to the data.

 

But then what? There seems to be a step missing: what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often, this process is a 'black box' conducted by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did the researchers arrive at their particular findings and conclusions from all the different issues that the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the calculation of the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Jo.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like Nvivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to try and use it, with all the training and licensing implications of this, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics and analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like. The results can then be combined, with the team knowing who did what (if desired).

 

It can also become a dynamic group exercise: with a tablet, large touch surface or projector, everyone can be 'hands on' at once. Rather than doing analysis on flip-charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to sharing the findings, rather than long spreadsheets of results you get a picture: the bubbles tell the story and show the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.