7 things we learned from ICQI 2016

ICQI conference - image from Ariescwliang

 

I was lucky enough to attend the ICQI 2016 conference last week in Champaign at the University of Illinois. We managed to speak to a lot of people about using Quirkos, but there were hundreds of other talks, and here are some pointers from just a few of them!

 

 

1. Qualitative research is like being at high school
Johnny Saldaña’s keynote described (with cutting accuracy) the research cliques that people tend to stick to. It's important for us to try and think outside these methodological or topic boxes, and learn from other people doing things in different ways. With so many varied sessions and hundreds of talks, conferences like ICQI 2016 are great places to do this.

 

We were also treated to clips from high school movies, and our own Qualitative High School song! The Digital Tools thread got their own theme song: a list of all the different qualitative analysis software packages sung to the tune of ‘ABC’ - the nursery rhyme, not the Jackson 5 hit!

 

 

2. There is a definite theoretical trend
The conference featured lots of talks on Butler and Foucault, but not one explicitly on Derrida! A philosophical bias, perhaps? I’m always interested in the different philosophies drawn on in North American, British and Continental debates…

 

 

3. Qualitative research balances a divide between chaos and order
Maggie MacLure gave an intriguing keynote about how qualitative research needs to balance the intoxicating chaos and experimentation of Dionysus with the order and clarity of Apollo (channelling Deleuze). She argued that we must resist the tendency of method and neo-liberal positioned research to push for conformity, and go further in advocating for real ontological change. She also said that researchers should do more to challenge the primacy of language: surely why we need a little Derrida here and there?!

 

 

4. We should embrace doubt and uncertainty
This was definitely something that Maggie MacLure's keynote touched on, but a session chaired by Charles Vander talked about uncertainty in the coding process, and how this can be difficult (but ultimately beneficial). Referencing Locke, Golden-Biddle and Feldman (2008), Charles talked about the need to ‘embrace not knowing, nurture hurdles and disrupt order’ (while also engaging with the world and connecting with struggle). It's important that for students, allowing doubt and uncertainty doesn't lead to fear – a difficult balance when there are set deadlines and things aren’t going to plan, and one that holds true for most academics too! We need to teach that qualitative analysis is not a fixed, linear process: experimentation and failure are a key part of it. Kathy Charmaz echoed this while talking about grounded theory, and noted that ‘coding should be magical, not just mechanical’.

 


5. We should challenge ourselves to think about codes and coding in completely different ways

Johnny Saldaña's coding workshop (which follows on from his excellent textbook) gave examples of the incredible variety of coding categories one can create. Rather than creating merely descriptive index codes, try to get at the actions and motivations in the text. Create code lists based around actions, emotions, conflicts or even dramaturgical concepts, in which you explore the motivations and tactics of those in your research data. More to follow on this...

 

 

6. We still have a lot to learn about how researchers use qualitative software
Two great talks, from Eli Lieber and from researchers at NYU/CUNY, took the wonderful meta-step of doing qualitative (and mixed method) analysis on qualitative researchers, to see how they used qualitative software and what they wanted to do with it.

Katherine Gregory and Sarah DeMott looked at responses from hundreds of users of QDA software, and found a strong preference for getting to outputs as soon as possible, and saw people using qualitative data in very quantitative ways. Eli Lieber from Dedoose looked at what he called ‘Research and Evaluation Data Analysis Software’ and saw from 355 QDA users that there was a risk of playing with data rather than really learning from it, and that many were using coding in software as a replacement for deep reading of the data.


There was also a lot of talk about the digital humanities movement, and there was some great insight from Harriett Green on how this shift looks for librarians and curators of data, and how researchers want to connect with and explore diverse digital archives.

 


7. Qualitative research still feels like a fringe activity
The ‘march’ of neo-liberalism was a pervasive conference theme, but there were also a lot of discussions around the marginalised place of qualitative research in academia. We heard stories of qualitative modules being removed or made online only, problems with getting papers accepted in mainstream journals, and the lack of engagement from evidence users and policy makers. Conferences like this are essential for reinforcing connections between researchers working all over the world, but there is clearly still a need for a lot of outreach to advance the position of qualitative research in the field.

 

 

There are dozens more fascinating talks I could draw from, but these are just a few highlights from my own badly scribbled notes. It was wonderful to meet so many dedicated researchers, working on so many conceptual and social issues, and it always challenges me to think how Quirkos can better meet the needs of such a disparate group of users. So don’t forget to download the trial, and give us more feedback!

 

You should also connect with the Digital Tools for Qualitative Research group, which organised one of the conference Special Interest Groups, and has many more activities and learning events throughout the year. Hope to see many more of you next year!

 

Workshop exercises for participatory qualitative analysis


I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in a blog post from last year I described how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel to the full-time, paid job of the researcher!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example, Jackson (2008) successfully uses group exercises with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).
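A DEPICT-style split of the dataset can be pictured as a simple round-robin allocation: the whole dataset is covered, but no one person codes everything. This is only an illustrative sketch; the transcript filenames and coder names are invented, and real teams might instead allocate by topic or double-code some transcripts for comparison.

```python
# Round-robin allocation of transcripts to coders, so the whole
# dataset is covered but each team member only codes a few files.
# All names here are purely illustrative.

def allocate_transcripts(transcripts, coders):
    """Assign each transcript to one coder, cycling through the team."""
    allocation = {coder: [] for coder in coders}
    for i, transcript in enumerate(transcripts):
        allocation[coders[i % len(coders)]].append(transcript)
    return allocation

transcripts = [f"interview_{n:02d}.txt" for n in range(1, 10)]
coders = ["researcher", "participant_a", "participant_b"]

for coder, files in allocate_transcripts(transcripts, coders).items():
    print(coder, "->", ", ".join(files))
```

With nine transcripts and three coders, each person ends up with three transcripts to read and code, which keeps the individual commitment manageable.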


However, when it came to running participatory analysis workshops for our research project on the Scottish Referendum, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure it was simple enough to learn that it could be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that the software could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and most scary scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework, and coding data. This is probably the most time consuming, and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on the coding of the data itself, but on consideration of the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process is akin to grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily use the developed framework for coding later. This could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific sub-categories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in what direction topics should be explored in detail (for example ‘Expensive food’ or ‘Lack of open space’).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.
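The ‘category prompts’ setup above can be pictured as a nested framework: broad categories fixed by the researcher, with participants filling in the sub-categories beneath them. This is a minimal sketch with invented topic names, and is not how Quirkos itself stores its coding frameworks:

```python
# A hierarchical coding framework: broad researcher-set categories,
# each populated with participant-suggested sub-categories.
# All topic names are invented for illustration.

framework = {
    "Health": ["Expensive food", "Access to services"],
    "Housing": ["Lack of open space", "Rising rents"],
    "Family": ["Childcare"],
}

def add_subcategory(framework, category, topic):
    """Let a participant add a new sub-category under a broad heading."""
    framework.setdefault(category, []).append(topic)

add_subcategory(framework, "Housing", "Damp and disrepair")

for category, topics in framework.items():
    print(category, "->", ", ".join(topics))
```

The same structure works whether the topics come from a blank-sheet exercise, a grouping task, or category prompts; only who supplies the top level and who supplies the leaves changes.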

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed; the focus of the session is to look at the coded transcripts (on screen or in printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.
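The comparative approach implies some way of contrasting two coders' work on the same transcript before the group discussion. A simple starting point is segment-level percent agreement, sketched below; the segments and code labels are hypothetical, and in practice a chance-corrected measure such as Cohen's kappa is often preferred:

```python
# Compare how two coders labelled the same transcript segments.
# Simple percent agreement; segments and code labels are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Share of segments given the same code by both coders."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

coder_a = ["Housing", "Health", "Family", "Housing", "Health"]
coder_b = ["Housing", "Health", "Housing", "Housing", "Family"]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 3 of 5 segments agree
```

The disagreements (segments three and five here) are exactly the places where a group discussion of rationale is likely to be most productive.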

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach to use, and how far to take the participatory process, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way will challenge the assumptions of the research team, or lead to better results and more relevant, impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Quirkos version 1.4 is here!


It’s been a long time coming, but the latest version of Quirkos is now available, and as always it’s a free update for everyone, released simultaneously on Mac, Windows and Linux with all the new goodies!


The focus of this update has been speed. You won’t see a lot of visible differences in the software, but behind the scenes we have rewritten a lot of Quirkos to make sure it copes better with large qualitative sources and projects, and is much more responsive to use. This has been a much requested improvement, and thanks to all our intrepid beta testers for ensuring it all works smoothly.


In the new version, long coded sources now load in around 1/10th of the time! Search results and hierarchy views load much quicker! Large canvas views display quicker! All this adds up to give a much more snappy and responsive experience, especially when working with large projects. This takes Quirkos to a new professional level, while retaining the engaging and addictive data coding interface.


In addition we have made a few small improvements suggested by users, including:


• Search criteria can be refined or expanded with AND/OR operators
• Reports now include a summary section of your Quirks/codes
• Ability to search source names to quickly find sources
• Searches now display the total number of results
• Direct link to the full manual

 

There are also many bug fixes! Including:
• Password protected files can now be opened across Windows, Mac and Linux
• Fix for importing PDFs which created broken Word exports
• Better and faster CSV import
• Faster Quirk merge operations
• Faster keyword search in password protected files

 

However, we have had to change the .qrk file format so that password protected files can open on any operating system. This means that projects opened or created in version 1.4 cannot be opened in older versions of Quirkos (v1.3.2 and earlier).


I know how annoying this is, but there should be no reason for people to keep using older versions: we make the updates free so that everyone is using the same version. Just make sure everyone in your team updates!

 

When you first open a project file from an older version of Quirkos in 1.4, it will automatically convert it to the new file format, and save a backup copy of the old file. Most users will not notice any difference, and you can obviously keep working with your existing project files. But if you want to share your files with other Quirkos users, make sure they have also upgraded to the latest version, or they will get an error message when trying to open a file from version 1.4.

 

All you need to do to get the new version is download it from our website (www.quirkos.com/get.html) and install to the same location as the old Quirkos. Get going, and let us know if you have any suggestions or feedback! You could see your requests appear in version 1.5!