Archaeologies of coding qualitative data


 

In the last blog post I referenced a workshop session at the International Congress of Qualitative Inquiry entitled 'The Archaeology of Coding'. Personally, I interpreted the archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel, however, was around revisiting and iterating coding within a single analytical attempt, and this is very important.


In qualitative analysis it is rarely sufficient to read through and code your data only once. An iterative, cyclical process is preferable, building on and reconsidering previous rounds of coding to reach higher levels of interpretation. This is one way to interpret an 'archaeology' of coding – like Jerusalem, the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.


But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I discuss this more in the blog post on 'Top down or bottom up' coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then break these up into more specific and nuanced insights into the data.

 

Both of these iterative techniques, though, are envisaged as part of a single (if long) process of coding. But what about revisiting older research projects? What if you get the opportunity to go back and re-examine old qualitative data and analysis?


Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.


A good example is our teaching dataset, collected after the referendum on Scottish independence a few years ago. This looked at how the debate had influenced voters' interpretations of the different political parties and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data would be very interesting in light of these events. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, the data would not be current enough to show the 'referendum fatigue' that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain some of the variance in voting patterns amid the changing winds of politics and policy in Scotland.

 

While the research questions and analysis framework devised for the original project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, as might higher-level themes such as the extent to which people perceive a need for a referendum, or the value of remaining part of the EU (which was a big question should Scotland become independent). Actually, if this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.

 

Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even with the same participants, to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and, while difficult to design specifically for this purpose (Calman, Brunton & Molassiotis, 2013), can be retroactively extended in some situations.


And of course, this is the real power of archaeology: it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings or interpretations of qualitative data. So there can be great, and often unexpected, value in revisiting some of your old data. For many people it's something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick-to-learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?

 

Against Entomologies of Qualitative Coding

Image from Lisa Williams: https://www.flickr.com/photos/pixellou/5960183942/in/photostream/


I was recently privileged to chair a session at ICQI 2017 entitled "The Archaeology of Coding". It had a fantastic panel of speakers, including Charles Vanover, Paul Mihas, Kathy Charmaz and Johnny Saldaña, all giving their own take on the topic. I'm going to write about my own interpretation of qualitative coding archaeologies in the next blog post, but for now I wanted to cover an important issue that all the presenters raised: coding is never finished.


In my summary I described this as being like the river in the novel Siddhartha by Hermann Hesse: 'coding is never still'. It should constantly change and evolve, and recoil from attempts to label it as 'done' or 'finished'. Heraclitus said much the same: "You cannot step twice into the same rivers", for they constantly change and shift (as do we). When we come back to revisit our coding, and even during the process of coding itself, change is part of the process.


I keep coming back to the image of butterflies in a museum display case: dead, pinned to the board with a neatly assigned label of the genus. It's tempting to approach qualitative coding in this entomologist's way: creating seemingly definitive and static codes that each describe one characteristic of the data.


Yet this taxonomy can create a tension, lulling you into feeling that some codes (and frameworks) are settled, complete, and don't need revision or amendment. This might be true, but it usually isn't! If you are using some type of open-ended coding or grounded theory approach, creating a static code can be beguiling, and interpreted as showing progress. Instead, try to see every code as a place-holder for a better category or description – try not to lose the ability for the data to surprise you, and resist the temptation to force quotes into narrow categories. Assume that you are never finished with coding.


Unless you are using a very strict interpretation of framework analysis, your first attempt at coding will probably change and evolve as you go through different sources, and take you to a place where you want to try another approach. And your attempt at creating a qualitative classification and coding system might just end up being wrong.


Even in biology, classification attempts are complicated. While the public are still familiar with the traditional 'animal kingdom' groupings, attempts to create a taxonomy in the 'tree of life' common descent model have now been succeeded by the modern 'cladistic' approach, based on common ancestry and shared derived characteristics. And these approaches also have limitations, since they are so complex and subjective (just like qualitative analysis!).

 

For example, if you use the NCBI Taxonomy browser you will see dozens of entries in square brackets. These are organisms currently recognised as misclassified: species placed in the wrong genus. And these problems don't even include the cases where, on closer study, one species turns out to be several distinct species. This has even been found to be the case for the common 'medicinal' leech!

 

Trying to turn the endless forms most beautiful of the animal ‘kingdoms’ into neat categories is complex, even when just looking at appearance. And these taxonomic groupings tell us little of the diverse range of behaviour and life behind the dead pinned insects.


In a similar way, when we code and analyse qualitative data, we are attempting to listen to the voices of our respondents, and to turn the rich multitude of lives and experiences into a few key categories that rise up to us. We need to recognise the reductive nature of this practice, and keep coming back to the detailed, rich data behind it. In a way, this is like the difference between knowing the Latin name for a species of butterfly, and knowing how it flies, its favourite flowers, and all the details that actually make it unique, not just a name or number.

 

 

In Siddhartha, the central character finds nirvana listening to the chaotic, blended sound of a river, representing the lives and goals of all the people in his life and the world.


“The river, which consisted of him and his loved ones and of all people, he had ever seen, all of these waves and waters were hurrying, suffering, towards goals, many goals, the waterfall, the lake, the rapids, the sea, and all goals were reached, and every goal was followed by a new one, and the water turned into vapour and rose to the sky, turned into rain and poured down from the sky, turned into a source, a stream, a river, headed forward once again”


Like the river, qualitative analysis can be a circle, with each iteration and reading different from the last, building on the previous work, but always listening to the data, not being quick to judge or categorise. Until we have reached this analytical nirvana, it is difficult to let go of our data and feel that it is complete. This complex, turbulent flow of information defies our attempts to neatly categorise and label it, while the researcher's quest for neatness, and for uncovering the truth beneath our subjectivity, demands a single answer and categorisation scheme. But, just like taxonomy, there may never be a state when categorisation is complete, whether in a single interpretation or across many. New discoveries, or new context, can change it all.


We, the researchers, are a dynamic and fallible part of that process – we interpret, we miscategorise, we impose bias, we get tired and lose concentration. When we are lazy and quick, we take the comfort of labels and boxes, lulled into conformity by the seductive ease of software and coloured markers. But when we become good qualitative researchers, when we are self-critical and self-reflexive, finally learning to fully listen, then we achieve research nirvana:
 

“Siddhartha listened. He was now nothing but a listener, completely concentrated on listening, completely empty, he felt, that he had now finished learning to listen. Often before, he had heard all this, these many voices in the river, today it sounded new. Already, he could no longer tell the many voices apart, not the happy ones from the weeping ones, not the ones of children from those of men, they all belonged together”

 

Download a free trial of Quirkos today and challenge your qualitative coding!

 

 

 

Quirkos vs Nvivo: Differences and Similarities

I’m often asked ‘How does Quirkos compare to Nvivo?’. Nvivo is by far the largest player in the qualitative software field, and is the product most researchers are familiar with. So when looking at alternatives like Quirkos (but also Dedoose, ATLAS.ti, MAXQDA, Transana and many others), people want to know what’s different!

 

In a nutshell, Quirkos has far fewer features than Nvivo, but wraps them up in an easier-to-use package. So Quirkos does not have support for integrated multimedia, Twitter analysis, quantitative analysis, memos, hypothesis mapping, or a dozen other features. For large projects with thousands of sources, those using multimedia data, or those requiring powerful statistical analysis, the Pro and Plus versions of Nvivo will be much more suitable.


Our focus with Quirkos has been on providing simple tools for exploring qualitative data that are flexible and easier to use. This means that people can get up and running more quickly in Quirkos, and we hear that a lot of people who are turned off by the intimidating interface in Nvivo find Quirkos easier to understand. But the basics of coding and analysing qualitative data are the same.


In Quirkos, you can create and group themes (called Nodes in Nvivo), and use drag and drop to attach sections of text to them. You can perform code-and-retrieve functions by double-clicking on a theme to see the text coded to it. And you can also generate reports of your coded data, with lots of detail about your project.


Like Nvivo, Quirkos can also handle all the common text formats, such as PDFs, Word files and plain text files, and you can copy and paste from any other source, like web pages. Quirkos also has tools to import survey data, which is not something supported in the basic version of Nvivo.


While Quirkos doesn’t have ‘matrix coding’ in the same way as Nvivo, we do have side-by-side comparison views, where you can use any demographic or quantitative data about your sources to do powerful sub-set analysis. A lot of people find this more interactive, and we try to minimise the steps and clicks between you and your data.


Although Quirkos doesn’t really have any dedicated tools for quantitative analysis, our spreadsheet export allows you to bring data into Excel, SPSS or R where you have much more control over the statistical models you can run than Nvivo or other mixed-methods tools allow.
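As a minimal sketch of what that workflow could look like, here is a hypothetical example in Python. It assumes an exported CSV with one row per coded text segment and columns named ‘code’ and ‘gender’ – these names are illustrative only, so check the headers in your own export – and runs a quick cross-tabulation with a chi-square test:

```python
# Sketch only: illustrates the kind of follow-up analysis you might run
# on a spreadsheet export of coded qualitative data. The file name and
# column names ("code", "gender") are hypothetical assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical export: one row per coded segment, with the code name
# and any demographic properties of the source attached to it.
df = pd.read_csv("quirkos_export.csv")

# How often does each code appear for each demographic group?
counts = pd.crosstab(df["code"], df["gender"])
print(counts)

# Simple chi-square test of independence between code and group.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

The same idea works just as well in R or SPSS; the point is simply that the export gives you the coded data in a tabular form you can analyse with whatever statistical tools you prefer.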

 

However, there are also features in Quirkos that Nvivo doesn’t have at the moment. The most popular of these is the Word export function. This creates a standard Word file of your complete transcripts, with your coding shown as colour-coded highlights. It’s just like using pen and highlighter, but you can print, edit and share with anyone who can open a Word file.


Quirkos also has a constant save feature, unlike Nvivo, which has to be set up to save at a regular interval. This means that even in a crash you don’t lose any work, something I know people have had problems with in Nvivo.


Another important difference for some people is that Quirkos is the same on Windows and Mac. With Nvivo, the Windows and Mac versions have different interfaces, features and file formats. This makes it very difficult to switch between the versions, or to collaborate with people on a different platform. We also never charge for our training sessions, and all our online support materials are free to download from our website.


And we haven’t even mentioned the thing people love most about Quirkos – the clear visual interface! With your themes represented as colourful, dynamic bubbles, you are always connected to your data, and have the flexibility to play, explore and drill down into it.


Of course, it’s best to get some impartial comparisons as well; you can find reviews from the University of Surrey CAQDAS network here: https://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing/


But the best way to decide is to try it for yourself, since your style of working and learning, and what you want to do with the software, will always be different. Quirkos won’t always be the best fit for you, and for a lot of people sticking with Nvivo will provide an easier path. For new users, learning the basics of qualitative analysis in Quirkos will be a great first step, and it will make transitioning to a more complex package like Nvivo easier in the future. So download our free trial (ours lasts for a whole month, not just 14 days!) and let us know if you have any questions!