Balance and rigour in qualitative analysis frameworks

Image credit: https://www.flickr.com/photos/harmishhk/8642273025

 

Training researchers to use qualitative software and helping people who get stuck with Quirkos, I get to see a lot of people’s coding frameworks. Most of them are great; some are fine but have too many codes; and a few just seem to lack a little balance.


In good quality quantitative research, you should see that the researchers have adopted a ‘null hypothesis’ before they start the analysis: in other words, an assumption that there is nothing significant in the data. Statisticians play a little game in which they declare that there should be no correlation between variables, and then try to prove that there is nothing there. Only if they try their hardest and still can’t convince themselves that there is no relationship are they allowed to go on and conclude that there may be something in the data. This is called rejecting the null hypothesis, and it can temper the excitement of researchers with big data sets who are over-enthusiastic for career-making discoveries.
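To make the idea concrete, here is a minimal, hypothetical sketch of that game in Python (assuming numpy and scipy are available). The variable names are invented for illustration, and the data is deliberately random, so there should be nothing to find:

```python
# A minimal illustration of the null-hypothesis 'game' described above.
# Assumes numpy and scipy are installed; the variables are hypothetical examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours_unemployed = rng.normal(size=100)   # hypothetical variable 1
reported_income = rng.normal(size=100)    # hypothetical variable 2, unrelated by construction

r, p_value = stats.pearsonr(hours_unemployed, reported_income)

# Start from the assumption that there is no relationship (the null hypothesis).
# Only if the evidence against that assumption is strong (a small p-value)
# do we allow ourselves to claim a correlation exists.
if p_value < 0.05:
    print(f"Reject the null hypothesis: r = {r:.2f}, p = {p_value:.3f}")
else:
    print(f"Cannot reject the null hypothesis (p = {p_value:.3f}), so no claim is made")
```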


Unfortunately, it’s rare to see this approach described in published quantitative analysis. But there’s no reason a similar approach can’t be used in qualitative research to provide some balance against the researcher’s interpretations and prejudices. Most of the time the researcher will have their own preconception of what they are going to find (or would like to find) in the data, and may even have a particular agenda they are trying to prove. Whether the methodology is quantitative or qualitative, this is not a good basis for conducting impartial research. (Read more about the differences between qualitative and quantitative approaches.)

 

Steps like reflexivity statements and considering unconscious biases can help improve the neutrality of the research, but balance is also something to consider closely during the analysis process itself. Even the coding framework you use to tag and analyse your qualitative data can lead to certain quotes being drawn from the data more than others.


It’s like trying to balance in the middle of a seesaw. If you stand over at one end, it’s easy to keep your balance, as you will just be rooted to the ground on one side. Standing in the middle is the only place where you are challenged, and where you can be swayed one way or the other. Before starting the analysis, researchers should ideally be in this zen-like state, ready to let the data tell them the story, rather than telling their own story through selective interpretations of the data.


When reading qualitative data, try to hold in your head the opposite view to your research hypothesis. Maybe people love being unemployed, and got rich because of it! A finding should really shout out from the data regardless of any bias or cherry-picking.

 
When you have created a coding framework, look through it and consider the tone and coverage. Are there areas which might show bias to one side of the argument, or to a particular interpretation? If you have a code for ‘hates homework’, do you have a code for ‘loves homework’? Are you actively looking for contrary evidence? I usually try to find a counter-example to every quote I might use in a project report. So if I want to show a quote where someone says ‘Walking in the park makes me feel healthy and alive’, I’ll see if there is someone else saying ‘The park makes me nervous and scared’. If you can’t find one, or at least if the people with the dissenting view are in a minority, you might just be able to accept a dominant hypothesis.

 

Your codes should try to reflect this, and in the same way that you shouldn’t ask leading questions (“Does your doctor make you feel terrible?”), be careful about leading coding topics with language like “Terrible doctors”. There can be a confirmation bias, and you may start looking too hard for text to match the theme. In some types of analysis, such as discourse analysis or in-vivo coding, reflecting the emotive language your participants use is important. But make sure it is their language, and not yours, that is reflected in strongly worded theme titles.

 

All qualitative software (Quirkos included) allows you to give each theme a longer description as well as a short title. So make sure you use it to detail what should belong in that theme, as if you were describing it to someone else who was doing the coding. When you are going through and coding your data, ask yourself: “Would someone else code this in the same way?”
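As a rough sketch only (this is not Quirkos’s actual project format, just an illustrative data structure), a codebook entry might pair each short title with a fuller description and a note of its balancing counterpart code:

```python
# A hypothetical codebook structure, written as if for a second coder.
# The theme titles and descriptions are invented examples.
codebook = {
    "Terrible doctors": {
        "description": (
            "Accounts where the participant describes a negative experience "
            "with their GP or consultant, in their own words."
        ),
        "counterpart": "Positive doctor experiences",  # the balancing code
    },
    "Positive doctor experiences": {
        "description": "Accounts of care the participant found helpful or reassuring.",
        "counterpart": "Terrible doctors",
    },
}

# "Would someone else code in the same way?" A description written clearly enough
# for a second coder is a good test of whether the theme is well defined.
for title, entry in codebook.items():
    print(f"{title}: {entry['description']}")
```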

 


 

Even when topics are neutral (or balanced with alternatives), you should also make sure that the text you categorise into these themes is fair. If you are glossing over opinions from people who don’t have a problem with their doctor to focus on the shocking allegations, you are giving primacy to the bad experiences, perhaps without recognising that the majority were good.

 

However, qualitative analysis is not a counting game. One person in your sample with a differing opinion is a significant event to be discussed and explained, not an outlier to be ignored. When presenting the results of qualitative data, the reader has to put a great deal of trust in how the researcher has interpreted the data, and if only one viewpoint or interpretation is shown, the researcher can come across as having a personal bias.

 

So before you write up your research, step back and look again at your coding framework. Does it look like a fair reflection of the data? Is the data you’ve coded into those categories reflective? Would someone else have interpreted and described it in the same way? These questions can really help improve the impartiality, rigour and balance of your qualitative research.

 

A qualitative software tool like Quirkos can help make a balanced framework, because it makes it much easier than pen and Post-It notes to go back and change themes and recode data. Download a free trial and see how it works, and how software kept simple can help you focus on your qualitative data.

 

 

Word clouds and word frequency analysis in qualitative data

[Image: a word cloud of this blog post, generated in Quirkos]

 

What’s this blog post about? Well, it’s visualised in the graphic above!

 

In the latest update for Quirkos, we have added a new and much-requested feature: word clouds! I'm sure you've used these pretty tools before. They show a jumbled display of all the words in a source of text, where the size of each word is proportional to the number of times it has been counted in the text. There are several free online tools that will generate word clouds for you, Wordle.net being one of the first and most popular.

 

These visualisations are fun, and can be a quick way to give an overview of what your respondents are talking about. They can also reveal some surprises in the data that prompt further investigation. However, there are also some limitations to tools based on word frequency analysis, and these tend to be the reason you rarely see word clouds used in academic papers. They are a nice start, but no replacement for good, deep qualitative analysis!

 

We've put together some tips for making sure your word clouds present meaningful information, and also some cautions about how they work and their limitations.

 


1. Tweak your stop list!

As these tools count every word in the data, the results would normally be dominated by the basic words that occur most often: 'the', 'of', 'and', and similar small, usually meaningless words. To make sure these don't swamp the data, most tools have a list of 'stop' words which are ignored when displaying the word cloud. That way, the more interesting words should be the largest. However, there is always a great deal of variation in what these common words are. They differ greatly between verbal and written language, for example (just think how often people might say 'like' or 'um' in speech but not in a typed answer). Each language will also need a corresponding stop list!

 

So Quirkos (and many other tools) offer ways to add or remove words from the stop list when you generate a word cloud. By default, Quirkos takes the 50 most frequent words from the verbal and written British National Corpus, but 50 is actually a very small stop list. You will still get very common words like 'think' and 'she', which might be useful to certain projects looking at expressions of opinion or depictions of gender. So it's a good idea to look at the word cloud and remove words that aren't important to you by adding them to the stop list. Just make sure you record what has been removed, and your justification for excluding it, for the write-up!
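As a rough sketch of what happens under the hood (not Quirkos's actual implementation; the stop list and text here are made up for illustration), word cloud tools essentially count tokens and drop anything on the stop list before sizing the words:

```python
# A minimal sketch of counting words against a stop list.
import re
from collections import Counter

stop_words = {"the", "of", "and", "a", "to", "i", "it", "was", "like", "um"}

text = "Um, I think the park was like really, really nice and the walk was nice too."

# Split into lowercase word tokens, then drop anything on the stop list.
tokens = re.findall(r"[a-z']+", text.lower())
counts = Counter(t for t in tokens if t not in stop_words)

print(counts.most_common(5))
# e.g. [('really', 2), ('nice', 2), ('think', 1), ...]
# Word size in the cloud would be proportional to these counts.
```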

 


2. There is no weighting or significance

Since word frequency tools just count occurrences of each word (one point for each utterance), they really only show one thing: how often a word was said. This sounds obvious, but it gives no indication of how important the use of the word was in each instance. So if one person says 'it was a little scary', another says 'it was horrifyingly scary' and another 'it was not scary', the corresponding word count has no context or weight. This can be deceptive in something like a word cloud, where the examples above count the negative (not scary) and the minor (a little scary) in the same way, and 'scary' could look like a significant trend. So remember to always go back and read the data carefully to understand why specific words are being used.
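A tiny illustrative sketch (with made-up responses) shows how flat the count really is: the hedged, emphatic and negated uses of 'scary' each contribute exactly one point:

```python
# Raw word counts lose context: negation and emphasis are counted identically.
from collections import Counter

responses = [
    "it was a little scary",
    "it was horrifyingly scary",
    "it was not scary",
]

counts = Counter(word for response in responses for word in response.split())
print(counts["scary"])  # 3 - the count alone cannot tell these three uses apart
```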

 


3. Derivations don't get counted together

Remember that most word cloud tools are not even really counting words, only combinations of letters. So 'fish', 'fishy' and 'fishes' will all be counted as separate words (as will any typos or misspellings). This might not sound important, but if you are trying to draw conclusions just from a word cloud, you could miss the importance of fish to your participants because the different derivations weren't put together. Yet sometimes these distinctions in vocabulary are important – obviously 'fishy' can have a negative connotation, in terms of something feeling off or smelling bad – and you don't want to put this in the same category as things that swim. So a researcher is still needed to craft these visualisations, and to make decisions about what should be shown and grouped. Speaking of which...
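If you do want to group derivations, a crude sketch like the one below (using a deliberately naive, made-up suffix rule rather than a proper stemmer such as NLTK's PorterStemmer) shows the trade-off: 'fish', 'fishes' and 'fishing' merge, but so does 'fishy', which you might want to keep separate:

```python
# Grouping derivations with a deliberately naive stemmer, for illustration only.
from collections import Counter

words = ["fish", "fishes", "fishy", "fishing", "swim"]

def naive_stem(word: str) -> str:
    # Strip a few common suffixes; real stemmers are far more careful than this.
    for suffix in ("ing", "es", "y", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

raw_counts = Counter(words)                             # each derivation counted separately
stemmed_counts = Counter(naive_stem(w) for w in words)  # 'fish' grouped together

print(raw_counts)
print(stemmed_counts)  # note 'fishy' has been merged in too, which may not be wanted
```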

 


4. They won't amalgamate different terms used by participants

It's fascinating how different people have their own terms and language to describe the same thing, and illuminating this can bring colour to qualitative data, or show subtle differences that are important for IPA or discourse analysis. But when doing any kind of word count analysis, this richness is a problem, as the words are counted separately. So none of the terms 'shiny', 'bright' or 'blinding' may show up often on their own, but grouped together they could show a significant theme. Whether to treat certain synonyms in the same way is up to the researcher, but in a word cloud these distinctions can be masked.
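Grouping synonyms is a researcher's judgement call that no word cloud tool can make for you, but as a hypothetical sketch, a simple mapping decided in advance could merge participants' different terms into one theme before counting:

```python
# Mapping participant synonyms onto a shared, researcher-defined theme before counting.
# The synonym map and word list are invented examples.
from collections import Counter

synonym_map = {"shiny": "brightness", "bright": "brightness", "blinding": "brightness"}
words = ["shiny", "bright", "blinding", "bright", "dark"]

grouped = Counter(synonym_map.get(word, word) for word in words)
print(grouped)  # Counter({'brightness': 4, 'dark': 1}) - a theme the separate counts would mask
```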

 

Also, don’t forget that unless told otherwise (or unless the words are hyphenated), word clouds won’t pick up multi-word phrases like ‘word cloud’ and ‘hot topic’.
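Counting two-word phrases (bigrams) is one way around this; here is a minimal sketch, using a made-up sentence:

```python
# Counting adjacent word pairs (bigrams), which single-word clouds miss.
from collections import Counter

tokens = "the word cloud showed the word cloud was a hot topic".split()
bigrams = Counter(zip(tokens, tokens[1:]))

print(bigrams[("word", "cloud")])  # 2 - 'word cloud' counted as a phrase, not two separate words
```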

 

 

5. Don’t focus on just the large trends


Word clouds tend to make the big language trends very obvious, but this is usually only part of the story. Just as important are words that aren’t there – things you thought would come up, topics people might be hesitant to speak about. A series of word clouds can be a good way to show changes in popular themes over time, like what terms are being used in political speeches or in newspaper headlines. In these cases words dropping out of use are probably just as interesting as the new trends.

 


 


6. This isn't qualitative analysis

At best, this is quantification of qualitative data, presenting nothing more than counts. Since word frequency tools just count sequences of letters, not even words and their meanings, they are only a basic numerical supplement to deep qualitative interpretation (McNaught and Lam 2010). And as with all statistical tools, they are easy to misapply and to interpret poorly. You need to know what is being counted and what is being missed (see above), and before drawing any conclusions, make sure you understand the underlying data and how it was collected. However…

 

 

7. Word clouds work best as summaries or discussion pieces


If you need to get across what’s coming out of your research quickly, showing the lexicon of your data in word clouds can be a fun starting point. When they show a clear and surprising trend, the ubiquity and familiarity most audiences have with word clouds make these visualisations engaging and insightful. They should also start triggering questions – why does this phrase appear more? These can be good points to start guiding your audience through the story of your data, and creating interesting discussions.

 

As a final point, word clouds often carry a level of authority that you need to be careful about. Because the counting of words is seen as non-interpretive and non-subjective, some people may feel they ‘trust’ what a word cloud shows more than the verbose interpretation of the full rich data. Hopefully with the guidance above, you can persuade your audience that while colourful, word clouds are only a one-dimensional dive into the data. Knowing your data and reading the nuance is what will turn your analysis from a one-click feature into a well communicated ‘aha’ moment for your field.

 

 

If you'd like to play with word clouds, why not download a free trial of Quirkos? It also has raw word frequency data, and an easy to use interface to manage, code and explore your qualitative data.

 

 

 

 
