Word clouds and word frequency analysis in qualitative data


 

What’s this blog post about? Well, it’s visualised in the graphic above!

 

In the latest update for Quirkos, we have added a new and much-requested feature: word clouds! I'm sure you've used these pretty tools before: they show a scattered display of all the words in a source of text, where the size of each word is proportional to the number of times it appears in that text. There are several free online tools that will generate word clouds for you, Wordle.net being one of the first and most popular.

 

These visualisations are fun, and can be a quick way to give an overview of what your respondents are talking about. They can also reveal surprises in the data that prompt further investigation. However, tools based on word frequency analysis have some limitations, and these tend to be the reason you rarely see word clouds in academic papers. They are a nice start, but no replacement for good, deep qualitative analysis!

 

We've put together some tips for making sure your word clouds present meaningful information, and also some cautions about how they work and their limitations.

 


1. Tweak your stop list!

As these tools count every word in the data, the results would normally be dominated by the basic words that occur most often: 'the', 'of', 'and' and similar small, usually meaningless words. To make sure these don't swamp the data, most tools have a list of 'stop' words to be ignored when displaying the word cloud. That way, the more interesting words should be the largest. However, there is always a great deal of variation in what these common words are. They differ greatly between spoken and written language, for example (just think how often people say 'like' or 'um' in speech but not in a typed answer). Each language will also need its own stop list!

 

So Quirkos (like many other tools) offers ways to add or remove words from the stop list when you generate a word cloud. By default, Quirkos takes the 50 most frequent words from the spoken and written British National Corpus, but 50 is actually a very small stop list. You will still get very common words like 'think' and 'she', which might be useful to certain projects looking at expressions of opinion or depictions of gender. So it's a good idea to look at the word cloud and remove words that aren't important to you by adding them to the stop list. Just make sure you record what has been removed, and your justification for excluding it, when writing up!
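To make the mechanics concrete, here is a minimal Python sketch of how a frequency counter applies a stop list. The tiny `STOP_WORDS` set and the sample sentence are invented for illustration; real tools start from much larger, corpus-derived lists.

```python
from collections import Counter
import re

# A toy stop list -- invented for illustration; real tools derive
# theirs from large corpora such as the British National Corpus
STOP_WORDS = {"the", "of", "and", "a", "on", "it", "was"}

def word_frequencies(text, stop_words=STOP_WORDS):
    """Count words, ignoring case and anything on the stop list."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words)

freqs = word_frequencies("The cat sat on the mat, and the cat slept.")
print(freqs.most_common(2))  # 'the', 'and' and 'on' are dropped; 'cat' dominates
```

Adding a word to `STOP_WORDS` (or removing one) is the programmatic equivalent of tweaking the stop list in a word cloud tool, and it can change the picture dramatically.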

 


2. There is no weighting or significance

Since word frequency tools just count the occurrences of each word (one point for each utterance), they really only show one thing: how often a word was said. This sounds obvious, but it gives no indication of how important the use of the word was on each occasion. So if one person says 'it was a little scary', another 'it was horrifyingly scary' and a third 'it was not scary', the corresponding word count has no context or weight. This can be deceptive in something like a word cloud, where the above examples count the negation (not scary) and the qualification (little scary) the same way, and 'scary' could look like a significant trend. So remember to always go back and read the data carefully to understand why specific words are being used.
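The point is easy to demonstrate in a few lines of Python. The three responses below are invented, but they show how a frequency count flattens negation and intensity alike:

```python
from collections import Counter
import re

# Three invented responses with very different meanings
comments = [
    "it was a little scary",
    "it was horrifyingly scary",
    "it was not scary",
]

# Every utterance of 'scary' scores one point, whatever surrounds it
counts = Counter(w for c in comments for w in re.findall(r"[a-z]+", c.lower()))
print(counts["scary"])  # 3 -- even though one respondent said the opposite
```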

 


3. Derivations don't get counted together

Remember that most word cloud tools are not even really counting words, only combinations of letters. So 'fish', 'fishy' and 'fishes' will all be counted as separate words (as will any typos or misspellings). This might not sound important, but if you are trying to draw conclusions just from a word cloud, you could miss the importance of fish to your participants, because the different derivations weren't put together. Yet sometimes these distinctions in vocabulary are important – obviously 'fishy' can have a negative connotation, in terms of something feeling off or smelling bad – and you don't want to put this in the same category as things that swim. So a researcher is still needed to craft these visualisations, and to make decisions about what should be shown and grouped. Speaking of which...
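A short sketch shows the difference grouping makes. The `crude_stem` function below is a deliberately naive illustration, not a real linguistic stemmer (tools that do this properly use algorithms such as Porter stemming), and whether 'fishy' belongs with 'fish' at all remains the researcher's judgement:

```python
from collections import Counter

words = ["fish", "fishes", "fishy", "fish", "shark"]

# Raw counting treats every derivation as a separate word
raw = Counter(words)  # fish: 2, fishes: 1, fishy: 1, shark: 1

def crude_stem(word):
    """Naively strip a few common suffixes -- illustration only."""
    for suffix in ("es", "y", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Stemming merges the derivations into a single, larger count
stemmed = Counter(crude_stem(w) for w in words)  # fish: 4, shark: 1
```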

 


4. They won't amalgamate different terms used by participants

It's fascinating how different people have their own terms and language to describe the same thing, and illuminating this can bring colour to qualitative data, or show subtle differences that matter for IPA or discourse analysis. But when doing any kind of word count analysis this richness is a problem, as the words are counted separately. 'Shiny', 'bright' and 'blinding' may each appear only rarely on their own, but grouped together they could reveal a significant theme. Whether to treat certain synonyms in the same way is up to the researcher, but in a word cloud these distinctions can be masked.
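One way to amalgamate terms is an explicit synonym map that the researcher defines and documents. A minimal Python sketch (the mapping and responses are invented for illustration; the grouping decision is an analytic judgement, not something a tool can make for you):

```python
from collections import Counter

# A hypothetical synonym map, decided and recorded by the researcher
SYNONYMS = {"shiny": "brightness", "bright": "brightness", "blinding": "brightness"}

responses = ["shiny", "bright", "dull", "blinding", "bright"]

raw = Counter(responses)  # each term counted separately
grouped = Counter(SYNONYMS.get(w, w) for w in responses)
# grouped["brightness"] == 4 -- a theme the raw counts mask
```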

 

Also, don't forget that unless told otherwise (or unless the words are hyphenated), word clouds won't pick up multi-word phrases like 'word cloud' and 'hot topic'.
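If your tool gives you the raw word list, you can surface simple phrases yourself by counting adjacent word pairs (bigrams). A quick sketch, with an invented example sentence:

```python
from collections import Counter
import re

def bigram_frequencies(text):
    """Count adjacent word pairs so phrases like 'word cloud' surface."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(words, words[1:]))

pairs = bigram_frequencies("A word cloud is fun, but a word cloud is not analysis.")
print(pairs[("word", "cloud")])  # 2
```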

 

 

5. Don’t focus on just the large trends


Word clouds tend to make the big language trends very obvious, but this is usually only part of the story. Just as important are words that aren’t there – things you thought would come up, topics people might be hesitant to speak about. A series of word clouds can be a good way to show changes in popular themes over time, like what terms are being used in political speeches or in newspaper headlines. In these cases words dropping out of use are probably just as interesting as the new trends.

 


 


6. This isn't qualitative analysis

At best, this is quantification of qualitative data, and it presents only counts. Since word frequency tools just count sequences of letters – not even words, let alone their meanings – they are a basic numerical supplement to deep qualitative interpretation (McNaught and Lam 2010). And as with all statistical tools, they are easy to misapply and misinterpret. You need to know what is being counted and what is being missed (see above), and before drawing any conclusions, make sure you understand the underlying data and how it was collected. However…

 

 

7. Word clouds work best as summaries or discussion pieces


If you need to get across what's coming out of your research quickly, showing the lexicon of your data in a word cloud can be a fun starting point. When they show a clear or surprising trend, the ubiquity of word clouds and most audiences' familiarity with them make these visualisations engaging and insightful. They should also trigger questions – why does this phrase appear more often? These can be good points from which to start guiding your audience through the story of your data, and creating interesting discussions.

 

As a final point, word clouds often carry a level of authority that you need to be careful about. Because the counting of words is seen as non-interpretive and non-subjective, some people may feel they 'trust' what a word cloud shows more than the verbose interpretation of the full, rich data. Hopefully with the guidance above, you can persuade your audience that, while colourful, word clouds are only a one-dimensional dive into the data. Knowing your data and reading the nuance will be what turns your analysis from a one-click feature into a well-communicated 'aha' moment for your field.

 

 

If you'd like to play with word clouds, why not download a free trial of Quirkos? It also has raw word frequency data, and an easy to use interface to manage, code and explore your qualitative data.

 

 

 

 

Quirkos v1.5 is here


 

We are happy to announce the immediate availability of Quirkos version 1.5! As always, this update is a free upgrade for everyone who has ever bought a licence of Quirkos, so download now and enjoy the new features and improvements.

 

Here’s a summary of the main improvements in this release:

 

Project Merge


You can now bring together multiple projects in Quirkos, merging sources, Quirks and coding from many authors at once. This makes teamwork much easier, and allows you to bring in coding frameworks or sources from other projects.

 

Word frequency tools, including:
 

Word clouds! You can now generate customisable word clouds (click on the Report button). Change the shape, word size, rotation and minimum-word cut-off, or choose which sources to include. There is also a default ‘stop list’ (a, the, of, and) of the 50 most frequent words from the British National Corpus, but this can be completely customised. Save word clouds to a standard image file, or as an interactive webpage.
A complete word frequency list of the words occurring across all the sources in your project is also generated in this view.

  • Improved Tree view – now shows longer titles, descriptions and fits more Quirks on the screen
  • Tree view now has complete duplicate / merge options
  • Query results by source name – ability to see results from single or multiple sources
  • Query results now show number of quotes returned
  • Query view now has ‘Copy All’ option
  • Improved CSV spreadsheet export – now clearly shows Source Title, and Quirk Name
  • Merge functions now more logical – default behaviour changed so that you select the Quirk you want to be absorbed into a second one.
  • Can now merge parent and child Quirks to all levels
  • Hovering mouse over Quirks now shows description, and coding summary across sources
  • Reports now generate MUCH faster, no more crashes for projects with hundreds of Quirks. Image generation of hierarchy and overlap views now off by default, turn on in Project Settings if needed
  • Improved overlap view, with rings indicating number of overlaps
  • Neater pop-up password entry for existing projects
  • Copy and pasting quotes to external programmes now shows source title after each quote
  • Individually imported sources now take file name as source name by default

 

Bug fixes

  • Fixed a bug where Quirks would suddenly grow huge!
  • Fixed a rare crash on Windows when rearranging / merging Quirks in tree view
  • Fixed a rare bug where a Quirk was invisible after being re-arranged
  • Fixed an even rarer bug where deleting a source would stop new coding
  • Save As project now opens the new file after saving, and no longer shows blank screen
  • Reports can now overwrite if saved to the same folder as an earlier export
  • Upgrading to new versions on Windows only creates a backup of the last version, not all previous versions, lots of space savings. (It’s safe to delete these old versions once you are happy with the latest one)

 

Watch the new features demonstrated in the video below:

 

 

There are a few other minor tweaks and improvements, so we do recommend you update straight away. Everyone is eligible, and once again there are no changes to project files, so you can keep going with your work without missing a beat. Do let us know if you have any feedback or suggestions (support@quirkos.com).

 


 

If you've not tried Quirkos before, it's a perfect time to get started. Just download the full version and you'll get a whole month to play with it for free!

 

An introduction to Interpretative Phenomenological Analysis


 

Interpretative Phenomenological Analysis (IPA) is an increasingly popular approach to qualitative inquiry, and essentially an attempt to understand how participants experience and make meaning of their world. It should not be confused with the now-ubiquitous style of beer that shares its initials (India Pale Ale), although Interpretative Phenomenological Analysis is similarly accused of being too frequently and imperfectly brewed (Hefferon and Gil-Rodriguez 2011).



While you will often see it described as a ‘method’ or even an analytical approach, I believe it is better described as more akin to an epistemology, with its own philosophical concepts of explaining the world. Like grounded theory, it has also grown into a bounded approach in its own right, with a certain group of methodologies and analytical techniques which are assumed as the ‘right’ way of doing IPA.



At its heart, interpretative phenomenological analysis is an approach to examining data that tries to see what is important to the participant: how they interpret and view their own lives and experiences. This in itself is not ground-breaking in qualitative studies; however, the approach originally grew from psychology, where a distinctly psychological interpretation of how the participant perceives their experiences was often applied. So note that while IPA doesn’t stand for Interpretative Psychological Analysis, it could well do.



To understand the rationale for this approach, it is necessary to engage with some of the philosophical underpinnings, and understand two concepts: phenomenology, and hermeneutics. You could boil this down such that:

   1. Things happen (phenomenology)

   2. We interpret this into something that makes sense to us (hermeneutics - from the Greek word for translate)



Building on the shoulders of the Greek thinkers, two 20th century philosophers are often invoked in describing IPA: Husserl and Heidegger. From Husserl we get the concept that all interpretation comes from objects in an external world, and thus the need to ‘bracket’ our internal assumptions to differentiate what comes from, or can describe, our consciousness. The focus here is on the individual processes of perception and awareness (Larkin 2013). Heidegger introduces the concept of ‘Dasein’, which means ‘there-being’ in German: we are always embedded and engaged in the world. This asks wider questions of what existence means (existentialism) and how we draw meaning from the world.



I’m not going to pretend I’ve read ‘Being and Time‘ or ‘Ideas’, so don’t take my third-hand interpretations for granted. However, I always recommend students read Nausea by Sartre, because it is a wonderful novel which is as much about procrastination as it is about existentialism and the perception of objects. It’s also genuinely funny, and you can find Sartre mocking himself and his philosophy with surrealist lines like: “I do not think, therefore I am a moustache”.



Applying all this philosophy to research, we look for significant events in the lives of the people we are studying, and try to infer through their language how they interpret and make meaning of these events. However, IPA also takes explicit notice of the reflexivity arguments we have discussed before: we can’t dis-embody ourselves (as interpreters) from our own world. Thus, it is important to understand and ‘bracket’ our own assumptions about the world (which are based on our interpretation of phenomena) from those of the respondent, and IPA is sometimes described as a ‘double hermeneutic’ of both the researcher and the participant.



These concepts do not have to lead you down one particular methodological path, but in practice projects intending to use IPA should generally have small sample sizes (perhaps only a few cases), be theoretically open and exploratory rather than testing existing hypotheses, and focus on experience. So a good example research question might be ‘How do people with disabilities experience using doctors’ surgeries?’ rather than ‘Satisfaction with a new access ramp in a GP practice’. In the former example you would also be interested in how participants frame their struggles with access – does it make them feel limited? Angry that they are excluded?



So IPA tends to lend itself to very small, purposive samples of people who share a certain experience. This is especially because it usually implies very close reading of the data, looking in great detail at how people describe their experiences – not just a line-by-line reading, but sometimes also reading between the lines. Appropriate methods, then, are frequently focus groups, interviews and participant diaries. Hefferon and Gil-Rodriguez (2011) note that students often try to sample too many people, and ask too many questions. IPA should be very focused on a small number of relevant experiences.



When it comes to interpretation and analysis, a bottom-up, inductive coding approach is often taken. While this should not be confused with the theory-building aims of grounded theory, the researcher should similarly try to park or bracket their own pre-existing theories, and let the participant’s data suggest the themes. Thematic analysis is usually applied in an iterative approach where many initial themes are created, then gradually grouped and refined, within and across sources.



Usually this entails line-by-line coding, where each sentence from the transcript is given a short summary or theme – essentially a unique code for every line, focusing on the phenomena being discussed (Larkin, Watts and Clifton 2006). Later comes grouping and creating a structure from the themes, either by iterating the process and coding the descriptive themes to a higher level, or by having a fresh read through the data.



A lot of qualitative software packages can struggle with this kind of approach, as they are usually designed to manage a relatively small number of themes, rather than one for each line in every source. Quirkos has definitely struggled to work well for this type of analysis, and although we have some small tweaks in the imminent release (v1.5) that will make this bearable for users, it will not be until the full memo features are included in v1.6 that this will really be satisfactory. However, it seems that most users of line-by-line coding and this method of managing IPA use spreadsheet software (so they can have columns for the transcript, summary, subordinate and later superordinate themes) or a word-processor utilising the comment features.

 

However you approach the analysis, the focus should be on the participant’s own interpretation and meaning of their experiences, and you should be able to craft a story for the reader when writing up that connects the themes you have identified to the way the participant describes the phenomenon of interest.



I’m not going to go much into the limitations of the approach here; suffice it to say that you are obviously limited to understanding participants’ meanings of the world through something like the one-dimensional transcript of an interview. What they are willing to share, and how they articulate it, may not be the complete picture, and other approaches such as discourse analysis may be revealing. Also, make sure that it is really participants’ understandings of experiences you want to examine. IPA posits a very deep ‘walk two moons in their moccasins‘ approach that is not right for broader research questions, perhaps when wanting to contrast the broad opinions of a more diverse sample. Brew your IPA right: know what you want to make, use the right ingredients, have patience in the maturation process, and keep tasting as you go along.



As usual, I want to caution the reader against taking anything from my crude summary of IPA as gospel, and suggest that a true reading of the major texts in the field is essential before deciding if this is the right approach for you and your research. I have assembled a small list of references below that should serve as a primer, but there is much to read, and as always with qualitative epistemologies, a great deal of variety of opinion in discourse, theory and application!

 

 


Finally, don't forget to give Quirkos a try, and see if it can help with your qualitative analysis. We think it's the easiest, most affordable qualitative software out there, so download a one month free trial and see for yourself!



References

Biggerstaff, D. L. & Thompson, A. R. (2008). Qualitative Research in Psychology, 5, 173–183.
http://wrap.warwick.ac.uk/3488/1/WRAP_Biggrstaff_QRP_submission_revised_final_version_WRAP_doc.pdf

Hefferon, K. & Gil-Rodriguez, E. (2011). Methods: Interpretative phenomenological analysis. The Psychologist, 24, 756–759.

Heidegger, M. ( 1962). Being and time (J. Macquarrie & E. Robinson, Trans.). Oxford, UK: Blackwell. (Original work published 1927)

Husserl, E. ( 1931). Ideas: A general introduction to pure phenomenology (W.R. Boyce Gibson, Trans.). London, UK: Allen & Unwin.

IPARG (The Interpretative Phenomenological Analysis Research Group) at Birkbeck college http://www.bbk.ac.uk/psychology/ipa

Larkin, M., Watts, S., & Clifton, E. 2006. Giving voice and making sense in interpretative phenomenological analysis. Qualitative Research in Psychology, 3, 102-120.

Larkin, M., 2013, Interpretative phenomenological analysis - introduction, [accessed online] https://prezi.com/dnprvc2nohjt/interpretative-phenomenological-analysis-introduction/

Smith, J., Jarman, M. & Osborn, M. (1999). Doing interpretative phenomenological analysis. In M. Murray & K. Chamberlain (Eds.) Qualitative health psychology, London: Sage.

Smith J., Flowers P., Larkin M., 2009, Interpretative phenomenological analysis: theory, method and research, London: Sage.
https://us.sagepub.com/sites/default/files/upm-binaries/26759_01_Smith_et_al_Ch_01.pdf

 

 

Archaeologies of coding qualitative data


 

In the last blog post I referenced a workshop session at the International Congress of Qualitative Inquiry entitled the ‘Archaeology of Coding’. Personally, I interpreted the archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel, however, was around revisiting and iterating coding within a single analytical attempt, and this is very important.


In qualitative analysis it is rarely sufficient to read through and code your data only once. An iterative and cyclical process is preferable, often building on and reconsidering previous rounds of coding to reach higher levels of interpretation. This is one way to interpret an ‘archaeology’ of coding – like Jerusalem, the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.


But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I allude to this more in the blog post on ‘Top down or bottom up’ coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then breaking these up into more specific and nuanced insight into the data.

 

Both these iterative techniques, though, are envisaged as part of a single (if long) process of coding. But what about revisiting older research projects? What if you get the opportunity to go back and re-examine old qualitative data and analysis?


Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.


A good example is our teaching dataset, collected after the referendum on Scottish independence a few years ago. This looked at how the debate had influenced voters’ interpretations of the different political parties, and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data would be very interesting in light of these events. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, the data would not be up-to-date enough to show the ‘referendum fatigue’ that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain variance in voting patterns in the changing winds of politics and policy in Scotland.

 

While the research questions and analysis framework devised for the original research project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, as might higher-level themes such as the extent to which people perceive a necessity for referendums, or the value of remaining part of the EU (which was a big question if Scotland became independent). Actually, if this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.

 

Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even from the same participants to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and while difficult to design specifically for this purpose (Calman, Brunton, Molassiotis 2013), can be retroactively extended in some situations.


And of course, this is the real power of archaeology: when it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings, or interpretations of qualitative data. So there can be great, and often unexpected value in revisiting some of your old data. For many people it’s something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick to learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?

 

Against Entomologies of Qualitative Coding



I was recently privileged to chair a session at ICQI 2017 entitled “The Archaeology of Coding”. It had a fantastic panel of speakers, including Charles Vanover, Paul Mihas, Kathy Charmaz and Johnny Saldaña all giving their own take on this topic. I’m going to write about my own interpretation of qualitative coding archaeologies in the next blog post, but for now I wanted to cover an important common issue that all the presenters raised in their presentations: coding is never finished.


In my summary I described this as being like the river in the novel Siddhartha by Herman Hesse: ‘coding is never still’. It should constantly change and evolve, and recoil from attempts to label it as ‘done’ or ‘finished’. Heraclitus said the same thing – “You cannot step twice into the same rivers” – for they constantly change and shift (as do we). When we come back to revisit our coding, and even during the process of coding, change is part of the process.


I keep coming back to the image of butterflies in a museum display case: dead, pinned to the board with a neatly assigned label of the genus. It’s tempting to approach qualitative coding with this entomologist’s approach: creating seemingly definitive and static codes that describe one characteristic of the data.


Yet this taxonomy can create a tension, lulling you into feeling that some codes (and frameworks) are still, complete, and don’t need revision or amendment. This might be true, but it usually isn’t! If you are using some type of open-ended coding or grounded theory approach, creating a static code can be beguiling, and interpreted as showing progress. But instead, try to see every code as a place-holder for a better category or description – try not to lose the ability for the data to surprise you, and resist the temptation to force quotes into narrow categories. Assume that you are never finished with coding.


Unless you are using a very strict interpretation of framework analysis, your first attempt at coding will probably change, evolve as you go through different sources, and take you to a place where you want to try another approach. And your attempts at creating a qualitative classification and coding system might just end up being wrong.


Even in biology, classification attempts are complicated. While the public are still familiar with the different ‘animal kingdom’ groupings, attempts to create a taxonomy in the ‘tree of life’ common descent model have now been succeeded by the modern ‘cladistic’ approach, based around the common history and derived characteristics of a species. And these approaches also have limitations, since they are so complex and subjective (just like qualitative analysis!).

 

For example, if you use the NCBI Taxonomy browser you will see dozens of entries in square brackets. These are organisms currently recognised as misclassified – species placed in the wrong genus. And these problems don’t even include the cases where one species is found, on closer study, to be many unique but significantly separate species. This has even been found to be the case for the common ‘medicinal’ leech!

 

Trying to turn the endless forms most beautiful of the animal ‘kingdoms’ into neat categories is complex, even when just looking at appearance. And these taxonomic groupings tell us little of the diverse range of behaviour and life behind the dead pinned insects.


In a similar way, when we code and analyse qualitative data, we are attempting to listen to the voices of our respondents, and to distil the rich multitude of lives and experiences into a few key categories that rise up to us. We often need to recognise the reductive nature of this practice, and keep coming back to the detailed, rich data behind it. In a way, this is like the difference between knowing the Latin name for a species of butterfly and knowing how it flies, its favourite flowers, and all the details that actually make it unique – not just a name or number.

 

 

In Siddhartha, the central character finds nirvana listening to the chaotic, blended sound of a river, representing the lives and goals of all the people in his life and the world.


“The river, which consisted of him and his loved ones and of all people, he had ever seen, all of these waves and waters were hurrying, suffering, towards goals, many goals, the waterfall, the lake, the rapids, the sea, and all goals were reached, and every goal was followed by a new one, and the water turned into vapour and rose to the sky, turned into rain and poured down from the sky, turned into a source, a stream, a river, headed forward once again”


Like the river, qualitative analysis can be a circle, with each iteration and reading different from the last, building on the previous work, but always listening to the data, not being quick to judge or categorise. Until we have reached this analytical nirvana, it is difficult to let go of our data, and feel that it is complete. This complex, turbulent flow of information defies our attempts to neatly categorise and label it, and the researcher’s quest for neatness and uncovering the truth under our subjectivity demands a single answer and categorisation scheme. But, just like taxonomy, there may never be a state when categorisation is complete, in a single or multiple interpretation. New discoveries, or new context can change it all.


We, the researchers, are a dynamic and fallible part of that process – we interpret, we miscategorise, we impose bias, we get tired and lose concentration. When we are lazy and quick, we take the comfort of labels and boxes, lulled into conformity by the seductive ease of software and coloured markers. But when we become good qualitative researchers – when we are self-critical and self-reflexive, finally learning to fully listen – then we achieve research nirvana:
 

“Siddhartha listened. He was now nothing but a listener, completely concentrated on listening, completely empty, he felt, that he had now finished learning to listen. Often before, he had heard all this, these many voices in the river, today it sounded new. Already, he could no longer tell the many voices apart, not the happy ones from the weeping ones, not the ones of children from those of men, they all belonged together”

 

Download a free trial of Quirkos today and challenge your qualitative coding!

 

 

 

Quirkos vs Nvivo: Differences and Similarities

quirkos vs nvivo

I’m often asked ‘How does Quirkos compare to Nvivo?’. Nvivo is by far the largest player in the qualitative software field, and is the product most researchers are familiar with. So when looking at alternatives like Quirkos (but also Dedoose, ATLAS.ti, MAXQDA, Transana and many others) people want to know what’s different!

 

In a nutshell, Quirkos has far fewer features than Nvivo, but wraps them up in an easier to use package. So Quirkos does not have support for integrated multimedia, Twitter analysis, quantitative analysis, memos, or hypothesis mapping and a dozen other features. For large projects with thousands of sources, those using multimedia data or requiring powerful statistical analysis, the Pro and Plus versions of Nvivo will be much more suitable.


Our focus with Quirkos has been on providing simple tools for exploring qualitative data that are flexible and easy to use. This means that people can get up and running quicker in Quirkos, and we hear that a lot of people who are turned off by the intimidating interface in Nvivo find Quirkos easier to understand. But the basics of coding and analysing qualitative data are the same.


In Quirkos, you can create and group themes (called Nodes in Nvivo), and use drag and drop to attach sections of text to them. You can perform code and retrieve functions by double clicking on the theme to see text coded to that node. And you can also generate reports of your coded data, with lots of details about your project.


Like Nvivo, Quirkos can handle all the common text formats, such as PDFs, Word files and plain text files, and you can copy and paste from any other source, such as web pages. Quirkos also has tools to import survey data, which is not supported in the basic version of Nvivo.


While Quirkos doesn’t have ‘matrix coding’ in the same way as Nvivo, we do have side-by-side comparison views, where you can use any demographic or quantitative data about your sources to do powerful sub-set analysis. A lot of people find this more interactive, and we try and minimise the steps and clicks between you and your data.


Although Quirkos doesn’t really have any dedicated tools for quantitative analysis, our spreadsheet export allows you to bring data into Excel, SPSS or R where you have much more control over the statistical models you can run than Nvivo or other mixed-methods tools allow.

 

However, there are also features in Quirkos that Nvivo doesn’t have at the moment. The most popular of these is the Word export function. This creates a standard Word file of your complete transcripts, with your coding shown as colour-coded highlights. It’s just like using pen and highlighter, but you can print, edit and share with anyone who can open a Word file.


Quirkos also has a constant save feature, unlike Nvivo, which has to be set up to save at a certain time interval. This means that even in a crash you don’t lose any work, something I know people have had problems with in Nvivo.


Another important differentiator for some people is that Quirkos is the same on Windows and Mac. With Nvivo, the Windows and Mac versions have different interfaces, features and file formats. This makes it very difficult to switch between the versions, or collaborate with people on a different platform. We also never charge for our training sessions, and all our online support materials are free to download on our website.


And we didn’t mention the thing people love most about Quirkos – the clear visual interface! With your themes represented as colourful, dynamic bubbles, you are always hooked into your data, and have the flexibility to play, explore and drill down into the data.


Of course, it’s best to get some impartial comparisons as well, so you can get reviews from the University of Surrey CAQDAS network here: https://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing/


But the best way to decide is to try for yourself, since your style of working and learning, and what you want to do with the software, will always be different. Quirkos won’t always be the best fit for you, and for a lot of people sticking with Nvivo will provide an easier path. For new users, learning the basics of qualitative analysis in Quirkos can be a great first step, and make transitioning to a more complex package like Nvivo easier in the future. But download our free trial (ours lasts for a whole month, not just 14 days!) and let us know if you have any questions!

 

Teaching Qualitative Methods via Social Media

teaching qualitative methods social media

 

This blog now has nearly 120 posts about all different kinds of qualitative methods, and has grown to host thousands of visitors a month. There are lots of other great qualitative blogs around, including Margaret Roller’s Research Design Review, the Digital Tools for Qualitative Research group and the newly relaunched Qual Page.


But these are only one part of the online qualitative landscape, and there are an increasing number of people engaged in teaching, commenting and exploring qualitative methods and analysis on social media. By this I mean popular platforms like Twitter, Facebook, Linkedin, Academia.edu, Researchgate and even Instagram and Snapchat. And yes, people really are using Instagram to share pictures and engage with others doing qualitative research.


So the call for a talk at the International Conference of Qualitative Inquiry (ICQI 2017) asked: How can educators reach out and effectively use social media as a way to teach and engage students with qualitative methodologies?


Well, a frequent concern of teachers is: how do you teach the richness and complexity of qualitative methods in something like a Tweet, with its 140 character limit? Even the previous sentence would be too long for a Tweet! While other platforms such as comments on Facebook don’t have such tight limits, they are still geared towards short statements. Obviously, detailing the nuances of grounded theory in this way is not realistic. But it can be a great way to start a conversation or debate, and to link and draw attention to other, longer sources of media.


For example, the very popular ‘Write That PhD’ Twitter feed by Dr Melanie Haines of the University of Canberra has nearly 20 thousand followers. The feed offers advice on writing and designing a PhD, and often posts or retweets pictures which contain much more detailed tips on writing a thesis. This is a good way of getting around the character limit, and pictures, especially when they are not just a long block of text, are a good way to draw the eye. Social media accounts can also be used to link to other places (such as a blog) where you can write much longer materials – and this is an approach we use a lot.


But to use social media effectively for outreach and engagement, it is also important to understand the different audiences of each platform, and the subsets within each site. For example, Snapchat has a much younger audience than Facebook, and academic-focused platforms might be a good place to network with other academics, but don’t tend to have many active undergraduates.


It’s also important to think about how students will be looking and searching for information, and how to get into the feeds that they look at on a daily basis. On Facebook and especially Twitter, hashtags are a big part of this, and it’s worth researching the popular terms people search for that are relevant to your research or teaching. For example, the #phdlife and #phdchat tags are two of the most popular, while #profchat and #research have their own niches and audiences too. While it can seem like a good idea to start a new hashtag for yourself like #lovequalitative, it takes a lot of work and influential followers to get it off the ground.

 

Don’t forget that hashtags and keywords are just one way to target different audiences. Twitter also has ‘lists’ of users with particular interests, and Linkedin and Facebook have groups and pages with followers which it can be worth joining and contributing to. On Researchgate and Academia.edu the question forums are very active, and there are great discussions about all aspects of qualitative research.


But the most exciting part of social media for teaching qualitative research is the conversations and discussions that you can have. Since there are so many pluralities of theory and method, online conversations can challenge and promote the diversity of qualitative approaches. This is a challenge as well, as it requires a lot of time, ideally sustained over a long period, to keep replying to comments and questions that pop up. However, the beauty of all these platforms is that they effectively create archives for you, so if there was a discussion about qualitative diary methodologies on a Facebook group a year ago, it will still be there, and others can read and learn from it. Conversely, new discussions can pop up at any time (and on any of the different social media sites), so keeping on top of them all can be time consuming.


In short, there is a key rule for digital engagement, be it for teaching or promoting a piece of research: write once, promote often. Get a digital presence on a blog or long form platform (like Medium) and then promote what you’ve written on as many social media platforms as you can. The more you promote, the more visible and the higher rated your content will become, and the greater audience you can engage with. And the best part of all is how measurable it is. You can record the hits, follows and likes of your teaching or research and show your REF committee or department the extent of your outreach. So social media can be a great feather to add to your teaching cap!

 

Writing qualitative research papers

writing qualitative research articles papers

We’ve actually talked about communicating qualitative research and data to the public before, but never covered writing journal articles based on qualitative research. This can often seem daunting, as the prospect of converting dense, information rich studies into a fairly brief and tightly structured paper takes a lot of work and refinement. However, we’ve got some tips below that should help demystify the process, and let you break it down into manageable steps.

 

Choose your journal

The first thing to do is often what’s left till last: choose the journal you want to submit your article to. Since each journal will have different style guidelines, types of research they publish and acceptable lengths, you should actually have a shortlist of journals you want to publish with BEFORE you start writing.

 

To make this choice, there are a few classic pointers. First, make sure your journal will publish qualitative research. Many are not interested in qualitative methodologies; see the recent debates around the BMJ for how contested this continues to be. It’s a good idea to choose a journal that has published other articles you have referenced, or that are on a similar topic. This is a good sign that the editors (and reviewers) are interested in, and understand, this area.

 

Secondly, there are some practical considerations. For those looking for tenure or to one day be part of schemes that assess the quality of academic institutions by their published work such as the REF (in the UK) or PBRF (in New Zealand) you should consider ‘high impact’ or ‘high tier’ journals. These are considered to be the most popular journals in certain areas, but will also be the most competitive to get into.

 

Before you start writing, you should also read the guidance for authors from the journal, which will give you information about length, required sections, how they want the summary and keywords formatted, and the type of referencing. Many are based on the APA style guidelines, so it is a good idea to get familiar with these.

 


Describing your methodology, literature review, theoretical underpinnings

When I am reviewing qualitative articles, the best ones describe why the research is important, and how it fits in with the existing literature. They then make it clear how the researcher(s) chose their methods, who they spoke to and why they were chosen. It’s then clear throughout the paper which insights came from respondent data, and when claims are made how common they were across respondents.

 

To make sure you do this, include a separate section to detail your methods and recruitment aims, and describe the people you spoke to – not just how many, but what their background was and how they were chosen, eventually noting any gaps and what impact they could have on your conclusions. Just because this is a qualitative paper doesn’t mean you don’t have to say how many people you spoke to, but there is no shame in that number being as low as one for a case study or autoethnography!

 

Secondly, you must situate your paper in the existing literature. Read what has come before, critique it, and make it clear how your article contributes to the debate. This is the thing that editors are looking for most – make the significance of your research and paper clear, and why other people will want to read it.

 

Finally, it’s very important in qualitative research papers to clearly state your theoretical background and assumptions. So you need to reference literature that describes your approach to understanding the world, and be specific about the interpretation you have taken. Just saying ‘grounded theory’ for example is not enough – there are a dozen different conceptualisations of this one approach.
 

 

Reflexivity

It’s not something that all journals ask for, but if you are adopting many qualitative epistemologies, you are usually taking a stance on positivism, impartiality, and the impact of the researcher on the collection and interpretation of the data. This sometimes leads to the need for the person(s) who conducted the research to describe themselves and their backgrounds to the reader, so they can understand the world view, experience and privilege that might influence how the data was interpreted. There is a lot more on reflexivity in this blog post.


How to use quotations

Including quotations and extracts from your qualitative data is essential, and a common way to make sure that you back up your description of the data with evidence that supports your findings. However, it’s important not to make the text too dense with quotations. Try to keep to just a few per section, and integrate them into your prose as much as possible, rather than starting every one with ‘participant x said’. I also like to try and show divergence in the respondents, so have a couple of quotes that show alternative viewpoints.

 

On a practical note, make sure any quotations are formatted according to the journal’s specifications. However, if they don’t have specific guidelines, try and make them clear by always giving them their own indented paragraph (if more than a sentence) and clearly label them with a participant identifier, or significant anonymised characteristic (for example School Administrator or Business Leader). Don’t be afraid to shorten the quotation to keep it relevant to the point you are trying to make, while keeping it an accurate reflection of the participant’s contribution. Use ellipsis (…) to show where you have removed a section, and insert square brackets to clarify what the respondent is talking about if they refer to ‘it’ or ‘they’, for example [the school] or [Angela Merkel].

 


Don’t forget visualisations

If you are using qualitative analysis software, make sure you don’t just use it as a quotation finder. The software will also help you do visualisations and sub-set analysis, and these can be useful and enlightening to include in the paper. I see a lot of people use an image of their coding structure from Quirkos, as this quickly shows the relative importance of each code in the size of the bubble, as well as the relationships between quotes. Visual outputs like this can get across messages quickly, and really help to break up text heavy qualitative papers!

 


Describe your software process!

No, it’s not enough to just say ‘We used Nvivo’. There are a huge number of ways you could have used qualitative analysis software, and you need to be more specific about what you used the software for, how you did the analysis (for example, framework-based or emergent coding) and how you got outputs from the software. If you did coding with other people, how did this work? Did you sit together and code at one time? Did you each code different sources, or go over the same ones? Did you do some form of inter-rater reliability check, even if it was not a quantitative assessment? Finally, make sure you include your software in the references – see the APA guides for how to format this. For Quirkos this would look something like:

 

Quirkos Software (2017). Quirkos version 1.4.1 [Computer software]. Edinburgh: Quirkos Limited.

 

Quirkos - qualitative analysis software

 


Be persistent!

Journal publication is a slow process. Unless you get a ‘desk rejection’, where the editor immediately decides that the article is not the right fit for the journal, hearing back from the reviewers could take months or even a year. Ask colleagues and look at the journal information to get an idea of how long the review process takes for each journal. Finally, when you do get feedback it might be negative (a rejection) or unhelpful (when the reviewers don’t give constructive criticism). This can be frustrating, especially when it is not clear how the article can be made better. However, there are excellent journals such as The Qualitative Report that take a collaborative rather than combative approach to reviewing articles. This can be really helpful for new authors.

 

Remember that a majority of articles are rejected at any journal, and some top-tier journals have acceptance rates of 10% or less. Don’t be disheartened; read the comments, keep on a cycle of quickly improving your paper based on the feedback you get, and either send it back to the journal or find a more appropriate home for it.

 

Good luck, and don’t forget to try out Quirkos for your qualitative analysis. Our software is easy to use, and makes it really easy to get quotes into Word or other software for writing up your research. Learn more about the features, and download a free, no-obligation trial.

 

 

Does software lead to the homogenisation of qualitative research?

printing press homogenisation qualitative method

 

In the last couple of weeks there has been a really interesting discussion on the Qualrs-L UGA e-mail discussion group about the use of software in qualitative analysis. Part of this was the question of whether qualitative software leads to the ‘homogenisation’ of qualitative research and analysis. As I understand it, this is the notion that the qualitative sphere is contracting from diverse beginnings, narrowing to a series of commonly used and accepted methods of collection and interpretation. For example, the most popular are probably semi-structured interview transcripts coupled with some type of framework based interpretation. Are more and more qualitative researchers churning out work using the same methods? Is modern qualitative technology leading to unified outputs, like the introduction of the printing press, or is it helping to increase the accessibility of the discipline?


While I do see some evidence of trends emerging in the literature and research articles, I do not see them as inevitable, or feel that alternative approaches have been relegated, or that software need be a force for homogenisation.


Actually, I see a lot of similarities between this debate and a keynote talk on conformity in qualitative research by Professor Maggie MacLure at the ICQI conference last year. Referencing Deleuze, Nietzsche and the Greek myths, she described the need to balance the dichotomy of two of the sons of Zeus: Apollo and Dionysus. Dionysus represents chaos, emotion (and the excess drinking of wine), while Apollo masters truth, rational thinking and prophecy. One can argue that following Apollo can lead to homogenisation, while too much Dionysus in your research can lead to chaos and a difficulty in drawing meaningful conclusions (especially with the wine drinking, although many researchers I know would disagree on this important point when writing up research).


However, a little creativity is important, especially at the point of choosing your methodology. In qualitative research, you can use arts-based research, taking participant creations of drawings, games or even pottery as data. There are real challenges in keeping the richness of these creative methods alive through the analysis process: how do you analyse a drawing by a participant? Yet it’s rarely enough to just look at transcripts of respondents talking about their creations while ignoring the artwork itself. So take a pinch of the creative to ward against homogenisation: the excellent overview of Creative Research Methods by Helen Kara is a great place to start.


But what about the analysis and qualitative software? Can this be creative and unique as well?


I would argue that it can – especially with certain tools. I think there is a tendency for software to ‘lead’ users into particular behaviours and approaches, which is why users should look at the Five Level QDA approach advocated by Woolf and Silver and decide how they want to analyse their data before choosing a software package. But most software is very flexible. Even a tool like Atlas.ti, which was originally designed for grounded theory, can be used for other theoretical approaches (Friese 2014). However, you can still see this legacy in the design: for example, the difficulty of creating a hierarchical coding structure in Atlas.ti remains today.


The design methodology for Quirkos was to create a very simple qualitative software tool that allowed people to use it in any way they wanted. And in my experience from 3 years of running a qualitative software company, I can assure you that there is little risk of homogenisation among software users! Users occasionally share their projects with me to get advice on a problem, and I can see people using the features in ways we never envisaged! I also get lots of e-mails with suggestions on how we can make small tweaks to allow people to use Quirkos in different ways. The demand from users is not to adopt the same approach over and over again, but to be able to customise the software to their own needs and ways of working. And again, I can assure you these approaches are more diverse than I ever imagined.


And what about the argument that software creates mechanical and thoughtless analysis? Well, I think this is a risk, and I’ve written about the discipline that users need to avoid it. But any reductive analytical process risks becoming automatic, and thus removing the richness of the qualitative data. Even a pen-and-highlighters approach to analysis can become automatic and brainless if not done with care, and when re-reading data the eye can skip to the brightly coloured sections, sometimes missing vital context.


Ironically, there is also some homogenisation in the software industry itself. Many scholars, including Fielding and Lee (1998), have talked about ‘creeping featurism’: a trend for software packages to become more similar (and complex) as they adopt tools and functionality from each other. They tend to have similar interfaces, and function in ways that often seem very similar to the new user. Now, a fan of any one qualitative software package will quickly let you know how superior X is to Y because of a subtle aspect of the layout, and how easy it is to work in a particular way. Again, this suggests that software itself does not lead to homogenisation of approaches.


There are more than a dozen qualitative software packages in active development at the moment, and between them they offer a fantastic variety of conceptual and practical approaches to data coding and management. For most people I speak to, the choice of software is bewildering, just like the variety of methods that can be used in qualitative research. I hope that new students are guided so that, rather than being shoehorned into a particular approach, they are excited by the dizzying possibilities of qualitative research.


If you would like to give the unique Quirkos experience a try, we have a free trial you can download so you can see if the simple, visual and colourful approach is right for your qualitative research. And as ever, if you have any questions, feel free to get in touch with us at support@quirkos.com.

 

Quirkos v1.4.1 is now available for Linux

quirkos for linux

 

A little later than our Windows and Mac version, we are happy to announce that we have just released Quirkos 1.4.1 for Linux. There are some major changes to the way we release and package our Linux version, so we want to provide some technical details of these, and installation instructions.


Previously our releases had a binary-based and distro independent installer. However, this was based on 32 bit libraries to provide backwards compatibility, and required a long list of dependencies to work on many systems.


From this release forward, we are releasing Quirkos as an AppImage – a single file which contains a complete image of the software. This should improve compatibility across different distros, and also remove some of the dependency hell involved in the previous installer.


Once you download the .AppImage file, you will need to give the file executable permissions (a standard procedure when downloading binaries). You can do this at the command-line just by typing ‘chmod +x Quirkos-1.4.1-x86_64.AppImage’. This step can also be done with a File Manager GUI like Nautilus (the default in Gnome and Ubuntu) by right clicking on the downloaded file, selecting the Permissions tab, and ticking the ‘Allow executing file as program’ box. Then you can start Quirkos from the command-line, or by double clicking on the file.
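The command-line route can be sketched as below. The touch line simply stands in for the download step so the snippet is self-contained; skip it and use your real downloaded file:

```shell
# Stand-in for the downloaded file (skip this line and use your real download)
touch Quirkos-1.4.1-x86_64.AppImage

# Grant executable permissions, as Linux requires for downloaded binaries
chmod +x Quirkos-1.4.1-x86_64.AppImage

# Confirm the executable bit is now set
test -x Quirkos-1.4.1-x86_64.AppImage && echo "ready to launch"
```

From there, running ./Quirkos-1.4.1-x86_64.AppImage (or double clicking the file) starts Quirkos.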


Since an AppImage is essentially a ‘live’ filesystem contained in a single file, there is no installation needed, and if you want to create a Desktop shortcut to the software stored in a different location, you will have to create one yourself.
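If you do want a menu entry, a minimal desktop launcher works on most distros. This is just a sketch: the Exec path below is a placeholder, so point it at wherever you actually keep the AppImage.

```shell
# Make sure the per-user applications directory exists
mkdir -p ~/.local/share/applications

# Write a minimal launcher (the Exec path is an example -- use your own location)
cat > ~/.local/share/applications/quirkos.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Quirkos
Exec=/home/you/Apps/Quirkos-1.4.1-x86_64.AppImage
Terminal=false
EOF
```

After your desktop environment refreshes its menus (logging out and in usually does it), Quirkos should appear alongside your other applications.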
 

Secondly, we have also moved to a 64 bit release for this version of Quirkos. While we initially wanted to provide maximum compatibility with older computers, the 32 bit build actually created a headache for the vast majority of Linux users on 64 bit installations: they were required to install 32 bit libraries for many common packages (if they did not have them already), creating duplication and huge install requirements. Now Quirkos should run out-of-the-box for the vast majority of users.


Should you prefer the older 32 bit installer package, you can still download the old version from here:
https://www.quirkos.com/quirkos-1.4-linux-installer.run


Supporting Linux is really important to us, and we are proud to be the only major commercial qualitative software company creating a Linux version, let alone one that is fully feature- and project-compatible with the Windows and Mac builds. While great projects like RQDA are still supported, TAMS Analyzer and Weft QDA have not been updated for Linux in many years, and are pretty much impossible to build these days. Dedoose is an option on Linux since it is browser based, but sometimes requires some tweaking to get Flash running properly. Adobe AIR for Linux is no longer supported, so the Dedoose desktop app is sadly no longer an option.
 

But Quirkos will keep supporting Linux, and provide a real option for qualitative researchers wanting to use free and open platforms.


We REALLY would love to have your feedback on our new Linux release – positive, negative or neutral! We still have a relatively small number of users on Linux, so your experiences are extra important to us. Is the AppImage more convenient? Have you had any dependency problems? Would you prefer we kept providing 32 bit packages? E-mail us at support@quirkos.com and let us know!