Tips for managing mixed method and participant data in Quirkos and CAQDAS software

mixed method data

 

Even if you are working with purely qualitative data, like interview transcripts, focus groups, participant diaries, research diaries or ethnographic notes, you will probably also have some categorical data about your respondents. This might include demographic data, your own reflexive notes, context about the interview, or the circumstances around the data collection. This discrete or even quantitative data can be very useful for organising your data sources across a qualitative project, and it can also be used to compare groups of respondents.

 


It’s also common to be working with more extensive mixed data in a mixed method research project. This frequently requires triangulating survey data with in-depth interviews for context and deeper understanding. However, much survey data also includes qualitative text data in the form of open-ended questions, comments and long written answers.

 


This blog has looked before at how to bring in survey data from online survey platforms like SurveyMonkey, Qualtrics and LimeSurvey. Whatever platform you are using, it’s easy to do: just export the responses as a CSV file, which Quirkos can read and import directly. You’ll get the option to choose whether each question should be treated as discrete data, a longer qualitative answer, or even the name/identifier for each source.
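If you want to check in advance which columns are likely to be treated as discrete data and which as longer qualitative answers, you can glance over the export with a short script before importing. This is only a rough sketch, assuming Python with pandas installed and a hypothetical export file called survey_export.csv – the real choice is made in the Quirkos import wizard itself:

import pandas as pd

# Hypothetical CSV export from an online survey platform
df = pd.read_csv("survey_export.csv")

# Rough heuristic: long, varied answers look like open-ended qualitative
# questions; short answers with few unique values look like discrete data.
for column in df.columns:
    answers = df[column].dropna().astype(str)
    if answers.empty:
        continue
    avg_length = answers.str.len().mean()
    unique_values = answers.nunique()
    kind = "qualitative text" if avg_length > 100 else "discrete data"
    print(f"{column}: ~{avg_length:.0f} chars on average, {unique_values} unique values -> {kind}")

Nothing here changes your data – it just prints a one-line summary per question so you know what to expect when the import wizard asks how to treat each column.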

 


But even if you haven’t collected your data using an online platform, it is quite easy to format it in a spreadsheet. I would recommend this for many studies: it’s simply good data management to be able to look at all your participant data together. I often have a table of respondent data (password protected, of course) which contains columns for names, contact details, whether I have consent forms, as well as age, occupation and other relevant information. During data collection and recruitment, having this information neatly arranged helps me remember who I have contacted about the research project (and when), who has agreed to take part, as well as suggestions from snowball sampling for other people to contact.
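As a rough illustration, such a table is just a flat spreadsheet with one row per respondent. The column names below are only hypothetical examples, not a required format, and in practice the file holding names and contact details should be kept password protected as noted above. A minimal sketch using Python’s standard csv module:

import csv

# Hypothetical participant-tracking table: one row per respondent, with
# recruitment and consent status alongside basic demographic details.
fieldnames = ["ID", "Name", "Contact", "Consent form", "Age",
              "Occupation", "Audio file", "Transcript file", "Notes"]

participants = [
    {"ID": "P01", "Name": "Alex", "Contact": "alex@example.com",
     "Consent form": "Yes", "Age": 34, "Occupation": "Teacher",
     "Audio file": "int01.mp3", "Transcript file": "int01.txt",
     "Notes": "Suggested two colleagues via snowball sampling"},
]

with open("participants.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(participants)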

 


Finally, a respondent ‘database’ like this can also be used to record my own notes on the respondents and the data collection. If there is someone I have tried to contact many times but who seems reluctant to take part, this is important to note. It can remind me when I have agreed to interview people, and keep together my own comments on how well each interview went. I can also record which audio and transcript files contain the interview for each respondent, acting as a ‘master key’ for the anonymised recordings.

 


So once you have your long-form qualitative data, how best to integrate it with the rest of the participant data? Again, I’m going to give examples using Quirkos here, but similar principles will apply to many other CAQDAS/QDA software packages.

 


First, you could import the spreadsheet data as is, and add the transcripts later. To do this, just save your participant database as a CSV file in Excel, Google Docs, LibreOffice or your spreadsheet software of choice. You can bring the file into a blank Quirkos project using the ‘Import source from CSV’ option on the bottom right of the screen. The wizard on the next page will let you choose how you want to treat each column in the spreadsheet, and each row of data will become a new source. When you have brought in the data from the spreadsheet, you can bring the qualitative data in individually as the text source for each participant, copying and pasting from wherever you have the transcript data.

 


However, it’s also possible to just put the text into a column in the spreadsheet. It might look unmanageable in Excel when a single cell has pages of text data, but it makes for an easy one-step import into Quirkos. Now when you bring the data into Quirkos, just select the column with the text data as the ‘Question’ and the discrete data columns as ‘Properties’ (although they should be auto-detected as such).
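If your transcripts are sitting in separate files, you don’t have to paste them into the spreadsheet by hand. The sketch below is only illustrative (the file names and column headings are hypothetical, and it assumes the participants.csv layout from the earlier example): it reads each transcript into a single ‘Transcript’ column and drops the identifying columns before writing a new CSV ready for import.

import csv
from pathlib import Path

# Read the participant table, then add one long-text column per respondent
# holding the full transcript, so each row becomes a source on import.
with open("participants.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    transcript_path = Path("transcripts") / row["Transcript file"]
    row["Transcript"] = transcript_path.read_text(encoding="utf-8") if transcript_path.exists() else ""
    # Drop directly identifying columns so the import file stays anonymised.
    for column in ("Name", "Contact"):
        row.pop(column, None)

fieldnames = list(rows[0].keys())
with open("quirkos_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

The remaining short columns (age, occupation and so on) can then be treated as Properties, with the Transcript column as the qualitative ‘Question’.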

 


You can also do direct data entry in Quirkos itself, and there are some features to make this quick and relatively painless. The Properties and Values editor allows you to create categories and values to define your sources. There are also built-in value sets for True/False, Yes/No, options from 1 to 10, or Likert scales from Agree to Disagree. These let you quickly enter common types of data and select them for each source. It’s also possible to export this data later as a CSV file to bring back into spreadsheet software.

 

mixed method data entry in quirkos

 

Once your data has been coded in Quirkos, you can use tools like the query view and the comparison views to quickly see differences between groups of respondents. You can also create simple graphs and outputs of your quantitative and discrete data. Having not just demographic information, but also your notes and thoughts together, provides vital context for properly interpreting your qualitative and quantitative data.

 

 

A final reason to keep a well-maintained database of your research data is to make sure that it is properly documented for secondary analysis and future use. Should you ever want to work with the data again, share it with another research team, or share it with the wider community, an anonymised data table like this helps make sure the research has the right metadata to be used for different lines of enquiry.

 

 

Get an overview of Quirkos and then try for yourself with our free trial, and see how it can help manage pure qualitative or mixed method research projects.

 

 

 

What actually is Grounded Theory? A brief introduction

grounded theory

 

“It’s where you make it up as you go along!”

 

For a lot of students, ‘grounded theory’ describes a qualitative analytical method where you create a coding framework on the fly, from interesting topics that emerge from the data. However, that's not really accurate. There is a lot more to it, and a myriad of different approaches.


Basically, grounded theory aims to create a new theory for interpreting the world, either in an area where there isn’t any existing theory, or where you want to challenge what is already out there. Although often overused, it is a valuable way of approaching qualitative research when you aren’t sure what questions to ask. However, it is also a methodological can of worms, with a number of different approaches and a confusing literature.


One of my favourite quotes on the subject is from Dey (1999), who says that there are “probably as many versions of grounded theory as there are grounded theorists”. And it can be a problem: a quick search of Google Scholar will show literally hundreds of qualitative research articles with the phrase “grounded theory was used” and no more explanation than this. If you are lucky, you’ll get a reference, probably to Strauss and Corbin (1990). And you can find many examples in the peer-reviewed literature describing grounded theory as if there were only one approach.

 

Realistically there are several main types of grounded theory:

 

Classical (CGT)
Classical grounded theory is based on the Glaser and Strauss (1967) book “The Discovery of Grounded Theory”, in which it is envisaged more as a theory-generation methodology than just an analytical approach. The idea is that you examine data and discover in it new theory – new ways of explaining the world. Here everything is data, and you should include fieldwork notes as well as other literature in your process. However, a delay is recommended so that the literature is not examined first (as in a conventional literature review), creating bias too early; instead, existing theory is engaged with later as something to be challenged.


Here the common coding types are substantive and theoretical – an iterative one-two punch which gets you from data to theory. Coding is considered to be very inductive, taking little initial direction from the literature.

 

Modified (Straussian)
The way most people think about grounded theory probably links closest to the Strauss and Corbin (1990) interpretation, which is more systematic and concerned with coding and structuring qualitative data. It traditionally proposes a three-stage (or sometimes two-stage) iterative coding approach: first creating open codes (inductive), then grouping and relating them with axial coding, and finally a process of selective coding. In this approach, you may consider a literature review to be a restrictive process, binding you to prejudices from existing theory. But depending on the interpretation, modified grounded theory might be more action oriented, and allow more theory to come from the researcher as well as the data. Speaking of which…

 

Constructivist
The seminal work on constructivism here is from Charmaz (2000, 2006), and it’s about the way researchers create their own interpretations of theory from the data. It aims to challenge the idea that theory can be ‘discovered’ from the data – as if it were just lying there, neutral and waiting to be unearthed. Instead it tries to recognise that theory will always be shaped by the way researchers and participants create their own understanding of society and reality. This engagement between participants and researchers is often cited as a key part of the constructivist approach.
Coding stages would typically be open, focused and then theoretical. Whether you see this as being substantively different from the ‘open – axial – selective’ modified grounded theory strategy is up to you. You’ll see many different interpretations and implementations of all these coding approaches, so focus more on choosing the philosophy that lies behind them.

 

Feminist
A lot of the literature here comes from the nursing field, including Kushner and Morrow (2003), Wuest (1995), and Keddy (2006). There are clear connections here with constructivist and post-modern approaches: especially the rejection of positivist interpretations (even in grounded theory!), recognition of multiple possible interpretations of reality, and the examination of diversity, privilege and power relations.

 

Post-modern
Again, a really difficult strand to try and label, but for starters think Foucault, power and discourse. Mapping of the social world can be important here, and some writers argue that the practice of trying to generate theory at all is difficult to reconcile with a postmodern interpretation. This is a reaction against the positivist approach some see as inherent in classical grounded theory. As for where this leaves the poor researcher practically, there is at least one main suggested approach, from Clarke (2005), who focuses on mapping the social world, including its actors, and noting what has been left unsaid.

 

There are also what seem to me to be combinations of a grounded theory approach with a particular methodology, such as discursive grounded theory, where the focus is more on the language used in the data (McCreaddie and Payne 2010). It basically seeks to integrate discourse analysis, looking at how participants use language to describe themselves and their worlds. However, I would argue that many different ways of analysing data, like discourse analysis, can be combined with grounded theory approaches, so I am not sure they form a category in their own right.

 

 

To do grounded theory justice, you really need to do more than read this crude blog post! I’d recommend the chapter on Grounded Theory in Savin-Baden and Howell Major’s (2013) textbook on Qualitative Research. There’s also the wonderfully titled "A Novice Researcher’s First Walk Through the Maze of Grounded Theory" by Evans (2013). Grounding Grounded Theory (Dey 1999) is also a good read – much more critical and humorous than most. However, grounded theory is such a pervasive trope in qualitative research, and indeed is seen by some to define qualitative research, that it does require some understanding and engagement.

 

But it’s also worth noting that, for practical purposes, it’s not necessary to get involved in all the infighting and debate in the grounded theory literature. For most researchers the best advice is to read a little of each, and decide which approach is going to work best for you based on your research questions and personal preferences. Even better, if you can find another piece of research that describes a grounded theory approach you like, you can follow their lead, citing them or their preferred references. Or, as Dey (1999) notes, you can just create your own approach to grounded theory! Many argue that grounded theory encourages such interpretation and pluralism – just be clear to yourself and your readers what you have chosen to do and why!

 

Merging and splitting themes in qualitative analysis

split and merge qual codes

To merge or to split qualitative codes, that is the question…

 

One of the most asked questions when designing a qualitative coding structure is ‘How many codes should I have?’. It’s easy to start a project thinking that just a few themes will cover the research questions, but sooner or later qualitative analysis tends towards a ballooning thematic structure, and before you’ve even started coding you might have a framework with dozens of codes. And while going through and analysing the data, you might end up with another couple of dozen more. So it’s quite common for researchers to end up with more than a hundred codes (or sometimes hundreds)!

 

This can be alarming for students doing qualitative analysis for the first time, but I would argue it’s fine in most situations. While it can be confusing and disorienting if you are using paper and highlighters, when using CAQDAS software a large number of themes can be quite manageable. However, this itself can be a problem, since qualitative software makes it almost too easy to create an unwieldy number of codes. While some restraint is always advisable, when I am running workshops I usually advise new coders not to worry, since with the software it is easier to merge codes later than to split them apart.

 

I’m going to use the example of Quirkos here, but the same principle applies to any qualitative data analysis package. When you are going through and analysing your qualitative text sources, reading and coding them is the most time-consuming part. If you create a new code for a theme halfway through coding your data because you can see it is becoming important, you will have to go back to the beginning and re-read the already coded sources to make sure you have complete coverage. That’s why it’s normally easier to think through codes before starting a coding read-through.

 

Of course there is some methodological variance here: if you are doing certain types of grounded theory this may not apply, as you will want to create themes on the fly. It’s also worth noting that good qualitative coding is an iterative process, and you should expect to go through the data several times anyway. Usually each time you do this you will look at the code structure in a different way – maybe creating a higher-level, theory-driven coding framework on each pass.

 

However, there is another way that QDA software helps you manage your qualitative themes: it is simple to merge smaller codes together under a more general heading. In Quirkos, just right-click on the code bubble you want to keep, and you will see the dialogue below:

 

merging qualitative codes in quirkos


Then select, from the drop-down list of other themes in your project, which topic you want to merge into the Quirk you selected first. That’s it! All the coded text in the second bubble will be added to the first one, and it will keep the name of that code, appended with “(merged)” so you can identify it.

 

Since it is so easy to merge topics in qualitative software, I generally suggest that people aren’t afraid to create a large number of very specific topics, knowing they can merge them together later. For example, if you are creating a code for when people are talking about eating out at a restaurant, why not start with separate codes for Fast food, Mexican, Chinese, Haute cuisine etc. – since you can always merge them later under the generic ‘Restaurant’ theme if you decide you don’t need that much detail.

 

It is also possible to retroactively split broad codes into smaller categories, but this is a much more involved process. To do this in Quirkos, I would start by taking the code you want to expand (say Restaurant) and making sure it is a top-level code – in other words, not a subcategory of another code. Then, create the codes you want to break out (for example Thai, Vegetarian, Organic) and make them subcategories of the main node. Then, double-click on the top Quirk, and you will get a list of all the text coded to the top node (Restaurant). From this view in Quirkos, you can drag and drop each quote into the relevant subcategory (e.g. Organic, Thai):


splitting qualitative codes in quirkos


Once you have gone through and recoded all the quotes into the new codes, you can either delete the quotes from the top-level code (Restaurant) one by one (by right-clicking on the highlight stripe), or remove all quotes from that node by deleting the top node entirely. If you still want to have a Restaurant Quirk at the top to contain the subcategories, just recreate it and add the subcategories to it. That way you will have a blank ‘Restaurant’ theme to keep the subcategories (Thai, Organic) together.

 

So to summarise: don’t be afraid to have too many codes in CAQDAS software – use the flexibility it gives you to experiment. While you can have too much of a good thing, the software will help you see all the coding options at once, so you can decide the best place to categorise each quote. With the ability to merge, and even split apart codes with a little effort, it’s always possible to adjust your coding framework later – in fact you should anticipate the need to do this as you refine your interpretations. You can also save your project at one stage of the coding, and go back to that point if you need to revert to an earlier state to try a different approach. For more information about large or small coding strategies, this blog post goes into more depth.


If you want to see how this works in Quirkos, just download the free trial and try for yourself. Quirkos makes operations like merge and split really easy, and the software is designed to be intuitive, visual and colourful. So give it a try, and always contact us if you have any questions or suggestions on how we can make common operations like this quicker and simpler!

 

 

Using qualitative analysis software to teach critical thought

teaching qualitative software

 

It’s a key part of the curriculum for British secondary school and American high school education to teach critical thought and analysis. It’s a vital life skill: the ability to look at who is saying what, and pick apart what is being said. I’ve been thinking about the possible role for qualitative analysis in education, and how qualitative data analysis software in particular could help develop critical analysis skills in students of all ages.

 

While using qualitative analysis software is fairly common at university level, it’s a little unusual (possibly unprecedented, at a quick glance) to use it at secondary/high-school level with pre-college students. But why is this the case? It may well be that previously the software was too complex or expensive to use in mainstream schools, especially when you consider the amount of training the teachers and educators would need.

 

However, Quirkos was designed to make qualitative analysis more accessible by being easier to learn and teach, while also reducing the cost of licences. Thus it may be a better fit for schools than previous options. But how would such an approach work, and how would it fit into an already tight curriculum?

 

First of all, the notion of critical reading and analysis is prominent as a ‘core skill’ in UK secondary and USA K-12 education. For example the UK English curriculum states that:
“Critical reading, discussing, appreciating and exploring texts is essential for learning across the curriculum”
 

In History, teachers should:
“equip pupils to ask perceptive questions, think critically, weigh evidence, sift arguments, and develop perspective and judgement… [and] understand the methods of historical enquiry, including how evidence is used rigorously to make historical claims, and discern how and why contrasting arguments and interpretations of the past have been constructed”
 

Even in the USA, the Common Core State Standards framework “stresses critical-thinking, problem-solving, and analytical skills that are required for success in college, career, and life”
 

I would argue that including qualitative analysis in a curriculum can tick many of these boxes, and provide a flexible way to integrate these skills into other lesson plans. For example, in History, students could be given a number of newspaper articles covering an important historical event. These may come from different countries or papers with different viewpoints, and using qualitative software students could perform comparative analysis, identifying sections of the text that show bias or contradict each other.

 

In an English class, students could be provided with a digital copy of a book on the reading list, and given a framework with topics to explore, encouraging them to identify metaphors, similes, or more specific issues like ‘representations of women’ or other recurring themes. If qualitative analysis software became a standard tool in schools, it could easily fit into a variety of activities, with teachers easily able to look at students’ outputs for marking and group discussion.

 

Finally, students of any age could be encouraged to do their own qualitative research project, surveying their peers or community on topics that are both timely and relevant to the curriculum. That way, children can also learn about setting research questions, bias, and presenting results, helping them better understand and critique the barrage of studies they are exposed to in the media.

 

 

The visual, colourful and interactive interface of Quirkos is very intuitive to the digital touch-screen generation: it not only looks like a game, but provides visual stimulation and feedback in the same way. Watching their bubble codes grow, and organising topics like petals in a flower, should be engaging for children of all ages, but it is also fundamentally teaching them the basics of qualitative analysis: sorting text into categories, and thinking about what different sources are saying.

 

We are talking to educators in the UK already about developing example lesson plans and curriculums around Quirkos and qualitative analysis. There are a lot of practical hurdles to overcome, including the plurality of different IT systems schools use, and the limited amount of time teachers get to learn and enact new methods.

 

But the benefits are considerable: a background in qualitative research and analysis techniques is a great transferable skill for students to take into their working life. Although there don’t seem to be many jobs outside research that make qualitative analysis experience an essential criterion, many jobs involve dealing with written text in just such a way. Few workers in office environments can get by without engaging with company or government policy documents, and in areas like HR, staff have to critically appraise (in a replicable and guided way) written documents like CVs and covering letters on a regular basis.

 

And it’s a frequent complaint from employers that these are exactly the kind of skills applicants are lacking:

“In survey after survey, they rate young applicants as deficient in such key workplace skills as written and oral communication, critical thinking and analytical reasoning.”
 

The Collegiate Learning Assessment Plus, used in the US university system, measures analytical reasoning, critical thinking, document literacy, writing and communication skills – all considered essential areas by employers from all backgrounds. A recent study found that 40% of students, even at university level, lacked proficiency in these areas.

 

Qualitative analysis requires students to develop all of these skills. Getting started at a young age will not only help high school students when they begin their academic studies, where critical reasoning becomes a daily task, but also set them on the right path to employment, and to becoming engaged and informed members of society.

 

 

In vivo coding and revealing life from the text

Ged Carrol https://www.flickr.com/photos/renaissancechambara/21768441485


Following on from the last blog post on creating weird and wonderful categories to code your qualitative data, I want to talk about an often overlooked way of creating coding topics – using direct quotes from participants to name codes or topics. This is sometimes called “in vivo” coding, from the Latin for ‘in the living’, and not to be confused with the ubiquitous qualitative analysis software NVivo, which can be used for any type of coding, not just in vivo!


In an older article I talked about having a category for ‘key quotes’ – those beautiful moments when a respondent articulates something perfectly, and you know that quote is going to appear in an article, or even be the article title. With in vivo coding, however, a researcher will create a coding category based on a key phrase or word used by a participant. For example, someone might say ‘It felt like I was hit by a bus’ to describe their shock at an event, and rather than creating a topic/node/category/Quirk for ‘shock’, the researcher will name it ‘hit by a bus’. This is especially useful when metaphors like this are commonly used, or someone uses an especially vivid turn of phrase.


In vivo coding doesn’t just apply to metaphor or emotions, and can keep researchers close to the language that respondents themselves are using. For example when talking about how their bedroom looks, someone might talk about ‘mess’, ‘chaos’, or ‘disorganised’ and their specific choice of word may be revealing about their personality and embarrassment. It can also mitigate the tendency for a researcher to impose their own discourse and meaning onto the text.


This method is discussed in more depth in Johnny Saldaña’s book, The Coding Manual for Qualitative Researchers, which also points out how a read-through of the text to create in vivo codes can be a useful process to create a summary of each source.


Ryan and Bernard (2003) use a different terminology, indigenous categories or typologies, after Patton (1990). However, here the meaning is a little different – they are looking for unusual or unfamiliar terms which respondents use in their own subculture. Good examples are the slang terms unique to a particular group, such as drug users, surfers, or the shifting vernacular of teenagers. Again, conceptualising the lives of participants in their own words can create a more accurate interpretation, especially later down the line when you are working more exclusively with the codes.


Obviously, this method is really a type of grounded theory, letting codes and theory emerge from the data. In a way, you could consider that if in vivo coding is ‘from life’ or grows from the data, then framework coding to an existing structure is more akin to ‘in vitro’ (‘in glass’), where codes are based on a more rigid interpretation of theory – just like the controlled laboratory conditions of in vitro research, producing more consistent, but less creative, results.


However, there are problems in trying to interpret the data in this way. Most obviously, how ubiquitous will an in vivo code from one source be across everyone’s transcripts? If someone describes a shocking event in one source as feeling like being ‘hit by a bus’, and in another as the ‘world dropped out from under me’, would we code the same text together? Both are clearly about ‘shock’ and would probably belong in the same theme, but does the different language require a slightly different interpretation? Wouldn’t you lose some of the nuance of the in vivo coding process if similar themes like these were lumped together?


The answer to all of these questions is probably ‘yes’. However, they are not insurmountable problems. In fact, Johnny Saldaña suggests that an in vivo coding process works best as a first reading of the data, creating not just a summary (if read in order) but a framework from each source, which should later be combined with a ‘higher’ level of second coding across all the data. So after completing in vivo coding, the researcher can go back and create grouped coding categories based around common elements (like shock) and/or conceptual, theory-level codes (like long-term psychological effects) which resonate across all the sources.


This sounds like it would be a very time-consuming process, but in fact multi-level coding (which I often advocate) can be very efficient, especially with in vivo coding as the first pass. It may be that you just highlight some of these key words, on paper or in Word, or create a series of columns in Excel adjacent to each sentence or paragraph of source material. Since the researcher doesn’t have to ponder the best word or phrase to describe the category at this stage, creating the coding framework is quick. It’s also a great process for participatory analysis, since respondents can quickly engage with selecting juicy morsels of text.


Don’t forget, you don’t have to use an exclusively in vivo coding framework: just remember that it’s an option, and use it for key illuminating quotes alongside your other codes. Again, there is no one-size-fits-all approach for qualitative analysis, but knowing the range of methods allows you to choose the best way forward for each research question or project.


CAQDAS/QDA software makes it easy to keep all the different stages of your coding process together, and also to create new topics by splitting and merging existing codes. While the procedure will vary a little across the different qualitative analysis packages, the basics are very similar, so I’ll give a quick example of how you might do this in Quirkos.


Not a lot of people know this, but you can create a new Quirk/topic in Quirkos by dropping a section of text directly onto the ‘create new bubble’ button, so this is a good way to create a lot of themes on the fly (as with in vivo coding). Just name these according to the in vivo phrase, and make sure that you highlight the whole section of relevant text for coding, so that you can easily see the context and what the participant is talking about.


Once you have done a full (or partial) reading and coding of your qualitative data, you can work with these codes in several ways. Perhaps the easiest is to create an umbrella (or parent) code (like shock), to which you can make relevant in vivo codes subcategories, just by dragging and dropping them onto the top node. Now, when you double-click on the main node, you will see quotes from all the in vivo subcategories in one place.

 

qualitative research software - quirkos

 

It’s also possible to use the Levels feature in Quirkos to group your codes: this is especially useful when you might want to put an in vivo code into more than one higher level group. For example, the ‘hit by a bus’ code might belong in ‘shock’ but also a separate category called ‘metaphors’. You can create levels from the Quirk Properties dialogue of any Quirk, assign codes to one or more of these levels, and explore them using the query view. See this blog post for more on how to use levels in Quirkos.


It’s also possible to save a snapshot of your project at any point, and then actually merge codes together to keep them all under the same Quirk. You will lose most of the original in vivo codes this way (which is why the other options are usually better), but if you find yourself dealing with too many codes, or want to create a neat report based on a few key concepts, this can be a good way to go. Just right-click on the Quirk you want to keep, and select ‘Merge Quirk with...’ to choose another topic to be absorbed into it. Don’t forget all actions in Quirkos have Undo and Redo options!


We don’t have an example dataset coded using in vivo quotes, but if you look at some of the sources from our Scottish Independence research project, you will see some great comments about politics and politicians that leap off the page and would work well for in vivo coding. So why not try it out, and give in vivo coding a whirl with the free trial of Quirkos: affordable, flexible qualitative software that makes coding with all these different approaches a breeze!

 

 

Turning qualitative coding on its head

CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=248747


For the first time in ages I attended a workshop on qualitative methods, run by the wonderful Johnny Saldaña. Developing software has become a full-time (and then some) occupation for me, which means I have little scope for my own professional development as a qualitative researcher. This session was not only a welcome change, but also an eye-opening critique of the way that many in the room (myself included) approach coding qualitative data.

 

Professor Saldaña has written an excellent Coding Manual for Qualitative Researchers, and the workshop really brought to life some of the lessons and techniques in the book. Fundamental to all the approaches was a direct challenge to researchers doing qualitative coding: code different.

 

Like many researchers, I am guilty of treating coding as a reductive, mechanical exercise. My codes tend to be very basic and descriptive – what is often called index coding. My codes are often a summary word of what the sentence or section of text is literally about. From this, I will later take a more ‘grand-stand’ view of the text and codes themselves, looking at connections between themes to create categories that are closer to theory and insight.

 

However, Professor Saldaña gave (by my count) at least 12 different coding frameworks and strategies that were completely new to me. While I am not going to go into them all here (that’s what the textbook, courses and the companion website are for!), it was not one particular strategy that stuck with me, but the diversity of approaches.

 

It’s easy when you start out with qualitative data analysis to try a simple strategy – after all it can be a time consuming and daunting conceptual process. And when you have worked with a particular approach for many years (and are surrounded by colleagues who have a similar outlook) it is difficult to challenge yourself. But as I have said before, to prevent coding being merely a reductive and descriptive act, it needs to be continuous and iterative. To truly be analysis and interrogate not just the data, but the researcher’s conceptualisation of the data, it must challenge and encourage different readings of the data.

 

For example, Professor Saldaña actually has a background in performance and theatre, and brings some common approaches from that sphere to the coding process: exactly the kind of cross-disciplinary inspiration I love! When an actor or actress is approaching a scene or character, they may engage with the script (which is much like a qualitative transcript) looking at the character's objectives, conflicts, tactics, attitudes, emotions and subtexts. The question is: what is the character trying to do or communicate, and how? This sort of actor-centred approach works really well in qualitative analysis, in which people, narratives and stories are often central to the research question.

 

So if you have an interview with someone, for example on their experience with the adoption process, imagine you are a writer dissecting the motivations of a character in a novel. What are they trying to do? Justify how they would be a good parent (objectives)? Ok, so how are they doing this (tactics)? And what does this reveal about their attitudes and emotions? Is there a subtext here – are they hurt because of a previous rejection?

 

Other techniques stressed the importance of creating codes based around emotions, participants’ values, or even actions: for example, can you make all your codes gerunds (words that end in -ing)? While there was a distinct message that researchers can mix and match these different coding categories, it felt to me like a really good challenge to try and view the whole data set from one particular viewpoint (for example conflicts), and then step to one side and look again with a different lens.

 

It’s a little like trying to understand a piece of contemporary sculpture: you need to see it up close, far away, and then from different angles to appreciate the different forms and meaning. Looking at qualitative data can be similar – sometimes the whole picture looks so abstract or baffling that you have to dissect it in different ways. But often the simplest methods of analysis are not going to provide real insight. Analysing a Henry Moore sculpture by the simplest categories (what is the material, size, setting?) may not give much more understanding. Cutting up a work into sections like head, torso or leg does little to explore the overall intention or meaning. And certain data or research questions suit particular analytical approaches. If a sculpture is purely abstract, it is not useful to try and look for aspects of the human form – even if the eye is constantly looking for such associations.

 

Here, context is everything. Can you get a sense of what the artist wanted to say? Was it an emotion, a political statement, a subtle treatise on conventional beauty? And much like with an impressionist painting, sometimes too close a reading stops the viewer from seeing the picture for the brush strokes.

 

Another talk I went to, on how researchers use qualitative analysis software, noted that some users assumed the software and the coding process were either a replacement for, or a better activity than, a close reading of the text. While I don’t think that coding qualitative data can ever replace a detailed reading or familiarity with the source text, coding exercises can help you read in different ways, and hence allow new interpretations to come to light. Use them to read your data sideways, backwards, and through someone else’s eyes.

 

But software can help manage and make sense of these different readings. If you have different coding categories from different interpretations, you can store these together, and use different bits from each interpretation. It can also make it easier to experiment, and to look at different stages of the process at any time. In Quirkos you can use the Levels feature to group different categories of coding together, and look at any one (or several) of those lenses at a time.

 

Whatever approach you take to coding, try to really challenge yourself, so that you are forced to categorise, and thus interpret, the data in different ways. And don't be surprised if the first approach isn't the one that reveals the most insight!

 

There is a lot more on our blog about coding, for example populating a coding framework and coding your codes. There will also be more articles on coding qualitative data to come, so make sure to follow us on Twitter, and if you are looking for simple, unobtrusive software for qualitative analysis check out Quirkos!

 

7 things we learned from ICQI 2016

ICQI conference - image from Ariescwliang

 

I was lucky enough to attend the ICQI 2016 conference last week in Champaign at the University of Illinois. We managed to speak to a lot of people about using Quirkos, but there were hundreds of other talks, and here are some pointers from just a few of them!

 

 

1. Qualitative research is like being at high school
Johnny Saldaña’s keynote described (with cutting accuracy) the research cliques that people tend to stick to. It's important for us to try and think outside these methodological or topic boxes, and learn from other people doing things in different ways. With so many varied sessions and hundreds of talks, conferences like ICQI 2016 are great places to do this.

 

We were also treated to clips from high school movies, and our own Qualitative High School song! The Digital Tools thread got their own theme song: a list of all the different qualitative analysis software packages sung to the tune of ‘ABC’ - the nursery rhyme, not the Jackson 5 hit!

 

 

2. There is a definite theoretical trend
The conference featured lots of talks on Butler and Foucault, but not one explicitly on Derrida! A philosophical bias perhaps? I’m always interested in the different philosophies drawn on in North American, British and Continental debates…

 

 

3. Qualitative research balances a divide between chaos and order
Maggie MacLure gave an intriguing keynote about how qualitative research needs to balance the intoxicating chaos and experimentation of Dionysus with the order and clarity of Apollo (channelling Deleuze). She argued that we must resist the tendency of method-driven and neo-liberally positioned research to push for conformity, and go further in advocating for real ontological change. She also said that researchers should do more to challenge the primacy of language: surely why we need a little Derrida here and there?!

 

 

4. We should embrace doubt and uncertainty
This was definitely something that Maggie MacLure's keynote touched on, but a session chaired by Charles Vander talked about uncertainty in the coding process, and how this can be difficult (but ultimately beneficial). Referencing Locke, Golden-Biddle and Feldman (2008), Charles talked about the need to embrace not knowing, nurture hurdles and disrupt order (while also engaging with the world and connecting with struggle). It's important for students that allowing doubt and uncertainty doesn't lead to fear – a difficult thing when there are set deadlines and things aren’t going the right way, and this is true even for most academics! We need to teach that qualitative analysis is not a fixed linear process; experimentation and failure are a key part of it. Kathy Charmaz echoed this while talking about grounded theory, and noted that ‘coding should be magical, not just mechanical’.

 


5. We should challenge ourselves to think about codes and coding in completely different ways

Johnny Saldaña's coding workshop (which follows on from his excellent textbook) gave examples of the incredible variety of coding categories one can create. Rather than creating merely descriptive index codes, try to get to the actions and motivations in the text. Create code lists based around actions, emotions, conflicts or even dramaturgical concepts, in which you explore the motivations and tactics of those in your research data. More to follow on this...

 

 

6. We still have a lot to learn about how researchers use qualitative software
Two great talks, from Eli Lieber and from NYU/CUNY, took the wonderful meta-step of doing qualitative (and mixed method) analysis on qualitative researchers, to see how they used qualitative software and what they wanted to do with it.

Katherine Gregory and Sarah DeMott looked at responses from hundreds of users of QDA software, and found a strong preference for getting to outputs as soon as possible, and saw people using qualitative data in very quantitative ways. Eli Lieber from Dedoose looked at what he called ‘Research and Evaluation Data Analysis Software’ and saw from 355 QDA users that there was a risk of playing with data rather than really learning from it, and that many were using coding in software as a replacement for deep reading of the data.


There was also a lot of talk about the digital humanities movement, with some great insight from Harriett Green on how this shift looks for librarians and curators of data, and how researchers want to connect with and explore diverse digital archives.

 


7. Qualitative research still feels like a fringe activity
The ‘march’ of neo-liberalism was a pervasive conference theme, and there were a lot of discussions around the marginalised place of qualitative research in academia. We heard stories of qualitative modules being removed or made online only, problems with getting papers accepted in mainstream journals, and a lack of engagement from evidence users and policy makers. Conferences like this are essential for reinforcing connections between researchers working all over the world, but there is clearly still a need for a lot of outreach to advance the position of qualitative research in the field.

 

 

There are dozens more fascinating talks I could draw from, but these are just a few highlights from my own badly scribbled notes. It was wonderful to meet so many dedicated researchers, working on so many conceptual and social issues, and it always challenges me to think how Quirkos can better meet the needs of such a disparate group of users. So don’t forget to download the trial, and give us more feedback!

 

You should also connect with the Digital Tools for Qualitative Research group, which organised one of the conference Special Interest Groups and runs many more activities and learning events across the year. Hope to see many more of you next year!

 

Workshop exercises for participatory qualitative analysis

participatory workshop

I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in this blog post from last year I describe how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month, so wanted to build on it a little here.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time-consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel to the full-time, paid job of the researcher!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for our research project on the Scottish referendum, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to learn that it could be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that such a software package could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small, realistic task that would be a meaningful part of the analysis. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and most scary, scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework and coding the data. This is probably the most time-consuming and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on coding the data explicitly, but on considering the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. Here the process is like grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily utilise the developed framework for coding later. It could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific subcategories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which directions topics should be explored in detail (say Expensive food, Lack of open space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed, and the focus of the session is to look at coded transcripts (on screen or printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach is used, and how far the participatory process is taken, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way will challenge the assumptions of the research team, lead to better results, and produce more relevant and impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report from the Scottish Referendum Project are here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Quirkos version 1.4 is here!

quirkos version 1.4

It’s been a long time coming, but the latest version of Quirkos is now available, and as always it’s a free update for everyone, released simultaneously on Mac, Windows and Linux with all the new goodies!


The focus of this update has been speed. You won’t see a lot of visible differences in the software, but behind the scenes we have rewritten a lot of Quirkos to make sure it copes better with large qualitative sources and projects, and is much more responsive to use. This has been a much requested improvement, and thanks to all our intrepid beta testers for ensuring it all works smoothly.


In the new version, long coded sources now load in around 1/10th of the time! Search results and hierarchy views load much quicker! Large canvas views display quicker! All this adds up to give a much more snappy and responsive experience, especially when working with large projects. This takes Quirkos to a new professional level, while retaining the engaging and addictive data coding interface.


In addition we have made a few small improvements suggested by users, including:


• Search criteria can be refined or expanded with AND/OR operands
• Reports now include a summary section of your Quirks/codes
• Ability to search source names to quickly find sources
• Searches now display the total number of results
• Direct link to the full manual

 

There are also many bug fixes! Including:
• Password protected files can now be opened across Windows, Mac and Linux
• Fix for importing PDFs which created broken Word exports
• Better and faster CSV import
• Faster Quirk merge operations
• Faster keyword search in password protected files

 

However, we have had to change the .qrk file format so that password protected files can open on any operating system. This means that projects opened or created in version 1.4 cannot be opened in older versions of Quirkos (v1.3.2 and earlier).


I know how annoying this is, but there should be no reason for people to keep using older versions: we make the updates free so that everyone is using the same version. Just make sure everyone in your team updates!

 

When you first open a project file from an older version of Quirkos in 1.4, it will automatically convert it to the new file format, and save a backup copy of the old file. Most users will not notice any difference, and you can obviously keep working with your existing project files. But if you want to share your files with other Quirkos users, make sure they also have upgraded to the latest version, or they will get an error message trying to open a file from version 1.4.

 

All you need to do to get the new version is download it from our website (www.quirkos.com/get.html) and install it to the same location as the old Quirkos. Get going, and let us know if you have any suggestions or feedback! You could see your requests appear in version 1.5!

 

Top 10 qualitative research blog posts

top 10 qualitative blog articles

We've now got more than 70 posts on the official Quirkos blog, on lots of different aspects of qualitative research and using Quirkos in different fields. But it's now getting a bit difficult to navigate, so I wanted to do a quick recap with the 10 most popular articles, based on the number of hits over the last two years.

 

Tools for critical appraisal of qualitative research

A review of tools that can be used to assess the quality of qualitative research.

 

Transcription for qualitative research

The first in a series of posts about transcribing qualitative research, breaking down the process and costs.

 

10 tips for recording good qualitative audio

Some tips for recording interviews and focus-groups for good quality transcription

 

10 tips for semi-structured qualitative interviewing

Some advice to help researchers conduct good interviews, and what to plan for in advance

 

Sampling issues in qualitative research

Issues to consider when sampling, and later recruiting participants in qualitative studies

 

Developing an interview guide for semi-structured interviews

The importance of having a guide to facilitate in-depth qualitative interviews

 

Transcribing your own qualitative data

Last in the transcription trifecta, tips for making transcription a bit easier if you have to do it yourself

 

Participant diaries for qualitative research

Some different approaches to self-report and experience sampling in qualitative research

 

Recruitment for qualitative research

Factors to consider when trying to get participants for qualitative research

 

Engaging qualitative research with a quantitative audience

The importance of packaging and presenting qualitative research in ways that can be understood by quantitative-focused policy makers and journal editors

 

There are a lot more themes to explore on the blog, including posts on how to use CAQDAS software, and doing your qualitative analysis in Quirkos, the most colourful and intuitive way to explore your qualitative research.