In vivo coding and revealing life from the text

Ged Carrol https://www.flickr.com/photos/renaissancechambara/21768441485


Following on from the last blog post on creating weird and wonderful categories to code your qualitative data, I want to talk about an often overlooked way of creating coding topics – using direct quotes from participants to name codes or topics. This is sometimes called "in vivo" coding, from the Latin for 'in life', and not to be confused with the ubiquitous qualitative analysis software 'NVivo', which can be used for any type of coding, not just in vivo!


In an older article I talked about having a category for 'key quotes' – those beautiful moments when a respondent articulates something perfectly, and you know that quote is going to appear in an article, or even be the article title. With in vivo coding, however, a researcher creates a coding category based on a key phrase or word used by a participant. For example, someone might say 'It felt like I was hit by a bus' to describe their shock at an event, and rather than creating a topic/node/category/Quirk for 'shock', the researcher will name it 'hit by a bus'. This is especially useful when metaphors like this are commonly used, or someone uses an especially vivid turn of phrase.


In vivo coding doesn't just apply to metaphors or emotions; it can also keep researchers close to the language that respondents themselves are using. For example, when talking about how their bedroom looks, someone might talk about 'mess', 'chaos', or being 'disorganised', and their specific choice of word may be revealing about their personality and embarrassment. In addition, it can mitigate the tendency for a researcher to impose their own discourse and meaning onto the text.


This method is discussed in more depth in Johnny Saldaña’s book, The Coding Manual for Qualitative Researchers, which also points out how a read-through of the text to create in vivo codes can be a useful process to create a summary of each source.


Ryan and Bernard (2003) use a different terminology, indigenous categories or typologies, after Patton (1990). However, here the meaning is a little different – they are looking for unusual or unfamiliar terms which respondents use in their own subculture. Good examples of these are slang terms unique to a particular group, such as drug users, surfers, or the shifting vernacular of teenagers. Again, conceptualising the lives of participants in their own words can create a more accurate interpretation, especially later down the line when you are working more exclusively with the codes.


Obviously, this method is really a form of grounded theory, letting codes and theory emerge from the data. In a way, if in vivo coding is 'from life' and grows from the data, then framework coding to an existing structure is more akin to 'in vitro' (literally 'in glass'), where codes are based on a more rigid interpretation of theory – much like the controlled laboratory conditions of in vitro research, producing more consistent, but less creative, results.


However, there are problems in trying to interpret the data in this way. Most obviously, how ubiquitous will an in vivo code from one source be across everyone's transcripts? If someone in one source describes a shocking event as feeling like being 'hit by a bus', and someone in another says their 'world dropped out from under me', would we code the same text together? Both are clearly about 'shock' and would probably belong in the same theme, but does the different language require a slightly different interpretation? Wouldn't you lose some of the nuance of the in vivo coding process if similar themes like these were lumped together?


The answer to all of these questions is probably 'yes'. However, they are not insurmountable problems. In fact, Johnny Saldaña suggests that an in vivo coding process works best as a first reading of the data, creating not just a summary (if read in order) but a framework from each source, which should later be combined with a 'higher' second level of coding across all the data. So after completing in vivo coding, the researcher can go back and create grouped coding categories based around common elements (like shock) and/or conceptual, theory-level codes (like long-term psychological effects) which resonate across all the sources.


This sounds like it would be a very time-consuming process, but in fact multi-level coding (which I often advocate) can be very efficient, especially with in vivo coding as the first pass. It may be that you just highlight some of these key words, on paper or in Word, or create a series of columns in Excel adjacent to each sentence or paragraph of source material. Since the researcher doesn't have to ponder the best word or phrase to describe each category at this stage, creating the coding framework is quick. It's also a great process for participatory analysis, since respondents can quickly engage with selecting juicy morsels of text.


Don’t forget, you don’t have to use an exclusively in vivo coding framework: just remember that it’s an option, and use it for key illuminating quotes alongside your other codes. Again, there is no one-size-fits-all approach for qualitative analysis, but knowing the range of methods allows you to choose the best way forward for each research question or project.


CAQDAS/QDA software makes it easy to keep all the different stages of your coding process together, and also to create new topics by splitting and merging existing codes. While the procedure will vary a little across the different qualitative analysis packages, the basics are very similar, so I’ll give a quick example of how you might do this in Quirkos.


Not a lot of people know this, but you can create a new Quirk/topic in Quirkos by dropping a section of text directly onto the 'create new bubble' button, so this is a good way to create a lot of themes on the fly (as with in vivo coding). Just name these according to the in vivo phrase, and make sure that you highlight the whole section of relevant text for coding, so that you can easily see the context and what the respondent is talking about.


Once you have done a full (or partial) reading and coding of your qualitative data, you can work with these codes in several ways. Perhaps the easiest is to create an umbrella (or parent) code (like shock) and make the relevant in vivo codes subcategories, just by dragging and dropping them onto the top node. Now, when you double-click on the main node, you will see quotes from all the in vivo subcategories in one place.

 


 

It’s also possible to use the Levels feature in Quirkos to group your codes: this is especially useful when you want to put an in vivo code into more than one higher-level group. For example, the ‘hit by a bus’ code might belong in ‘shock’ but also in a separate category called ‘metaphors’. You can create levels from the Quirk Properties dialogue of any Quirk, assign codes to one or more of these levels, and explore them using the query view. See this blog post for more on how to use levels in Quirkos.


It’s also possible to save a snapshot of your project at any point, and then actually merge codes together to keep them all under the same Quirk. You will lose most of the original in vivo codes this way (which is why the other options are usually better), but if you find yourself dealing with too many codes, or want to create a neat report based on a few key concepts, this can be a good way to go. Just right-click on the Quirk you want to keep, and select ‘Merge Quirk with...’ to choose another topic to be absorbed into it. Don’t forget all actions in Quirkos have Undo and Redo options!


We don’t have an example dataset coded using in vivo quotes, but if you look at some of the sources from our Scottish Independence research project, you will see some great comments about politics and politicians that leap off the page and would work well for in vivo coding. So why not give in vivo coding a whirl with the free trial of Quirkos: affordable, flexible qualitative software that makes coding with all these different approaches a breeze!

 

 

Turning qualitative coding on its head

CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=248747


For the first time in ages I attended a workshop on qualitative methods, run by the wonderful Johnny Saldaña. Developing software has become a full-time (and then some) occupation for me, which means I have little scope for my own professional development as a qualitative researcher. This session was not only a welcome change, but also an eye-opening critique of the way that many in the room (myself included) approach coding qualitative data.

 

Professor Saldaña has written an excellent Coding Manual for Qualitative Researchers, and the workshop really brought to life some of the lessons and techniques in the book. Fundamental to all the approaches was a direct challenge to researchers doing qualitative coding: code different.

 

Like many researchers, I am guilty of treating coding as a reductive, mechanical exercise. My codes tend to be very basic and descriptive – what is often called index coding – often just a summary word for what the sentence or section of text is literally about. From this, I will later take a more ‘grand-stand’ view of the text and the codes themselves, looking at connections between themes to create categories that are closer to theory and insight.

 

However, Professor Saldaña presented (by my count) at least 12 different coding frameworks and strategies that were completely new to me. While I am not going to go into them all here (that’s what the textbook, courses and the companion website are for!), it was not one particular strategy that stuck with me, but the diversity of approaches.

 

It’s easy when you start out with qualitative data analysis to stick to a simple strategy – after all, it can be a time-consuming and daunting conceptual process. And when you have worked with a particular approach for many years (and are surrounded by colleagues who have a similar outlook), it is difficult to challenge yourself. But as I have said before, to prevent coding being merely a reductive and descriptive act, it needs to be continuous and iterative. To truly be analysis – to interrogate not just the data, but the researcher’s conceptualisation of the data – it must challenge and encourage different readings of the data.

 

For example, Professor Saldaña actually has a background in performance and theatre, and brings some common approaches from that sphere to the coding process: exactly the kind of cross-disciplinary inspiration I love! When an actor or actress is approaching a scene or character, they may engage with the script (which is much like a qualitative transcript) looking at the character's objectives, conflicts, tactics, attitudes, emotions and subtexts. The question is: what is the character trying to do or communicate, and how? This sort of actor-centred approach works really well in qualitative analysis, in which people, narratives and stories are often central to the research question.

 

So if you have an interview with someone, for example on their experience with the adoption process, imagine you are a writer dissecting the motivations of a character in a novel. What are they trying to do? Justify how they would be a good parent (objectives)? Ok, so how are they doing this (tactics)? And what does this reveal about their attitudes and emotions? Is there a subtext here – are they hurt because of a previous rejection?

 

Other techniques centred on the importance of creating codes based around emotions, participants’ values, or even actions: for example, can you make all your codes gerunds (verb forms ending in –ing)? While there was a distinct message that researchers can mix and match these different coding categories, it felt like a really good challenge to try to view the whole data set from one particular viewpoint (for example, conflicts) and then step to one side and look again with a different lens.

 

It’s a little like trying to understand a piece of contemporary sculpture: you need to see it up close, far away, and then from different angles to appreciate the different forms and meanings. Looking at qualitative data can be similar – sometimes the whole picture looks so abstract or baffling that you have to dissect it in different ways. And often the simplest methods of analysis are not going to provide real insight. Analysing a Henry Moore sculpture by the simplest categories (material, size, setting) may not give much more understanding. Cutting a work up into sections like head, torso or leg does little to explore the overall intention or meaning. And certain data or research questions suit particular analytical approaches: if a sculpture is purely abstract, it is not useful to look for aspects of the human form – even if the eye is constantly searching for such associations.

 

Here, context is everything. Can you get a sense of what the artist wanted to say? Was it an emotion, a political statement, a subtle treatise on conventional beauty? And much like an impressionist painting, sometimes a very close reading stops the viewer from seeing the forest for the brush strokes.

 

Another talk I attended, on how researchers use qualitative analysis software, noted that some users assumed the software and the coding process were a replacement for – or even better than – a close reading of the text. While I don’t think that coding qualitative data can ever replace a detailed reading of, and familiarity with, the source text, coding exercises can help you read in different ways, and hence allow new interpretations to come to light. Use them to read your data sideways, backwards, and through someone else’s eyes.

 

But software can help manage and make sense of these different readings. If you have different coding categories from different interpretations, you can store these together, and use different parts of each interpretation. Software can also make it easier to experiment, and to look back at different stages of the process at any time. In Quirkos you can use the Levels feature to group different categories of coding together, and look through any one (or several) of those lenses at a time.

 

Whatever approach you take to coding, try to really challenge yourself, so that you are forced to categorise – and thus interpret – the data in different ways. And don't be surprised if the first approach isn't the one that reveals the most insight!

 

There is a lot more on our blog about coding, for example populating a coding framework and coding your codes. There will also be more articles on coding qualitative data to come, so make sure to follow us on Twitter, and if you are looking for simple, unobtrusive software for qualitative analysis check out Quirkos!

 

7 things we learned from ICQI 2016

ICQI conference - image from Ariescwliang

 

I was lucky enough to attend the ICQI 2016 conference last week in Champaign at the University of Illinois. We managed to speak to a lot of people about using Quirkos, but there were hundreds of other talks, and here are some pointers from just a few of them!

 

 

1. Qualitative research is like being at high school
Johnny Saldaña’s keynote described (with cutting accuracy) the research cliques that people tend to stick to. It's important for us to try and think outside these methodological or topic boxes, and learn from other people doing things in different ways. With so many varied sessions and hundreds of talks, conferences like ICQI 2016 are great places to do this.

 

We were also treated to clips from high school movies, and our own Qualitative High School song! The Digital Tools thread got their own theme song: a list of all the different qualitative analysis software packages sung to the tune of ‘ABC’ - the nursery rhyme, not the Jackson 5 hit!

 

 

2. There is a definite theoretical trend
The conference featured lots of talks on Butler and Foucault, but not one explicitly on Derrida! A philosophical bias perhaps? I’m always interested in the different philosophies drawn on in North American, British and Continental debates…

 

 

3. Qualitative research balances a divide between chaos and order
Maggie MacLure gave an intriguing keynote about how qualitative research needs to balance the intoxicating chaos and experimentation of Dionysus with the order and clarity of Apollo (channelling Deleuze). She argued that we must resist the tendency of method, and of neo-liberally positioned research, to push for conformity, and go further in advocating for real ontological change. She also said that researchers should do more to challenge the primacy of language: surely a reason we need a little Derrida here and there?!

 

 

4. We should embrace doubt and uncertainty
This was definitely something that Maggie MacLure's keynote touched on, but a session chaired by Charles Vander talked about uncertainty in the coding process, and how this can be difficult (but ultimately beneficial). Referencing Locke, Golden-Biddle and Feldman (2008), Charles talked about the need to 'embrace not knowing', 'nurture hurdles' and 'disrupt order' (while also engaging with the world and connecting with struggle). It's important that, for students, allowing doubt and uncertainty doesn't lead to fear – a difficult thing when there are set deadlines and things aren't going the right way, and just as true for most academics! We need to teach that qualitative analysis is not a fixed linear process: experimentation and failure are a key part of it. Kathy Charmaz echoed this while talking about grounded theory, and noted that 'coding should be magical, not just mechanical'.

 


5. We should challenge ourselves to think about codes and coding in completely different ways

Johnny Saldaña's coding workshop (which follows on from his excellent textbook) gave examples of the incredible variety of coding categories one can create. Rather than creating merely descriptive index codes, try to get at the actions and motivations in the text. Create code lists based around actions, emotions, conflicts or even dramaturgical concepts, in which you explore the motivations and tactics of those in your research data. More to follow on this...

 

 

6. We still have a lot to learn about how researchers use qualitative software
Two great talks, from Eli Lieber and from NYU/CUNY, took the wonderful meta-step of doing qualitative (and mixed-method) analysis on qualitative researchers, to see how they used qualitative software and what they wanted to do with it.

Katherine Gregory and Sarah DeMott looked at responses from hundreds of users of QDA software, and found a strong preference for getting to outputs as soon as possible, and saw people using qualitative data in very quantitative ways. Eli Lieber from Dedoose looked at what he called ‘Research and Evaluation Data Analysis Software’ and saw from 355 QDA users that there was a risk of playing with data rather than really learning from it, and that many were using coding in software as a replacement for deep reading of the data.


There was also a lot of talk about the digital humanities movement, with some great insight from Harriett Green on how this shift looks for librarians and curators of data, and how researchers want to connect with and explore diverse digital archives.

 


7. Qualitative research still feels like a fringe activity
The ‘march’ of neo-liberalism was a pervasive conference theme, and there were many discussions about the marginalised place of qualitative research in academia. We heard stories of qualitative modules being removed or made online-only, problems getting papers accepted in mainstream journals, and a lack of engagement from evidence users and policy makers. Conferences like this are essential for reinforcing connections between researchers working all over the world, but there is clearly still a need for a lot of outreach to advance the position of qualitative research in the field.

 

 

There are dozens more fascinating talks I could draw from, but these are just a few highlights from my own badly scribbled notes. It was wonderful to meet so many dedicated researchers working on so many conceptual and social issues, and it always challenges me to think how Quirkos can better meet the needs of such a disparate group of users. So don’t forget to download the trial, and give us more feedback!

 

You should also connect with the Digital Tools for Qualitative Research group, which organised one of the conference Special Interest Groups, and runs many more activities and learning events across the year. Hope to see many more of you next year!

 

Workshop exercises for participatory qualitative analysis


I am really interested in engaging participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data, and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training. In this blog post from last year I describe how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems when you want respondents to engage at this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time-consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel with the full-time, paid job of the researcher!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for our Scottish Referendum research project, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that the software could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and most scary, scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework and coding the data. This is probably the most time-consuming and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be given the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on coding the data explicitly, but on considering the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process resembles grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily use the developed framework for coding later. This could take several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task is to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific subcategories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which directions topics should be explored in detail (say, Expensive Food or Lack of Open Space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed; the focus of the session is to look at coded transcripts (on screen or in printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach to use, and how far to take the participatory process, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the analysis process in some way will challenge the assumptions of the research team, lead to better results, and produce more relevant and impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Quirkos version 1.4 is here!


It’s been a long time coming, but the latest version of Quirkos is now available, and as always it’s a free update for everyone, released simultaneously on Mac, Windows and Linux with all the new goodies!


The focus of this update has been speed. You won’t see a lot of visible differences in the software, but behind the scenes we have rewritten a lot of Quirkos to make sure it copes better with large qualitative sources and projects, and is much more responsive to use. This has been a much requested improvement, and thanks to all our intrepid beta testers for ensuring it all works smoothly.


In the new version, long coded sources now load in around 1/10th of the time! Search results and hierarchy views load much quicker! Large canvas views display faster! All this adds up to a much snappier and more responsive experience, especially when working with large projects. This takes Quirkos to a new professional level, while retaining the engaging and addictive data coding interface.


In addition we have made a few small improvements suggested by users, including:


• Search criteria can be refined or expanded with AND/OR operators
• Reports now include a summary section of your Quirks/codes
• Ability to search source names to quickly find sources
• Searches now display the total number of results
• Direct link to the full manual

 

There are also many bug fixes! Including:
• Password protected files can now be opened across Windows, Mac and Linux
• Fix for importing PDFs which created broken Word exports
• Better and faster CSV import
• Faster Quirk merge operations
• Faster keyword search in password protected files

 

However, we have had to change the .qrk file format so that password protected files can open on any operating system. This means that projects opened or created in version 1.4 cannot be opened in older versions of Quirkos (v1.3.2 and earlier).


I know how annoying this is, but there should be no reason for people to keep using older versions: we make the updates free so that everyone is using the same version. Just make sure everyone in your team updates!

 

When you first open a project file from an older version of Quirkos in 1.4, it will automatically convert it to the new file format, and save a backup copy of the old file. Most users will not notice any difference, and you can obviously keep working with your existing project files. But if you want to share your files with other Quirkos users, make sure they also have upgraded to the latest version, or they will get an error message trying to open a file from version 1.4.

 

All you need to do to get the new version is download and install from our website (www.quirkos.com/get.html) and install to the same location as the old Quirkos. Get going, and let us know if you have any suggestions or feedback! You could see your requests appear in version 1.5!

 

Top 10 qualitative research blog posts


We've now got more than 70 posts on the official Quirkos blog, on lots of different aspects of qualitative research and using Quirkos in different fields. But it's now getting a bit difficult to navigate, so I wanted to do a quick recap with the 10 most popular articles, based on the number of hits over the last two years.

 

Tools for critical appraisal of qualitative research

A review of tools that can be used to assess the quality of qualitative research.

 

Transcription for qualitative research

The first on a series of posts about transcribing qualitative research, breaking open the process and costs.

 

10 tips for recording good qualitative audio

Some tips for recording interviews and focus-groups for good quality transcription

 

10 tips for semi-structured qualitative interviewing

Some advice to help researchers conduct good interviews, and what to plan for in advance

 

Sampling issues in qualitative research

Issues to consider when sampling, and later recruiting participants in qualitative studies

 

Developing an interview guide for semi-structured interviews

The importance of having a guide to facilitate in-depth qualitative interviews

 

Transcribing your own qualitative data

Last on the transcription trifecta, tips for making transcription a bit easier if you have to do it yourself

 

Participant diaries for qualitative research

Some different approaches to self-report and experience sampling in qualitative research

 

Recruitment for qualitative research

Factors to consider when trying to get participants for qualitative research

 

Engaging qualitative research with a quantitative audience

The importance of packaging and presenting qualitative research in ways that can be understood by quantitative-focused policy makers and journal editors

 

There are a lot more themes to explore on the blog, including posts on how to use CAQDAS software, and doing your qualitative analysis in Quirkos, the most colourful and intuitive way to explore your qualitative research.

 

 

Participant diaries for qualitative research


 

I’ve written a little about this before, but I really love participant diaries!


In qualitative research, you are often trying to understand the lives, experiences and motivations of other people. Through methods like interviews and focus groups, you can get a one-off insight into people’s own descriptions of themselves. If you want to measure change over a period, you need to schedule a series of meetings, each of which will be limited by what a participant will recall and share.


However, using diary methodologies, you can get a longer and much more regular insight into lived experiences, plus you also change the researcher-participant power dynamic. Interviews and focus groups can sometimes be a bit of an interrogation, with the researcher asking questions, and participants given the role of answering. With diaries, participants can have more autonomy to share what they want, as well as where and when (Meth 2003).


These techniques are also called self-report or ‘contemporaneous assessment’ methods, but there are actually a lot of different ways you can collect diary entries. There are some great reviews of diary-based methods (e.g. Bolger et al. 2003), but let’s look at some of the different approaches.


The most obvious is to give people a little journal or exercise book to write in, and ask them to record, on a regular basis, any aspects of their day that are relevant to your research topic. If they are expected to make notes on the go, make it a slim, pocket-sized book. If they are going to write a more traditional diary at the end of each day, offer a nicer exercise book to work in. I’ve actually found that people end up getting quite attached to their diaries, and will often ask for them back. So make sure you have some way to copy or transcribe them, and consider offering to return them once you have examined them – or you could give back a copy if you wish to keep hold of the real thing.

 

You can also do voice diaries – something I tried in Botswana. We were initially worried that literacy levels in rural areas would mean that participants would be either unable or reluctant to create written entries. So I offered everyone a small voice recorder, on which they could record spoken notes that we would transcribe at the end of the session. While you could give a group of people an inexpensive (~£20) Dictaphone, I actually bought a bunch of cheap no-brand MP3 players which cost only ~£5 each, had a built-in voice recorder and headphones, and could run on a single AAA battery (easy to find in local shops, since few respondents had electricity for recharging). The audio quality was not great, but perfectly adequate. People really liked these because they could also play music (and had a radio), and they were cheap enough to be lost, or left as thank-you gifts at the end of the research.

 

There is also a large literature on ‘experience sampling’, where participants are prompted at regular or random intervals to record what they are doing or how they are feeling at that time. Initially this work was done using pagers (Larson 1989): participants would be ‘beeped’ at random times during the day and asked to write down what they were doing at the time. More recent studies have used smartphones to both prompt and directly collect responses (Chen et al. 2014).

 

There is also now a lot of online journal research, with entries either solicited by researchers as part of a qualitative research project (Kaun 2015), or collected from people’s blogs and social media posts. This is especially popular in market research when looking at consumer behaviour (Patterson 2005), and in project evaluation (Cohen et al. 2006).

 

Diary methods can create detailed and reliable data. One study that asked participants to record diary entries three times a day to measure stigmatised behaviour like sexual activity found an 89.7% adherence rate (Hensel et al. 2012), far higher than would be expected from traditional survey methods. There is a lot of diary-based research in the sexual and mental health literature: for more discussion of the discrepancies and reliability between diary and recall methods, there is a good overview in Coxon (1999), but many studies, like Garry et al. (2002), found that diary-based methods generated more accurate responses. Note that these kinds of studies tend to be mixed-method, collecting both discrete quantitative data and open-ended qualitative comments.

 

Whichever method you choose, it’s important to set some clear guidelines for participants to follow. Personally, I think either a telephone conversation or a face-to-face meeting is a good idea, to give participants a chance to ask questions. If you’ve not done research diaries before, it’s a good idea to pilot them with one or two people to make sure you are briefing people clearly, and that they can write useful entries for you. The guidelines (explained, and taped to the inside of the diary) should make clear:

  • What you are interested in hearing about
  • What it will be used for
  • How often you expect people to write
  • How much they should write
  • How to get in touch with you
  • How long they should be writing entries, and how to return the diary.

 

Even if you expressly specify that participants should write in their journals every day for three weeks, you should be prepared for the fact that many won’t manage this. You’ll have some who start well but lapse, others who forget until the end and do it all on the last day before they see you, and everything in between. You need to assume this will happen with some or all of your respondents, and consider how it is going to affect how you interpret the data and draw conclusions. It shouldn’t necessarily mean that the data is useless, just that you need to be aware of the limitations when analysing it. There will also be a huge variety in how much people write, despite your guidelines. Some will love the experience, sharing volumes of long entries; others might just write a few sentences, which can still be revealing.

 

For these reasons, diary-like methodologies are usually used in addition to other methods, such as semi-structured interviews (Meth 2003) or detailed surveys. Diaries can be used to triangulate claims made by respondents in different data sources (Schroder 2003), or to provide more richness and detail to the individual narrative. From the researcher’s point of view, the difference between data where a respondent says they have been bullied, and an account of a specific incident recorded that day, is significant, and gives a great amount of depth and illumination into the underlying issues.

 


 

However, you also need to carefully consider confidentiality and other ethical issues. Participants will often share a lot of personal information in diaries, and you must agree how you will deal with this and anonymise it for your research. While many respondents find keeping a qualitative diary a positive and reflexive process, it can be stressful to ask people in difficult situations to reflect on uncomfortable issues. There is also the risk that the diary could be lost, or read by other people mentioned in it, creating a potential disclosure risk to participants. Depending on what you are asking about, it might be wise to ask participants themselves to create anonymised entries, using pseudonyms for people and places as they write.

 

Last, but not least, what about your own diary? Many researchers will keep a diary, journal or ‘field notes’ during the research process (Altrichter and Holly 2004), which can help provide context and reflexivity, as well as a good way of recording thoughts on ideas and issues that arise during data collection. This is a valuable source of qualitative data itself, and it’s often useful to include your journal in the analysis process – if not coded, then at least to remind you of your own reflections and experiences during the research journey.

 

So how can you analyse the text of your participant diaries? In Quirkos of course! Quirkos takes all the basics you need to do qualitative analysis, and puts it in a simple, and easy to use package. Try for yourself with a free trial, or find out more about the features and benefits.

 

Sharing qualitative research data from Quirkos


Once you’ve coded, explored and analysed your qualitative data, it’s time to share it with the world. For students, the first step will be supervisors, for researchers it might be peers or the wider research community, and for market research firms, it will be their clients. Regardless of who the end user of your research is, Quirkos offers a lot of different ways to get your hard earned coding out into the real world.

 

Share your project file
The best, and easiest way to share your coded data is to send your project file to someone. If they have a copy of Quirkos (even the trial) they will be able to explore the project in the same way you can, and you can work on it collaboratively. Files are compatible across Windows, Mac and Linux, and are small enough they can be e-mailed, put on a USB stick or Dropbox as needed.

 

Word export
One of people’s favourite features is the Word export, which creates a standard Word file of your data, with comments and coloured highlights showing your complete coding. This means that pretty much anyone can see your coding, since the file will open in Microsoft Office, LibreOffice/OpenOffice, Google Docs, Pages (on Mac) and many others. It’s also a great way to print out your project if you prefer to read through it on paper, while still being able to see all your coding. If you print the ‘Full Markup’ view, you will still be able to see the name (and author) of the code on a black and white printer!


There are two options available in the ‘Project’ button – either ‘Export All Sources as Word Document’ which creates one long file, or ‘Export Each Source’ which creates a separate file for each source in the project in a folder you specify.

 

Reports
This is the most conventional output in Quirkos: a customisable document which gives a summary of the project, and an ordered list of coded text segments. It also includes graphical views of your coding framework, including the clustered views which show the connections between themes. When you generate a report in Quirkos, you will get a two-column preview, with a view of how the report will look on the left, and all the options for what you want to include in the report on the right.


You can print this directly, save it as a PDF document, or even save as a webpage. This last option creates a report folder that anyone can open, explore and customise in their browser, in the same way as you are able to in the Quirkos report view. This also creates a folder which contains all the images in the report (such as the canvas and overlap views) that you can then include directly in presentations or articles.

quirkos qualitative data report


There are many options available here, including the ability to list all quotes by source (ie everything one person said) or by theme (ie everything everyone said on one topic). You can change how these quotes are formatted (by making the text or highlight into the colour of the Quirk) and the level of detail, such as whether to include the source name, properties and percentage of coding.

 

Sub-set reports (query view)
By default, the report button will generate output of the whole project. But if you want to just get responses from a sub-set of your data, you can generate reports containing only the results of filters from the query view. So you could generate a report that only shows the responses from Men or Women, or by one of the authors in the project.

 

CSV export
Quirkos also gives you the option to export your project as CSV files – a common spreadsheet format which you can open in Excel, SPSS or their equivalents. This allows you to do more quantitative analysis in statistical software, generate graphs of your coding, and conduct more detailed sub-analysis. The CSV export creates a series of files which represent the different tables in the project database, with v_highlight.csv containing your coded quotes. Other files contain the questions and answers (in a structured project), a list of all your codes, levels, and source properties (also called metadata).
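
As a rough sketch of what you can then do with this export, the snippet below loads the coded quotes into Python with pandas and summarises the coding. Note that the column names used here (‘quirk_name’ and ‘text’) are illustrative assumptions rather than the documented schema – check the headers of your own export (or the manual) and adjust accordingly.

    # A minimal sketch: load the Quirkos CSV export and summarise the coding.
    # Assumes v_highlight.csv is in the current directory; the column names
    # 'quirk_name' and 'text' are hypothetical - check your export's headers.
    import pandas as pd

    highlights = pd.read_csv("v_highlight.csv")

    # How many coded quotes does each code (Quirk) have?
    print(highlights["quirk_name"].value_counts())

    # Average length (in characters) of coded segments, per code
    highlights["length"] = highlights["text"].str.len()
    print(highlights.groupby("quirk_name")["length"].mean().round(1))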

 

Database editing
For true power users, there is also the option to perform full SQL operations on your project file. Since Quirkos saves all your project data as a standard SQLite database, it’s possible to open and edit it with a number of third-party tools, such as DB Browser for SQLite, to perform advanced operations. You can also use standard SQL statements like SELECT ... FROM ... WHERE to explore and edit the database, for example from the sqlite3 command line. Our full manual has more details on the database structure. Hopefully, this will also allow for better integration with other qualitative analysis software in the future.
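
If you’d rather stay in a scripting language, here is a minimal sketch using Python’s built-in sqlite3 module. The first query (against sqlite_master) is standard SQLite and will list the tables and views actually present in the file; the commented-out example query uses a hypothetical table name, so consult the manual for the real schema before querying.

    # A minimal sketch of opening a Quirkos project file as a SQLite database.
    import sqlite3

    conn = sqlite3.connect("my_project.qrk")

    # List the tables and views actually present in the project database
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type IN ('table', 'view')"):
        print(name)

    # Once you know the schema you can query it directly, e.g.:
    # rows = conn.execute("SELECT * FROM highlight LIMIT 5").fetchall()
    # (the table name 'highlight' is an assumption - check the manual)

    conn.close()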

 

If you are interested in seeing how Quirkos can help with coding and presenting your qualitative data, you can download a one-month free trial and try it for yourself. Good luck with your research!

 

Tools for critical appraisal of qualitative research


I've mentioned before how the general public are very quantitatively literate: we are used to dealing with news containing graphs, percentages, growth rates, and big numbers, and they are common enough that people rarely have trouble engaging with them.

 

In many fields of study this is also true for researchers and those who use evidence professionally. They become accustomed to p-values, common statistical tests, and plot charts. Lots of research is based on quantitative data, and there is a training and familiarity in these methods and data presentation techniques which creates a lingua franca for researchers across disciplines and regions.

 

However, I've found in previous research that many evidence-based decision makers are not comfortable with qualitative research. There are many reasons for this, but I frequently hear people essentially say that they don't know how to appraise it. While they can look at a sample size, a recruitment technique and an r-squared value and get an idea of the limitations of a study, this is much harder for many practitioners to do with qualitative techniques they are less familiar with.

 

But this needn’t be the case: qualitative research is not rocket science, and there are fundamental common values which can be used to assess the quality of a piece of research. This week, a discussion on the appraisal of qualitative research was started on Twitter by the Mental Health group of the 'National Elf Service’ (@Mental_Elf) – an organisation devoted to collating and summarising health evidence for practitioners.

 

People contributed many great suggestions of guides and toolkits that anyone can use to examine and critique a qualitative study, even if the user is not familiar with qualitative methodologies. I frequently come across this barrier to promoting qualitative research in public sector organisations, so was halfway through putting together these resources when I realised they might be useful to others!

 

First of all, David Nunan (@dnunan79) based at the University of Oxford shared an appraisal tool developed at the Centre for Evidence-Based Medicine (@CebmOxford).

 

Lucy Terry (@LucyACTerry) offered specific guidelines for charities from New Philanthropy Capital, which gives five key quality criteria: the research should be Valid, Reliable, Confirmable, Reflexive and Responsible.

 

There’s also an article by Kuper et al. (2008) which offers guidance on assessing a study using qualitative evidence. As a starting point, they list six questions to ask:

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?
     

The International Centre for Allied Health Evidence at the University of South Australia has a list of critical appraisal tools, including ones specific to qualitative research. Of these, I quite like the checklist format of the one developed by the Critical Appraisal Skills Programme; I can imagine this going down well with health commissioners.

 

Another from the Occupational Therapy Evidence-Based Practice Research Group at McMaster University in Canada is more detailed, and is also available in multiple languages and an editable Word document.

 

Finally, Margaret Roller and Paul Lavrakas have a recent textbook (Applied Qualitative Research Design: A Total Quality Framework Approach, 2015) that covers many of these issues, and details the Total Quality Framework that can be used for designing, discussing and evaluating qualitative research. The book contains specific chapters detailing the application of the framework to different projects and methodologies. Margaret Roller also has an article on her excellent blog on weighing the value of qualitative research, which gives an example of the Total Quality Framework.

 

In short, there are a lot of options to choose from, but the take-away message is that the questions are simple, short, and largely common sense. The process of assessing even just a few pieces of qualitative research in this way will quickly get evidence-based practitioners into the habit of asking these questions of most projects they come across, hopefully increasing their comfort in dealing with qualitative studies.

 

The tools are also useful for students, even those already familiar with qualitative methodologies, as they help facilitate a critical reading that can give focus to paper discussion groups or literature reviews. Adopting one of the appraisal techniques here (or modifying one) would also be a great start to a systematic review or meta-analysis.

 

Finally, there are a few sources from the Evidence and Ethnicity in Commissioning project I was involved with that might be useful, but if you have any suggestions please let me know, either in the forum or by e-mailing daniel@quirkos.com and I will add these to the list. Don't forget to find out more about using Quirkos for your qualitative analysis and download the free trial.

 

 

Finding, using and some cautions on secondary qualitative data


 

Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you end up having to collect your own data, looking for other sources already out there can help prevent redundancy and improve your conceptualisation of a project. Broadly you can think of two different types of secondary data: sources collected previously for specific research projects, and data 'scraped' from other sources, such as Hansard transcripts or Tweets.

 

Other data sources include policy documents, news articles, social media posts, other research articles, and repositories of qualitative data collected by other researchers, which might be interviews, focus groups, diaries, videos or other sources. Secondary analysis of qualitative data is not very common, since the data tends to be collected with a very narrow research interest, and at a depth that makes anonymisation and aggregation difficult. However, there is a huge amount of rich data that could be used again for other subjects: see Heaton (2008) for a general overview, and Irwin (2013) for a discussion of some of the ethical issues.

 

The advantage of using qualitative data analysis software is that you can keep many disparate sources of evidence together in one place. If you have an article positing a particular theory, you can quickly cross-code supportive evidence with sections of text from that article, and evidence why your data does or does not support the literature. Examining social policy? Put the law or government guidelines into your project file, and cross-reference statements from participants that challenge or support these dictates in practice. The best research does not exist in isolation: it must engage with both the existing literature and real policy to make an impact.

 


Data from other sources has the advantage that the researcher doesn’t have to spend time and resources collecting and recruiting. However, the constant disadvantage is that the data was not specifically obtained to meet your particular research questions or needs. For example, data from Twitter might give valuable insights into people’s political views. But statements that people make do not always equate with their views (this is true for directly collected research methods as well), so someone may make a controversial statement just to get more followers, or be suppressing their true beliefs if they believe that expressing them will be unpopular. 

 

Each website has its own culture as well, which can affect what people share, and in how much detail. A paper by Hine (2012) shows that posts on the popular UK 'Mumsnet' forum reflect particular attitudes that are acceptable there, with posters often looking for validation of their behaviour from others. Twitter and Facebook are no exception: they each have different styles and acceptable posts that true internet ethnographers should understand well!

 

Even when using secondary data collected for a specific academic research project, the data might not be suitable for your needs. A series of qualitative interviews about political views may seem a perfect fit for your research, but might not have asked a key question (for example, about respondents’ parents’ beliefs), which renders the data unusable for your purpose. Additionally, it is usually impossible to identify respondents in secondary data sets to ask follow-on questions, since the data is anonymised. It’s sometimes even difficult to see the original research questions and interview schedule, and so to find out what questions were asked and for what purpose.

 

But despite all this, it is usually a good idea to look for secondary sources. It might give you an insight into the area of study you hadn’t considered, highlighting interesting issues that other research has picked up on. It might also reduce the amount of data you need to collect: if someone has done something similar in this area, you can design your data collection to highlight relevant gaps, and build on what has been done before (theoretically, all research should do this to a certain degree).

 

I know it’s something I keep reiterating, but it’s really important to understand who your data represents: you need some kind of contextual or demographic data. This is sometimes difficult to find when using data gathered from social media, where people are often given the option to state only very basic details, such as gender, location or age, and many may not disclose even these. It can also be a pain to extract comments from social media in such a way that the identity of the poster stays attached to their posts – however, there are third-party tools that can help with this.

 

When writing up your research, you will also want to make explicit how you found and collected this data. For example, if you are searching Twitter for a particular hashtag or phrase, when did you run the search? If you run it the next day, or even the next minute, the results will be different. How far back did you include posts? What languages? Are there comments that you excluded – especially ones that look like spam or promotional posts? Think about making it replicable: what information would someone need to get the same data as you?
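
One lightweight way to make a search replicable is to keep a machine-readable log of every search you run, stored alongside the collected data. Here is a minimal sketch in Python – the field names are my own suggestions rather than any standard, so adapt them to your project:

    # A minimal sketch for logging search parameters so a data pull can be
    # documented and reproduced. All field names here are illustrative only.
    import json
    from datetime import datetime, timezone

    search_record = {
        "platform": "Twitter",
        "query": "#indyref",
        "run_at": datetime.now(timezone.utc).isoformat(),
        "date_range": ["2014-08-01", "2014-09-30"],
        "languages": ["en"],
        "exclusions": "posts that looked like spam or promotion",
    }

    # Append the record to a log file kept alongside the collected data
    with open("search_log.json", "a") as f:
        f.write(json.dumps(search_record) + "\n")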

 

You should also try and be as comprehensive as possible. If you are examining newspaper articles for something that has changed over time (such as use of the phrase ‘tactical warfare’) don’t assume all your results will be online. While some projects have digitised newspaper archives from major titles, there are a lot of sources that are still print only, or reside in special databases. You can gain help and access to these from national libraries, such as the British Library.


There are growing repositories of open access data, including qualitative datasets. A good place to start is the UK Data Service – even if you are outside the UK, it contains links to a number of international stores of qualitative data. Note that you will generally have to register, or even gain approval, to access some datasets. This shouldn’t put you off, but don’t expect always to be able to access the data immediately, and be prepared to make a case for why you should be granted access. In the USA there is a qualitative-specific data repository, the Qualitative Data Repository (or QDR), hosted by Syracuse University.

 

If you have found a research article based on interesting data that is not held in a public repository, it is worth contacting the authors anyway to see if they are able to share it. Research based on government funding increasingly comes with stipulations that the data should be made freely available, but this is still a fairly new requirement, and investigators from other projects may still be willing and able to grant access. However, authors can be protective of their data, and may not have acquired consent from participants in a way that allows them to share the data with third parties. This is something to consider in your own work: make sure that you are able to give back to the research community and share your own data in the future.

 


Finally, a note of caution about tailored results. Google, Facebook and other search and social platforms do not show the same results in the same order to all people. Results are customised for what they think you will be interested in seeing, based on your own search history and their assumptions about your gender, location, ethnicity, and political leanings. This article explains the impact of the ‘filter bubble’, which will affect the results that you get from social media (especially Facebook).

 

To get around this in a search, you can use a privacy-focused search engine (like DuckDuckGo), add &pws=0 to the end of a Google search, or use a browser in ‘Private’ or ‘Incognito’ mode. However, it’s much more difficult to get neutral results from Facebook, so bear this in mind.

 

Hopefully these tips have given you food for thought about the sources and limitations of secondary data. If you have any comments or suggestions, please share them in the forum. Don’t forget that Quirkos has simple copy-and-paste source creation that allows you to bring in secondary data from lots of different formats and internet feeds, and the visual interface makes coding and exploring them a breeze. Download a free trial from www.quirkos.com/get.html.