Balance and rigour in qualitative analysis frameworks


 

Training researchers to use qualitative software and helping people who get stuck with Quirkos, I get to see a lot of people’s coding frameworks. Most are great, some are fine but have too many codes, and a few just seem to lack a little balance.


In good quality quantitative research, you should see that the researchers have adopted a ‘null hypothesis’ before they start the analysis: in other words, an assumption that there is nothing significant in the data. Statisticians play a little game where they declare that there should be no correlation between variables, and then try to prove there is nothing there. Only if they try their hardest and still can’t convince themselves that there is no relationship are they allowed to conclude that there may be something in the data. This is called rejecting the null hypothesis, and it can temper the excitement of researchers with big data sets who are over-enthusiastic for career-making discoveries.


Unfortunately, it’s rare to see this approach described in published quantitative analysis. But there’s no reason that a similar approach can’t be used in qualitative research to provide some balance against the researcher’s interpretations and prejudices. Most of the time the researcher will have their own preconception of what they are going to find (or would like to find) in the data, and may even have a particular agenda they are trying to prove. Whatever the methodology, quantitative or qualitative, this is not a good basis for conducting impartial research. (Read more about the differences between qualitative and quantitative approaches.)

 

Steps like reflexivity statements and considering unconscious biases can help improve the neutrality of the research, but it’s also something to watch closely during the analysis process itself. Even the coding framework you use to tag and analyse your qualitative data can lead to certain quotes being drawn from the data more than others.


It’s like trying to keep your balance standing in the middle of a seesaw. If you stand at one end, it’s easy, as you are simply rooted to the ground on that side. Standing in the middle is the only position where you are challenged, swayed from one side to the other by every shift and gust of wind. Before starting their analysis, researchers should ideally be in this zen-like state, ready to let the data tell them the story, rather than telling their own story through selective interpretation.


When reading qualitative data, try to hold in your head the opposite view to your research hypothesis. Maybe people love being unemployed, and got rich because of it! A finding should really shout out from the data regardless of bias or cherry-picking.

 
When you have created a coding framework, look through it for tone and coverage. Are there areas which might show bias towards one side of the argument, or a particular interpretation? If you have a code for ‘hates homework’, do you have a code for ‘loves homework’? Are you actively looking for contrary evidence? Usually I try to find a counter-example to every quote I might use in a project report. So if I want to show a quote where someone says ‘Walking in the park makes me feel healthy and alive’, I’ll see if there is someone else saying ‘The park makes me nervous and scared’. If you can’t, or at least if the people with the dissenting view are in a minority, you might just be able to accept a dominant hypothesis.

 

Your codes should reflect this too: in the same way that you shouldn’t ask leading questions (“Does your doctor make you feel terrible?”), be careful about leading code titles like “Terrible doctors”. There can be a confirmation bias, and you may start looking too hard for text to match the theme. In some types of analysis, such as discourse analysis or in-vivo coding, reflecting the emotive language your participants use is important. But make sure it is their language, and not yours, that is reflected in strongly worded theme titles.

 

All qualitative software (Quirkos included) allows you to have a longer description of a theme as well as the short title. So make sure you use it to detail what should belong in a theme, as if you were describing it to someone else to do the coding. When you are going through and coding your data, think to yourself: “Would someone else code in the same way?”
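As an illustration only (not how Quirkos or any other package actually stores its data), a codebook is essentially a mapping from short theme titles to the fuller definitions a second coder would need. Here is a minimal sketch in Python, with hypothetical themes borrowed from the homework example above:

```python
# A hypothetical codebook: each short theme title is paired with the
# longer description that tells a second coder what belongs in it.
codebook = {
    "loves homework": "Any positive statement about homework: enjoyment, "
                      "feeling it aids learning, wanting more of it.",
    "hates homework": "Any negative statement about homework: boredom, "
                      "stress, feeling it is pointless or excessive.",
}

# Printed out, this doubles as a one-page coding guide for the team.
for title, description in codebook.items():
    print(f"{title}: {description}")
```

Writing the definitions down in this explicit form is a quick test of the ‘would someone else code in the same way?’ question: if you can’t write the description, the theme probably isn’t well defined yet.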

 


 

Even when topics are neutral (or balanced with alternatives) you should also make sure that the text you categorise into these fields is fair. If you are glossing over opinions from people who don’t have a problem with their doctor to focus on the shocking allegations, you are giving primacy to the bad experiences, perhaps without recognising that the majority were good.

 

However, qualitative analysis is not a counting game. One person in your sample with a differing opinion is a significant event to be discussed and explained, not an outlier to be ignored. When presenting the results of qualitative data, the reader has to put a great deal of trust in how the researcher has interpreted the data, and a researcher who only shows one viewpoint or interpretation can come across as having a personal bias.

 

So before you write up your research, step back and look again at your coding framework. Does it look like a fair reflection of the data? Is the data you’ve coded into those categories representative? Would someone else have interpreted and described it in the same way? These questions can really help improve the impartiality, rigour and balance of your qualitative research.

 

A qualitative software tool like Quirkos can help make a balanced framework, because it makes it much easier than pen and Post-It notes to go back and change themes and recode data. Download a free trial and see how it works, and how software kept simple can help you focus on your qualitative data.

 

 

Comparing qualitative software with spreadsheet and word processor software


An article was recently posted on the excellent Digital Tools for Qualitative Research blog on how you can use standard spreadsheet software like Excel to do qualitative analysis. There are many other articles describing this kind of approach, for example by Susan Eliot or Meyer and Avery (2008). It’s also possible to use word processing software: see, for example, this presentation from Jean Scandlyn on the pros and cons of common software for analysing qualitative data.

 

For a lot of researchers, using Word or Excel seems like a good step up from doing qualitative analysis with paper and highlighters. It’s much easier to keep your data together, and you can easily correct, undo and do text searches. You also get the advantage of being able to quickly copy and paste sections from your analysis into research articles or a thesis. It’s also tempting because nearly everyone has access to either Microsoft Office or free equivalents like LibreOffice (http://www.libreoffice.org) or Google Docs, and knows how to use them. In contrast, qualitative analysis software can be difficult to get hold of: not all institutions have licences, and it can have a steep learning curve or a high upfront cost.

 

However, it is very rare that I recommend people use spreadsheets or word processing software for a qualitative research project. Obviously I have a vested interest here, but I would say the same thing even if I didn’t design qualitative analysis software for a living. I just know too many people who have started out without dedicated software and hit a brick wall.

 

 

Spreadsheet cells are not a good way to store text.


If you are going to use Excel or an equivalent, you will need to store your qualitative text data in it somehow. The most common method I have seen is to keep each quote or paragraph as a separate cell in a column for the text. I’ve done this in a large project, and it is fiddly to copy and paste the text in the right way. You will also find yourself struggling with formatting (hint: get familiar with the different wrap text and auto column width options). It also becomes a chore to separate paragraphs into smaller sections to code them differently, or to merge them together. And if you have data in other formats (like audio or video), it’s not really possible to do anything meaningful with them in Excel.

 


You must master Excel to master your analysis

 

As Excel and other spreadsheets are not really designed for qualitative analysis, you need to use a bit of imagination to sort and categorise themes and sources. With separate columns for source names and your themes, this is possible (although it can get a little laborious). However, to be able to find particular quotes, themes and results from sources, you will need to properly understand how to use pivot tables and filters. These will give you some ability to manage and sort your coded data.
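For those comfortable with a little scripting, the same filtering and pivoting can also be done outside the spreadsheet. Here is a minimal sketch using Python’s pandas library, assuming a hypothetical layout with one coded extract per row and columns named Source, Theme and Quote (your own layout may well differ):

```python
# A minimal sketch: filter and summarise coded extracts with pandas,
# assuming hypothetical columns "Source", "Theme" and "Quote".
import pandas as pd

df = pd.read_excel("coded_data.xlsx")  # hypothetical file name

# All quotes coded under one theme:
print(df[df["Theme"] == "loves homework"]["Quote"])

# A pivot-table-style summary: extracts per theme, per source.
summary = df.pivot_table(index="Theme", columns="Source",
                         values="Quote", aggfunc="count", fill_value=0)
print(summary)
```

This is exactly the kind of retrieval that dedicated qualitative software does with a single click, which is rather the point of the comparison.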

 

It’s also a good idea to get to grips with some of the keyboard shortcuts for your spreadsheet software, as these will help take away some of the repetitive data entry you will need to do when coding extracts. There is no quick drag-and-drop way to assign text to a code, so coding will almost always be slower than using dedicated software.

 

For these reasons, although it seems easier to stick with software like Excel that you already know, it can quickly become a false economy in terms of the time required to code and to learn advanced sorting techniques.

 


Word makes coding many different themes difficult.

 

I see a lot of people (mostly students) who start out doing line-by-line coding in Word, using highlight colours to show different topics. It’s very easy to fall into this: while reading through a transcript, you highlight bits that are obviously about one topic or another, and before you know it there is a lot of text sorted and coded into themes, and you don’t want to lose your structure. Unfortunately, you have already lost it! There is no way in Word or other word processing software to look at all the text highlighted in one colour, so to review everything on one topic you have to look through the whole text yourself.

 

There is also a hard limit of 15 (garish) highlight colours, which limits the number of themes you can code, and it’s not possible to code a section with more than one colour. Comments and shading (in some word processors) can get around this, but they are still limited: there is no way to create groups or hierarchies of similar themes.

 

I get a lot of requests from people wanting to bring coded work from a word processor into Quirkos (or other qualitative software) but it is just not possible.

 


No reports or other outputs


Once you have your coded data, how do you share it, summarise it, or print it out to read away from the glow of the computer? In Word or Excel this is difficult. Spreadsheets can produce summaries of quantitative data, but have very few tools that deal with text. Even getting something as simple as a word count is a pain without a lot of playing around with macros. So getting a summary of your coding framework, or seeing differences between sources, is hard.
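By comparison, the word count that needs macro gymnastics in a spreadsheet is a few lines in a general-purpose language. A minimal sketch in Python, assuming hypothetical plain-text transcripts sitting in a transcripts/ folder:

```python
# A minimal sketch: word counts per source, assuming each transcript
# is a plain-text file in a hypothetical "transcripts" folder.
from pathlib import Path

for path in sorted(Path("transcripts").glob("*.txt")):
    words = path.read_text(encoding="utf-8").split()
    print(f"{path.name}: {len(words)} words")
```

Of course, needing to leave the spreadsheet for a script at all rather underlines the limitation.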

 

Also, I have done large coding projects in Excel, and printing off huge sheets with long rows and columns is always a struggle. For meetings and team work you will almost always need to get something out of a spreadsheet to share, and I have not found a way to do this neatly. Suggestions welcome!

 

 


I’m not trying to say that using Word or Excel is always a bad option; indeed, Quirkos lets you export coded data to Word or spreadsheet format to read, print and share with people who don’t have qualitative software, and to do more quantitative analysis. However, be aware that if you start your analysis in Word or Excel, it is very hard to bring your codes into anything else to work on further.

 

Quirkos tries to make dedicated qualitative software as easy to learn and use as familiar spreadsheet and word processing tools, but with all the dedicated features that make qualitative analysis simple and more enlightening. It’s also one of the most affordable packages on the market, and there is a free trial so you can see for yourself how much you gain by stepping up to real qualitative analysis software!

 

 

Include qualitative analysis software in your qualitative courses this year


 

A new term is just beginning, so many lecturers, professors and TAs are looking at their teaching schedule for the next year. Some will be creating new courses, or revising existing modules, wondering what to include and what’s new. So why not include qualitative analysis software (also known as CAQDAS or QDA software)?

 

There’s a common misconception that software for qualitative research takes too long to teach, and instructors often aren’t confident themselves in the software (Gibbs 2014), leading to a perception that including it in courses will be too difficult (Rodik and Primorac 2015). It’s also a sad truth that few universities or colleges have support from IT departments or experts when training students on CAQDAS software (Blank 2004).

 

However, we have specifically designed Quirkos to address these challenges, and make teaching qualitative analysis with software simpler. It should be possible to teach the basics of qualitative analysis, as well as provide students with a solid understanding of qualitative software in a one or two hour seminar, workshop or lecture. One of the main aims with Quirkos was to ensure it is easy to teach, as well as learn.

 

With a unique and very visual approach to coding and displaying qualitative data, Quirkos tries to simplify the qualitative analysis process with a reduced set of features and buttons. This means there are fewer steps to go over, a less confusing interface for those starting qualitative analysis for the first time, and fewer places for students to get stuck.

 

To make teaching this as straightforward as possible, we provide free ready-to-use training materials to help educators teach qualitative analysis. We have PowerPoint slides detailing each of the main features and operations. These can be adapted for your class, so you can use some or all of the slides, or even just take the screenshot images and edit the specifics for your own use.

 

Example qualitative data sets are available for use in classes. There are two of these: one very basic set of people talking about breakfast habits, and a more detailed one on politics and the Scottish Independence Referendum. With these, you can have complete sources of data and exercises to use in class, or set a more extensive piece of homework or a practical assessed project.

 

We also provide two manuals as PDF files that can be shared as course materials or printed out. There is a full manual, but also a Getting Started guide which includes a step-by-step walkthrough of basic operations, ideal for following in a session. Finally, there are video guides which can be shown as part of classes, or included as links in course materials. These range from 5-minute overviews to hour-long detailed walkthroughs, depending on the need.

 

There is more information in our blog post on integrating qualitative analysis software into existing curriculums, but it’s also worth remembering that there is a one month free trial for yourself and students. The trial version has all the features with no restrictions, and is identical for students working on Windows, Mac or even Linux.

 

However, if you have any questions about Quirkos and how to teach it, feel free to get in touch. We can tell you about others using Quirkos in their classes, share some tips and tricks, and answer any questions you have about how Quirkos compares to other qualitative analysis software. You can reach us on Skype (quirkos), email (support@quirkos.com) or by phone during UK office hours (+44 131 555 3736). We’ll always be happy to set up a demo for you: we are all qualitative researchers ourselves, so are happy to share our tips and advice.

 

Good luck for the new semester!

 

Workshop exercises for participatory qualitative analysis


I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in a blog post from last year I describe how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and participants don’t really get a chance to change how the data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel with the full-time, paid job of the researcher!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for our research project on the Scottish Referendum, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that the software could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and most scary scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework, and coding data. This is probably the most time consuming, and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on coding the data explicitly, but on the coding framework itself and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process is like grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily use the developed framework for coding later. It could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific sub-categories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in what directions topics should be explored in detail (say, Expensive food, or Lack of open space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed, and the focus of the session is to look at coded transcripts (on screen or on a printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach to use, and how far to take the participatory process, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way will challenge the assumptions of the research team, lead to better results, and produce more relevant and impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Finding, using and some cautions on secondary qualitative data


 

Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you end up having to collect your own data, looking for other sources already out there can help prevent redundancy and improve your conceptualisation of a project. Broadly you can think of two different types of secondary data: sources collected previously for specific research projects, and data 'scraped' from other sources, such as Hansard transcripts or Tweets.

 

Other data sources include policy documents, news articles, social media posts, other research articles, and repositories of qualitative data collected by other researchers, which might be interviews, focus groups, diaries, videos or other sources. Secondary analysis of qualitative data is not very common, since the data tends to be collected with a very narrow research interest, and at a depth that makes anonymisation and aggregation difficult. However, there is a huge amount of rich data that could be used again for other subjects: see Heaton (2008) for a general overview, and Irwin (2013) for discussion of some of the ethical issues.

 

The advantage of using qualitative data analysis software is that you can keep many disparate sources of evidence together in one place. If you have an article positing a particular theory, you can quickly cross-code supportive evidence with sections of text from that article, and show why your data does or does not support the literature. Examining social policy? Put the law or government guidelines into your project file, and cross-reference statements from participants that challenge or support these dictates in practice. The best research does not exist in isolation: it must engage with both the existing literature and real policy to make an impact.

 


Data from other sources has the advantage that the researcher doesn’t have to spend time and resources collecting and recruiting. However, the constant disadvantage is that the data was not specifically obtained to meet your particular research questions or needs. For example, data from Twitter might give valuable insights into people’s political views. But statements that people make do not always equate with their views (this is true for directly collected research methods as well), so someone may make a controversial statement just to get more followers, or be suppressing their true beliefs if they believe that expressing them will be unpopular. 

 

Each website has its own culture as well, which can affect what people share, and in how much detail. A paper by Hine (2012) shows that posts on the popular UK 'Mumsnet' forum reflect particular attitudes that are acceptable there, and posters are often looking for validation of their behaviour from others. Twitter and Facebook are no exception: they each have different styles and norms of acceptable posting that true internet ethnographers should understand well!

 

Even when using secondary data collected for a specific academic research project, the data might not be suitable for your needs. A great series of qualitative interviews about political views may seem a perfect fit for your research, but might not have asked a key question (for example, about respondents’ parents’ beliefs) which renders the data unusable for your purpose. Additionally, it is usually impossible to identify respondents in secondary data sets to ask follow-up questions, since the data is anonymised. It’s sometimes even difficult to see the original research questions and interview schedule, and so to find out what questions were asked and for what purpose.

 

But despite all this, it is usually a good idea to look for secondary sources. It might give you an insight into the area of study you hadn’t considered, highlighting interesting issues that other research has picked up on. It might also reduce the amount of data you need to collect: if someone has done something similar in this area, you can design your data collection to highlight the relevant gaps, and build on what has been done before (theoretically, all research should do this to a certain degree).

 

I know it’s something I keep reiterating, but it’s really important to understand who your data represents: you need some kind of contextual or demographic data. This is sometimes difficult to find when using data gathered from social media, where people are often given the option to state only very basic details, such as gender, location or age, and many may not disclose even these. It can also be a pain to extract comments from social media in such a way that the identity of the poster stays attached to their posts; however, there are third-party tools that can help with this.

 

When writing up your research, you will also want to make explicit how you found and collected this source of data. For example, if you are searching Twitter for a particular hashtag or phrase, when did you run the search? If you run it the next day, or even the next minute, the results will be different. How far back did you include posts? What languages? Are there comments that you excluded, especially ones that looked like spam or promotional posts? Think about making it replicable: what information would someone need to get the same data as you?
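One low-effort way to capture all of this is to log each search as structured data at the moment you run it. Here is a minimal sketch in Python, with entirely hypothetical field names and values; adapt the fields to whatever your own study needs to record:

```python
# A minimal sketch: append a record of each search to a log file,
# so the collection can be described (and repeated) later.
# All field names and values here are hypothetical examples.
import json
from datetime import datetime, timezone

search_record = {
    "platform": "Twitter",
    "query": "#indyref",
    "run_at": datetime.now(timezone.utc).isoformat(),
    "date_range": ["2014-08-01", "2014-09-30"],
    "languages": ["en"],
    "exclusions": "posts that looked like spam or promotion",
}

with open("search_log.json", "a", encoding="utf-8") as log:
    log.write(json.dumps(search_record) + "\n")
```

A log like this can go straight into the methods section of your write-up.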

 

You should also try to be as comprehensive as possible. If you are examining newspaper articles for something that has changed over time (such as use of the phrase ‘tactical warfare’), don’t assume all your results will be online. While some projects have digitised newspaper archives from major titles, there are a lot of sources that are still print-only, or reside in special databases. You can get help with, and access to, these from national libraries such as the British Library.


There are growing repositories of open access data, including qualitative datasets. A good place to start is the UK Data Service, even if you are outside the UK, as it contains links to a number of international stores of qualitative data. Start there, but note that you will generally have to register, or even gain approval, to access some datasets. This shouldn’t put you off, but don’t expect to always be able to access the data immediately, and be prepared to make a case for why you should be granted access. In the USA there is a qualitative-specific repository, the Qualitative Data Repository (QDR), hosted by Syracuse University.

 

If you have found a research article based on interesting data that is not held on a public repository, it is worth contacting the authors anyway to see if they are able to share it. Research based on government funding increasingly comes with stipulations that the data should be made freely available, but this is still a fairly new requirement, and investigators from other projects may still be willing and able to grant access. However, authors can be protective over their data, and may not have acquired consent from participants in such a way that they would be allowed to share the data with third parties. This is something to consider when you do your own work: make sure that you are able to give back to the research community and share your own data in the future.

 


Finally, a note of caution about tailored results. Google, Facebook and other search and social platforms do not show the same results in the same order to all people. Results are customised to what they think you will be interested in seeing, based on your own search history and their assumptions about your gender, location, ethnicity and political leanings. This article explains the impact of the ‘filter bubble’, and this will affect the results that you get from social media (especially Facebook).

 

To get around this in a search, you can use a privacy-focused search engine (like DuckDuckGo), add &pws=0 to the end of a Google search, or use a browser in ‘Private’ or ‘Incognito’ mode. However, it’s much more difficult to get neutral results from Facebook, so bear this in mind.
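As a small illustration of the &pws=0 tip above (the query here is just a hypothetical example):

```python
# A minimal sketch: build a Google search URL with personalisation
# turned off by appending pws=0, as described above.
from urllib.parse import urlencode

params = {"q": "tactical warfare", "pws": 0}
url = "https://www.google.com/search?" + urlencode(params)
print(url)  # https://www.google.com/search?q=tactical+warfare&pws=0
```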

 

Hopefully these tips have given you food for thought about the sources and limitations of secondary data. If you have any comments or suggestions, please share them in the forum. Don’t forget that Quirkos has simple copy-and-paste source creation that allows you to bring in secondary data from lots of different formats and internet feeds, and the visual interface makes coding and exploring them a breeze. Download a free trial from www.quirkos.com/get.html.

 

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 conference are now online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks were from experienced software trainers, about how people were actually using packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways, the low numbers of people using images and audio are not surprising, but it is a shame. Text is a lot quicker to skim through to find passages compared to audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts, or through automatic transcription to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage, to create a recording of a research event such as an interview or focus group, and have the text typed up word-for-word to analyse. But with this approach you risk losing all the nuance that we are attuned to hear in the spoken word (emphasis, emotion, sarcasm), which can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code with text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen to any interesting or ambiguous sections, and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making the text coding process in Quirkos as good as it could be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, to make sure the audio connects closely with the text and can be coded and retrieved as easily as possible.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised the method as being how they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory essentially means finding themes in the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of people in this sample really not know what analytical approach they were using? Of course, it could simply be that they know it by another name (Emergent Coding, for example), or, as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster, and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs that were more than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...