Include qualitative analysis software in your qualitative courses this year

teaching qualitative modules


A new term is just beginning, so many lecturers, professors and TAs are looking at their teaching schedule for the next year. Some will be creating new courses, or revising existing modules, wondering what to include and what’s new. So why not include qualitative analysis software (also known as CAQDAS or QDA software)?


There’s a common misconception that software for qualitative research takes too long to teach, and instructors often aren’t confident themselves in the software (Gibbs 2014), leading to a perception that including it in courses will be too difficult (Rodik and Primorac 2015). It’s also a sad truth that few universities or colleges have support from IT departments or experts when training students on CAQDAS software (Blank 2004).


However, we have specifically designed Quirkos to address these challenges, and make teaching qualitative analysis with software simpler. It should be possible to teach the basics of qualitative analysis, as well as provide students with a solid understanding of qualitative software in a one or two hour seminar, workshop or lecture. One of the main aims with Quirkos was to ensure it is easy to teach, as well as learn.


With a unique and very visual approach to coding and displaying qualitative data, Quirkos tries to simplify the qualitative analysis process with a reduced set of features and buttons. This means there are fewer steps to go over, a less confusing interface for those starting qualitative analysis for the first time, and fewer places for students to get stuck.


To make teaching as straightforward as possible, we provide free, ready-to-use training materials for educators. We have PowerPoint slides detailing each of the main features and operations. These can be adapted for your class, so you can use some or all of the slides, or even just take the screenshot images and edit the specifics for your own use.


Example qualitative data sets are available for use in classes. There are two of these: one very basic set of people talking about breakfast habits and a more detailed one on politics and the Scottish Independence Referendum. With these, you can have complete sources of data and exercises to use in class, or to set a more extensive piece of homework or practical assessed project.


We also provide two manuals as PDF files that can be shared as course materials or printed out. There is a full manual, but also a Getting Started guide which includes a step-by-step walkthrough of basic operations, ideal for following in a session. Finally, there are video guides which can be shown as part of classes, or included as links in course materials. These range in length from 5-minute overviews to hour-long detailed walkthroughs, depending on need.


There is more information in our blog post on integrating qualitative analysis software into existing curriculums, but it’s also worth remembering that there is a one month free trial for yourself and students. The trial version has all the features with no restrictions, and is identical for students working on Windows, Mac or even Linux.


However, if you have any questions about Quirkos and how to teach it, feel free to get in touch. We can tell you about others using Quirkos in their classes, share some tips and tricks, and answer any questions you have about comparing Quirkos to other qualitative analysis software. You can reach us on Skype (quirkos), by email, or by phone during UK office hours (+44 131 555 3736). We’ll always be happy to set up a demo for you: we are all qualitative researchers ourselves, so are happy to share our tips and advice.


Good luck for the new semester!


Qualitative coding with the head and the heart

qualitative coding head and heart


In the analysis of qualitative data, it can be easy to fall into the habit of creating either very descriptive, or very general theoretical codes. It’s often a good idea to take a step back and examine your coding framework, challenging yourself to look at the data in a fresh way. There are some more suggestions for how to do this in our blog post about turning coding strategies on their head. But while in Delhi recently to deliver some training on using Quirkos, I was struck by a couple of exhibits at the National Museum which, in a roundabout way, made me think about coding qualitative data, and getting the balance right between analytical and emotional coding frameworks.


There were several depictions of Hindu deities trampling a dwarf called Apasmāra, who represented ignorance. I loved this focus on minimising ignorance, but it’s important to note that in Hindu mythology, ignorance should not be killed or completely vanquished, lest knowledge become too easy to obtain without effort.


Another sculpture depicted Yogini Vrishanna, a female deity who had taken on a bull-headed form. It was apparently common for deities to periodically take on an animal head to prevent over-intellectualism, and allow more instinctive, animalistic behaviour!


I was fascinated by this balance between venerating study and thought, while at the same time warning against overthinking. I think this is a message that we should really take to heart when coding qualitative data. It’s very easy to create coding themes that are far too simple and descriptive to give much insight into the data: to treat the analysis as purely a categorisation exercise. When this happens, students often create codes that are basically an index of low-level themes in a text. While this is often a useful first step, it’s important to go beyond it, and create codes (or better yet, a whole phase of coding) which are more interpretive, and require a little more thinking.


However, it’s also possible to go too far in the opposite direction and over-think your codes. This comes either from looking at the data too tightly, focusing on very narrow and niche themes, or from the over-intellectualising that Yogini Vrishanna was trying to avoid above. When the researcher has their head deeply in the theory (and let’s be honest, this is an occupational hazard for those in the humanities and social sciences), there is a tendency to create very complicated high-level themes. Are respondents really talking about ‘social capital’, ‘non-capitalocentric ethics’ or ‘epistemic activism’? Or are these labels which the researcher has imposed on the data?


These might be the times we have to put on our imaginary animal head, and try to be more inductive and spontaneous with our coding. But it also requires coding less from the head, and more from the heart. In most qualitative research we are attempting to understand the world through the perspective of our respondents, and most people are emotional beings, acting not just for rational reasons.


If our interpretations are too close to the academic, and not the lived experiences of our study communities, we risk missing the true picture. Sometimes we need this theoretical overview to see more complex trends, but it should never be too far from the data in a single qualitative study. Be true to both your head and your heart in equal measure, and don’t be afraid to go back and look at your data again with a different animal head on!


If you need help to organise and visualise all the different stages of coding, try using qualitative analysis software like Quirkos! Download a free trial, and see for yourself...


10 tips for sharing and communicating qualitative research

sharing qualitative research data

Writing up and publishing research based on qualitative or mixed methods data is one thing, but most researchers will want to go beyond this, and engage with the wider public and decision makers. This requires a different style of publication, and a different style of writing. We are not talking about journal articles, funders’ reports, book chapters or a thesis here, but creating short, engaging and impactful summaries of your research that anyone can read and share. Research that just sits on a shelf doesn't change the world!


The aim here is primarily outreach and impact – making sure that your research is read and has applications beyond academia, especially to the general public and decision makers. These could be local or national government, NGOs, funding bodies or service providers. It may even be that your research has implications for a particular group of people, for example those with a particular health condition, a demographic group, or those who work in a certain field. There’s also the increasingly common expectation that you should share results with participants.


Generally speaking, outputs and reports for these groups of people should be short and use non-technical language: it’s not enough to just provide them with a copy of your thesis or a journal article. Those are almost always written for a very specific audience – academics – and are difficult for the general public to read. Outside academia, most people don’t have access to journals, which often require subscriptions, and some government departments have cut back on library services in these areas.


In the research I’ve been involved with, we’ve even created short summaries of our research for GPs, clinicians, and health managers: people who are certainly familiar with journal articles, but rarely have the time to read them. It became clear in our discussions with people we were trying to engage with that one page summaries were the best format to get findings read.


It sounds like a lot of extra work at the end of a research project on top of publications, but this can be just as important, possibly even an ethical requirement. Some funding boards and IRBs require the creation of research outputs for lay audiences.


There are plenty of guides to help with writing up qualitative data for articles and book chapters (eg Ryan 2006), but what about writing up qualitative research for non-academic audiences? I’ve written about the challenges of explaining qualitative data before, but these tips below contain more specific advice on creating short, engaging summaries for the general public. They also provide prompts to make you think about promoting your research so it is read by more people, and has more impact.


1. Create specific outputs for each audience

It may be that there are different groups of people you want to reach: the general public, politicians, or experts in a particular field. Consider creating several short outputs targeted at each one. A short summary written for a lay audience might not have the level of detail a government body would want to see, and you might also want to highlight findings which are interesting to certain readers.


Think about the different groups of people you want to engage with, and draw up a list of what outputs would work best for each one. These don’t have to be written, either: it could be a coffee morning, a presentation at a meeting, or a short video. Choose the format that works best for each type of audience, and the most important message to get across to each.


2. Link to topical issues

Qualitative research often takes a very in-depth approach to a specific research question, but this depth also means that it can engage with wider but connected themes. It may be that your research is already on something topical, such as diet or social media. But there may be important local issues your research can feed into, or wider problems that your research project illustrates a small part of. Engaging with a currently trending issue can not only get more views, but hopefully improve the quality of debate.


Where possible, try to consider issues that are not just part of a short-term media cycle, but are longer trends likely to come up again and again, such as house prices or obesity. It’s not necessary to twist your main findings to make them fit, just find a relevant connection.


3. Tell the story

I think this is the key to communicating qualitative data. People engage with stories about people better than they do with cold reports and statistics. That is why the media tends to focus on individual politicians and celebrities more than their policies, and stories are also a better way to retain information. Give your stories context and causality (because of this, that happened to this person), and you are following the same basic rules for good storytelling that scriptwriters and novelists follow.


While you can take this in very creative directions, for example creating an animated video story based on one of your participants, it may be that a box containing a case study is enough to provide a report with illuminating context. Stories are the most powerful part of qualitative research – make sure you use them!


4. Make them visual, and beautiful

People are more likely to pick up and engage with visual outputs, and pictures or other visual elements help people understand and connect with the findings. Try not to rely on generic clip-art or stock photos; choose images that are unique and specific to your work. If it’s not appropriate to include pictures of respondents or the local area, consider talking to artists or photographers who could create or let you use something more abstract.


It’s also important to consider how a research output looks: a written report shouldn’t just be a wall of long text. Make sure it is broken up, and has plenty of white space. It may even be worth getting a designer involved: this often doesn’t cost too much, but can make the output look a lot more professional, tempting to pick up and easier to follow. The same goes for presentations and events too!


Think about creating visualisations of your qualitative data, rather than just a series of quotes. Qualitative analysis software can help with this by making things like word-clouds or visualisations of important findings and connections in the research. Quirkos tries to make visualisations like this easy to understand and export, so check out the rest of the website to see how it can help with qualitative analysis.


5. Explain the methods (but briefly)

When presenting qualitative data, you should consider the fact that many people aren’t familiar with qualitative research or the methods you might have used. Those more used to quantitative data (especially in the public sector) might consider that your sample size is too small, or your research findings aren’t rigorous enough. It can be worth pre-empting these criticisms, but not by being apologetic. Don’t just say that ‘this is a limited study’ or ‘further research is needed’; be positive about the depth of your investigation, how commonly used your methods are, and if appropriate show how your work contradicts or supports other research.


Have a short section describing your research methods, but don’t provide too much detail. If the reader is interested in this, provide a way for them to read more somewhere else, for example a publication or a project website. It’s better to tease the reader and make them want more, rather than providing too much detail in the first place. Speaking of which:


6. Stay away from the academic debates

Generally, this is not the place for debates on epistemology, ontology or what other academics are saying in the field. While it is sometimes possible to explain these issues with non-technical language, it is probably not something that this audience is interested in.


This can be hard if your research question was specifically focused around, say, a Foucauldian interpretation of the language used in reviews of artisan coffee shops, but focus on the findings that are of interest to the general public. This is why just creating a summary of a journal article or full report generally doesn’t work well as a general output.


7. Write for others

Writing a popular output is one thing, but finding readers is hard. So why not find channels that already engage with the group of people you want to reach, and write for them instead? This could be a popular blog on a health condition, a trade magazine or even something for the popular press. Approach these people and ask if they would be interested in a piece about your research, stressing why it would be interesting to their readers. Some are glad for the opportunity to have something to fill space, and if you can demonstrate relevance to a topical issue, journalists might get involved as well.


If you are considering going down the media route, your university may have a press office that can help create press releases, suggest the right editors, and provide training to researchers on being interviewed on radio or TV. Just make sure that the outreach is serving your agenda, not just promoting the university or trying to spark controversy.


Look for relevant events you could present at such as workshops and conferences, since these can be targeted at professionals like health workers, not just academics.



8. Promote your outputs

It’s not good enough to create a report, stick it on your own website and forget about it. You need to promote your outputs and make sure that people can find them. Promotion should also be audience specific: where are my readers, and what do they already engage with? If you are running an event, should you have posters in cafés, or an advert in the local paper?


If you have a project website, this needs to be promoted too. Make sure people can find it if they are searching for issues around your research, and ask other websites in the area to put in a link to your web page. Keywords are important too: what searches are your readers going to make? It’s probably not “qualitative research on peer support for cancer” but “support groups for cancer”. Make sure the right terms appear on your website and as the heading for your outputs.


9. Make them long-term accessible

A one-off event or report is great, but can only target people currently looking to engage with your research. Policy makers change year after year, and with health issues, new people will be diagnosed all the time and will look for information.


If you have a project website, you need to consider a long-term strategy for it: make sure it is accessible for a long period of time, and can be updated. The right people should have physical copies of reports, so that they can access them later. There also might be good places to keep distributing summaries, like libraries, community centres or GP surgeries.


It’s also worth coming back to your project and promoting it again after a period of time. This is difficult in academia, where funding is research- and time-limited, but mark the one- and two-year anniversaries and spend some time reaching out to new people. Impact and engagement are important in academia, and a fresh attempt at outreach after a period of time can dramatically increase the number of people reached.



10. Don’t forget social media!

Social media can be a good way to promote your research, as it is fairly easy to find people from the right audience. They may follow a particular person in the field, or declare an interest in a relevant hobby or workplace. Try LinkedIn to contact people working in a certain field, or Facebook for reaching the general public.


It is also possible to share findings directly on social media. A tweet is a very small amount of text for a qualitative research project, but it is enough to tease a finding and provide a link to more information. You can also create outputs in the form of infographics, pictures and video which people can share with others.


Creative and varied outputs are more likely to get general engagement, so experiment: make your materials fun and stand out from the crowd to get the word out!



Making qualitative analysis software accessible

accessible qualitative analysis software

Studies and surveys seem to show that the amount of qualitative research is growing, and that more and more people are using software to help with their qualitative analysis (Woods et al. 2015). However, these studies also highlight that users report problems with learning qualitative software, and that universities sometimes struggle to provide enough expertise to teach and troubleshoot them (Gibbs 2014).

Quirkos was specifically designed to address many of these issues, and our main aim is to be a champion for the field of qualitative research by making analysis software more accessible. But what does accessibility mean in this context, and what problems still need to be overcome?


Limitations of paper

The old paper-and-highlighters method is a very easy and accessible approach to qualitative analysis. Indeed, it’s common for some stages of the analysis to be done on paper (such as reviewing), even if most of it is done in software. However, when projects get above a certain size or complexity, it can be difficult to keep track of all the different sources and themes. Should you have dozens of topics you are looking for in the project, you can quickly run out of different colours for your highlighters or Post-it notes (six colours seems to be the most you can easily find), and I’ve seen very creative but laborious use of alternating coloured stripes and other techniques!


In these situations, qualitative analysis software can actually be more accessible, and make the process a lot easier. The big advantage of computers is that they have huge memories, and think nothing of working with hundreds of sources and hundreds of coding topics. There are some people who are able to keep hundreds of topics in their head at once (my former boss Dr Sarah Salway was one of these), but for us mortals, software can really help. However, software needs to be as easy to use as paper, making sure that it doesn’t make the data more difficult to see, or make the coding process seem more important than deep reading and comprehension.


Learning curve

Secondly, if the software is going to be accessible, it has to be easy to learn and understand. While the best way to learn is often with face-to-face teaching, not everyone has the luxury of access to this, and it can be expensive. So there need to be good, freely available training materials. Ideally the software would be so simple that it didn’t need tuition at all, but inevitably people will get stuck, and good video guides and manuals should be easily available.

The software also has to tread a fine line between being clear and being patronising. I once discussed with a trainer in qualitative analysis the idea of introducing an animated guide like Clippy to QDA software, to walk users through the process. Can you imagine what this would be like? A little character that pops up and says things like “Hi! It looks like you are doing grounded theory! Would you like some help with that?”. But most users I talk to want the software to be as invisible as possible: if it gets in the way frequently, it is hindering, not helping, the analysis process.



Software also needs to be as flexible as possible: it’s no good if it doesn’t fit your approach or the way you need to work. It has to allow you to work with the type of data you have, without you having to spend ages reformatting it. It should be neutral to your approach as well, so that whatever methodological and theoretical stance the user is taking, the software will allow researchers to work their own way. A lot of the flexibility requirements come when working with others too: getting data both in and out should be painless, and fit the rest of a researcher’s workflow.


Sharing with others

Most qualitative researchers like to work with others, either as part of a research team, or just as a resource to bounce ideas off. Sending project files from qualitative analysis software to another researcher is easy enough, but often only if they are using the same software on the same operating system. Cross-platform working is really important, and it is frustrating at the moment how difficult it is to get coded data from one software package to another. We are having discussions with other developers of qualitative software about ensuring interoperability, but it is going to be a long journey.

It’s also important for software to be able to export the data in more common formats, such as PDF or Word files, so people without specialist CAQDAS software can still engage with and see the data.


Visual impairment

At the moment Quirkos is a very visual piece of software, and not well suited to those with visual or physical limitations. We have tried to choose options that make the software easier for those with vision impairment, such as high contrast text and large font sizes, but there is still a long way to go. At the moment, although shortcut keys can make using Quirkos a lot easier, navigating and selecting text without a mouse is not possible. We want to add the ability to run all the main operations from the keyboard or a specialist controller so that there are fewer barriers for those with reduced mobility.

We’ve even had serious discussions with blind qualitative researchers about how Quirkos could meet their needs! The main problem here seems to be the wide range of specialist computers and equipment – although there are fantastic tools out there for people with total or near-total visual impairment, they are all very different, and getting software that would work with them all is a huge challenge.



However, there is another barrier to access for many: the price of software licences. In many countries, relatively low wages mean that qualitative analysis software is prohibitively expensive. This is not just in Latin America, Africa and many parts of Asia: even in Eastern Europe, a single CAQDAS licence can cost as much as many earn in a month (Haratyk and Kordasiewicz 2014).

So I am also proud to announce that, from today, we will offer a 25% discount for researchers based in ‘developing’ or ‘emerging’ nations. I don’t like these terms, so for clarity I am taking this to mean any country with a GDP per capita below US$2600 PPP, or a monthly average salary below 1000 EUR. This is on top of our existing discounts for students and the education and charity sectors. As far as I can see, we are the first qualitative analysis software company to offer a discount of this type. To check if your country qualifies, and to place an order with this discount, just send us an e-mail and we will be happy to help.

Quirkos is already around half the price of the other major CAQDAS software packages, but from now we are able to provide an extra discount to researchers in 150 countries, representing nearly 80% of the world population. We hope this will help qualitative researchers in these countries to use qualitative research to explore and answer difficult questions in health, development, transparency and increasing global happiness.


Reaching saturation point in qualitative research

saturation in qualitative research


A common question from newcomers to qualitative research is, what’s the right sample size? How many people do I need to have in my project to get a good answer for my research questions? For research based on quantitative data, there is usually a definitive answer: you can decide ahead of time what sample size is needed to gain a significant result for a particular test or method.


This post is hosted by Quirkos, simple and affordable qualitative analysis software. Download a one-month free trial today!


In qualitative research, there is no neat measure of significance, so choosing a good sample size is more difficult. The literature often talks about reaching ‘saturation point’ – a term borrowed from the physical sciences to describe the moment during analysis when the same themes keep recurring, and no new insights come from additional sources of data. Saturation is, for example, the point at which a sponge can absorb no more water – though in research it’s not always the case that too much is a bad thing. Saturation in qualitative research is a difficult concept to define (Bowen 2008), but it has come to be associated with the point in a qualitative research project when there is enough data to ensure the research questions can be answered.


However, as with all aspects of qualitative research, the depth of the data is often more important than the numbers (Burmeister & Aitken, 2012). A small number of rich interviews or sources, especially as part of an ethnography, can have the value of dozens of shorter interviews. For Fusch (2015):


“The easiest way to differentiate between rich and thick data is to think of rich as quality and thick as quantity. Thick data is a lot of data; rich data is many-layered, intricate, detailed, nuanced, and more. One can have a lot of thick data that is not rich; conversely, one can have rich data but not a lot of it. The trick, if you will, is to have both.”


So the quantity of the data is only one part of the story. The researcher needs to engage with it at an early stage to ensure “all data [has] equal consideration in the analytic coding procedures. Frequency of occurrence of any specific incident should be ignored. Saturation involves eliciting all forms of types of occurrences, valuing variation over quantity.” (Morse 1995). When the amount of variation in the data is levelling off, and new perspectives and explanations are no longer coming from the data, you may be approaching saturation. The other consideration is whether there are any new perspectives on the research question: for example, Brod et al. (2009) recommend constructing a ‘saturation grid’, listing the major topics or research questions against interviews or other sources, and ensuring all bases have been covered.
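As a purely illustrative sketch (the topic and interview names below are hypothetical, not from any real study, and this is plain Python rather than anything built into Quirkos or other CAQDAS packages), a saturation grid of this kind can be kept as a simple matrix of topics against sources, with a running count of how many new topics each successive source introduces:

```python
# Hypothetical example topics and coding results for three interviews.
topics = ["diet", "exercise", "social support", "cost"]

interviews = {
    "interview_01": {"diet", "cost"},
    "interview_02": {"diet", "exercise", "social support"},
    "interview_03": {"diet", "social support", "cost"},
}

def saturation_grid(topics, interviews):
    """Return {topic: {source: True/False}} showing which sources cover each topic."""
    return {
        t: {src: (t in coded) for src, coded in interviews.items()}
        for t in topics
    }

def new_topics_per_source(topics, interviews):
    """Count how many previously unseen topics each successive source adds.
    When this count levels off at zero, the grid suggests approaching saturation."""
    seen, counts = set(), {}
    for src in sorted(interviews):
        fresh = (interviews[src] & set(topics)) - seen
        counts[src] = len(fresh)
        seen |= fresh
    return counts

grid = saturation_grid(topics, interviews)
print(new_topics_per_source(topics, interviews))
# → {'interview_01': 2, 'interview_02': 2, 'interview_03': 0}
```

Here the third interview adds no new topics, which is exactly the "nothing new is being revealed" signal the grid is meant to surface; in practice, of course, the judgement about what counts as a new theme remains a qualitative one.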


But despite this, is it still possible to put rough numbers on how many sources are required for a qualitative research project? Many papers have attempted to do this, and as might be expected, the results vary greatly. Mason (2010) looked at the average number of respondents in PhD theses based on qualitative research. They found an average of 30 sources, but with a low of 1, a high of 95 and a standard deviation of 18.5! It is interesting to look at their data tables, as they show succinctly the differences in sample size expected for different methodological approaches, such as case study, ethnography, narrative enquiry, or semi-structured interviews.


While 30 in-depth interviews may seem high (especially for what is practical in a PhD study), others work with far fewer: a retrospective examination of a qualitative project by Guest et al. (2006) found that even though they conducted 60 interviews, they had reached saturation after 12, with most of the themes emerging after just 6. On the other hand, if students have supervisors with more of a mixed-method or quantitative background, they will often struggle to justify the low number of participants typical of qualitative enquiry.


The important thing to note is that it is nearly impossible for a researcher to know when they have reached saturation point unless they are analysing the data as it is collected. This exposes one of the key ties between the saturation concept and grounded theory: it requires an iterative approach to data collection and analysis. Instead of setting a fixed number of interviews or focus groups to conduct at the start of the project, the investigator should be continuously going through cycles of collection and analysis until nothing new is being revealed.


This can be a difficult notion to work with, especially when ethics committees or institutional review boards, or limited time and funds, place a practical upper limit on the quantity of data collection. Indeed, Morse et al. (2014) found that in most dissertations they examined, the sample size was chosen for practical reasons, not because a claim of saturation was made.


You should also be aware that many take umbrage at the concept of saturation itself. O’Reilly (2003) notes that since the concept comes out of grounded theory, it is not always appropriate to apply it to other research projects, and the term has become overused in the literature. It is also not, by itself, a good indicator of the quality of qualitative research.


For more on these issues, I would recommend any of the articles referenced above, as well as discussion with supervisors, peers and colleagues. There is also more on sampling considerations in qualitative research in our previous blog post article.



Finally, don’t forget that Quirkos can help you take an iterative approach to analysis and data collection, allowing you to quickly analyse your qualitative data as you go through your project and helping you visualise your path to saturation (if you choose this approach!). Download a free trial for yourself, and take a closer look at the rest of the features the software offers.


Tips for managing mixed method and participant data in Quirkos and CAQDAS software

mixed method data


Even if you are working with purely qualitative data, like interview transcripts, focus groups, participant diaries, research diaries or ethnography, you will probably also have some categorical data about your respondents. This might include demographic data, your own reflexive notes, or context about the interview and the circumstances around the data collection. This discrete or even quantitative data can be very useful in organising your data sources across a qualitative project, and it can also be used to compare groups of respondents.


It’s also common to be working with more extensive mixed data in a mixed method research project. This frequently requires triangulating survey data with in-depth interviews for context and deeper understanding. However, much survey data also includes qualitative text data in the form of open-ended questions, comments and long written answers.


This blog has looked before at how to bring in survey data from online survey platforms like SurveyMonkey, Qualtrics and LimeSurvey. Whichever platform you are using, it’s really easy: just export your results as a CSV file, which Quirkos can read and import directly. You’ll get the option to choose whether each question should be treated as discrete data, a longer qualitative answer, or even the name/identifier for each source.


But even if you haven’t collected your data using an online platform, it is quite easy to format it in a spreadsheet. I would recommend this as an option for many studies: it’s simply good data management to be able to look at all your participant data together. I often have a table of respondents’ data (password protected, of course) which contains columns for names, contact details, whether I have consent forms, as well as age, occupation and other relevant information. During data collection and recruitment, having this information neatly arranged helps me remember who I have contacted about the research project (and when), who has agreed to take part, as well as suggestions from snowball sampling for other people to contact.
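If you prefer scripting to a spreadsheet, a table like this is trivial to build as a CSV with Python's standard library. This is a minimal sketch under invented assumptions: the column names and participant values below are made-up examples, not a format Quirkos requires.

```python
import csv

# Hypothetical participant table; replace columns with whatever your study needs.
fieldnames = ["Name", "Contact", "Consent form", "Age", "Occupation"]
participants = [
    {"Name": "P01", "Contact": "p01@example.com", "Consent form": "Yes",
     "Age": "34", "Occupation": "Teacher"},
    {"Name": "P02", "Contact": "p02@example.com", "Consent form": "Yes",
     "Age": "57", "Occupation": "Nurse"},
]

# Write one row per respondent; this file can later be imported as sources.
with open("participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(participants)
```

The same file opens directly in Excel or LibreOffice, so the scripted and spreadsheet workflows stay interchangeable.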


Finally, a respondent ‘database’ like this can also be used to record my own notes on the respondents and data collection. If there is someone I have tried to contact many times but who seems reluctant to take part, this is important to note. It can remind me when I have agreed to interview people, and keep together my own comments on how well each interview went. I can also record which audio and transcript files contain the interview for each respondent, acting as a ‘master key’ of anonymised recordings.


So once you have your long-form qualitative data, how best to integrate it with the rest of the participant data? Again, I’m going to give examples using Quirkos here, but similar principles apply to many other CAQDAS/QDA software packages.


First, you could import the spreadsheet data as is, and add the transcripts later. To do this, just save your participant database as a CSV file in Excel, Google Docs, LibreOffice or your spreadsheet software of choice. You can bring the file into a blank Quirkos project using ‘Import source from CSV’ on the bottom right of the screen. The wizard on the next page will allow you to choose how you want to treat each column in the spreadsheet, and each row of data will become a new source. When you have brought in the data from the spreadsheet, you can individually add the qualitative data as the text source for each participant, copying and pasting from wherever you have the transcript data.


However, it’s also possible to just put the text into a column in the spreadsheet. It might look unmanageable in Excel when a single cell has pages of text data, but it will make for an easy one-step import into Quirkos. Now when you bring the data into Quirkos, just select the column with the text data as the ‘Question’ and the discrete data as ‘Properties’ (although they should be auto-detected like this).
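If you are worried that a multi-page transcript will break the CSV file, it won't: the format quotes embedded newlines and commas, so the whole transcript sits safely in one cell. Here is a small sketch demonstrating the round trip with Python's csv module; the names and the snippet of dialogue are invented examples.

```python
import csv
import io

# A multi-line transcript destined for a single CSV cell.
transcript = ("Interviewer: How did it feel?\n"
              "P01: Honestly, like I was hit by a bus.")

buf = io.StringIO()
writer = csv.writer(buf)                      # default quoting handles newlines
writer.writerow(["Name", "Age", "Transcript"])
writer.writerow(["P01", "34", transcript])

# Read the file back and check the text survived intact.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows[1][2] == transcript)  # True
```

This is why the one-step import works: the file remains valid CSV however long the qualitative answers are.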


You can also do direct data entry in Quirkos itself, and there are some features to help make this quick and relatively painless. The Properties and Values editor allows you to create categories and values to define your sources. There are also built-in values for True/False, Yes/No, options from 1–10 or Likert scales from Agree to Disagree. These let you quickly enter common types of data, and select them for each source. It’s also possible to export this data later as a CSV file to bring back into spreadsheet software.


mixed method data entry in quirkos


Once your data has been coded in Quirkos, you can use tools like the query view and the comparison views to quickly see differences between groups of respondents. You can also create simple graphs and outputs of your quantitative and discrete data. Having not just demographic information, but also your notes and thoughts together, provides vital context for properly interpreting your qualitative and quantitative data.



A final good reason to keep a database of your research data is to make sure that it is properly documented for secondary analysis and future use. Should you ever want to work with the data again, share it with another research team, or with the wider community, an anonymised data table like this helps make sure the research has the right metadata to be used for different lines of enquiry.



Get an overview of Quirkos and then try for yourself with our free trial, and see how it can help manage pure qualitative or mixed method research projects.




What actually is Grounded Theory? A brief introduction

grounded theory


“It’s where you make it up as you go along!”


For a lot of students, ‘grounded theory’ describes a qualitative analytical method where you create a coding framework on the fly, from interesting topics that emerge from the data. However, that’s not really accurate: there is a lot more to it, and a myriad of different approaches.

Basically, grounded theory aims to create a new theory for interpreting the world, either in an area where there isn’t any existing theory, or where you want to challenge what is already out there. Although often overused, it is a valuable way of approaching qualitative research when you aren’t sure what questions to ask. However, it is also a methodological can of worms, with a number of different approaches and a confusing literature.

One of my favourite quotes on the subject is from Dey (1999) who says that there are “probably as many versions of grounded theory as there are grounded theorists”. And it can be a problem: a quick search of Google Scholar will show literally hundreds of qualitative research articles with the phrase “grounded theory was used” and no more explanation than this. If you are lucky, you’ll get a reference, probably to Strauss and Corbin (1990). And you can find many examples in peer-reviewed literature describing grounded theory as if there is only one approach.


Realistically there are several main types of grounded theory:


Classical (CGT)
Classical grounded theory is based on the Glaser and Strauss (1967) book “The Discovery of Grounded Theory”, in which it is envisaged more as a theory-generation methodology, rather than just an analytical approach. The idea is that you examine data and discover in it new theory – new ways of explaining the world. Here everything is data, and you should include fieldwork notes as well as other literature in your process. However, a gap is recommended so that the literature is not examined first (as in a conventional literature review), which would create bias too early; instead, existing theory is engaged with later, as something to be challenged.

Here the common coding types are substantive and theoretical – creating an iterative one-two punch which gets you from data to theory. Coding is considered to be very inductive, having less initial focus from the literature.


Modified (Straussian)
The way most people think about grounded theory probably links closest to the Strauss and Corbin (1990) interpretation, which is more systematic and concerned with coding and structuring qualitative data. It traditionally proposes a three (or sometimes two) stage iterative coding approach: first creating open codes (inductive), then grouping and relating them with axial coding, and finally a process of selective coding. In this approach, you may consider a literature review to be a restrictive process, binding you to prejudices from existing theory. But depending on the interpretation, modified grounded theory might be more action oriented, and allow more theory to come from the researcher as well as the data. Speaking of which…


Constructivist (Charmaz)
The seminal work on constructivism here is from Charmaz (2000 or 2006), and it’s about the way researchers create their own interpretations of theory from the data. It aims to challenge the idea that theory can be ‘discovered’ from the data – as if it was just lying there, neutral and waiting to be unearthed. Instead it tries to recognise that theory will always be biased by the way researchers and participants create their own understanding of society and reality. This engagement between participants and researchers is often cited as a key part of the constructivist approach.
Coding stages would typically be open, focused and then theoretical. Whether you see this as being substantively different from the ‘open – axial – selective’ modified grounded theory strategy is up to you. You’ll see many different interpretations and implementations of all these coding approaches, so focus more on choosing the philosophy that lies behind them.


Feminist
A lot of the literature here comes from the nursing field, including Kushner and Morrow (2003), Wuest (1995), and Keddy (2006). There are clear connections here with constructivist and post-modern approaches: especially the rejection of positivist interpretations (even in grounded theory!), recognition of multiple possible interpretations of reality, and the examination of diversity, privilege and power relations.


Postmodern
Again, a really difficult segment to try and label, but for starters think Foucault, power and discourse. Mapping of the social world can be important here, and some writers argue that the practice of trying to generate theory at all is difficult to include in a postmodern interpretation. This is a reaction against the positivist approach some see as inherent in classical grounded theory. For where this leaves the poor researcher practically, there is at least one main suggested approach, from Clarke (2005), who focuses on mapping the social world, including actors, and noting what has been left unsaid.


There are also what seem to me to be hybrids: a grounded theory approach combined with a particular methodology, such as discursive grounded theory, where the focus is more on the language used in the data (McCreaddie and Payne 2010). It basically seeks to integrate discourse analysis to look at how participants use language to describe themselves and their worlds. However, I would argue that many different ways of analysing data, like discourse analysis, can be combined with grounded theory approaches, so I am not sure they are a category in their own right.



To do grounded theory justice, you really need to do more than read this crude blog post! I’d recommend the chapter on Grounded Theory in Savin-Baden and Howell Major’s (2013) textbook on Qualitative Research. There’s also the wonderfully titled "A Novice Researcher’s First Walk Through the Maze of Grounded Theory" by Evans (2013). Grounding Grounded Theory (Dey 1999) is also a good read – much more critical and humorous than most. However, grounded theory is such a pervasive trope in qualitative research, indeed is seen by some to define qualitative research, that it does require some understanding and engagement.


But it’s also worth noting that for practical purposes, it’s not necessary to get involved in all the infighting and debate in the grounded theory literature. For most researchers the best advice is to read a little of each, and decide which approach is going to work best for you based on your research questions and personal preferences. Even better is if you can find another piece of research that describes a grounded theory approach you like, then you can just follow their lead: either citing them or their preferred references. Or, as Dey (1999) notes, you can just create your own approach to grounded theory! Many argue that grounded theory encourages such interpretation and pluralism, just be clear to yourself and your readers what you have chosen to do and why!


Merging and splitting themes in qualitative analysis

split and merge qual codes

To merge or to split qualitative codes, that is the question…


One of the most asked questions when designing a qualitative coding structure is ‘How many codes should I have?’. It’s easy to start a project thinking that just a few themes will cover the research questions, but qualitative analysis tends towards a ballooning thematic structure, and before you’ve even started coding you might have a framework with dozens of codes. While going through and analysing the data, you might end up with another couple of dozen more. So it’s quite common for researchers to end up with more than a hundred codes (or sometimes hundreds)!


This can be alarming for students doing qualitative analysis for the first time, but I would argue it’s fine in most situations. While a large number of themes can be confusing and disorienting if you are using paper and highlighters, it is quite manageable in CAQDAS software. However, this itself can be a problem, since qualitative software makes it almost too easy to create an unwieldy number of codes. While some restraint is always advisable, when I am running workshops I usually advise new coders not to worry, since with the software it is easier to merge codes later than to split them apart.


I’m going to use the example of Quirkos here, but the same principle applies to any qualitative data analysis package. When you are going through and analysing your qualitative text sources, reading and coding them is the most time-consuming part. If you create a new code for a theme halfway through coding your data, because you can see it is becoming important, you will have to go back to the beginning and read through the already-coded sources to make sure you have complete coverage. That’s why it’s normally easier to think through codes before starting a code/read through.


Of course there is some methodological variance here: if you are doing certain types of grounded theory this may not apply, as you will want to create themes on the fly. It’s also worth noting that good qualitative coding is an iterative process, and you should expect to go through the data several times anyway. Usually each time you do this you will look at the code structure in a different way – maybe creating a higher-level, theory-driven coding framework on each pass.


However, there is another way that QDA software helps you manage your qualitative themes: it is simple to merge smaller codes together under a more general heading. In Quirkos, just right click on the code bubble you want to keep, and you will see the dialogue below:


merging qualitative codes in quirkos

Then select from the drop-down list of other themes in your project which topic you want to merge into the Quirk you selected first. That’s it! All the coded text in the second bubble will be added to the first one, and it will keep the name of that code, appended with “(merged)” so you can identify it.
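Conceptually, a merge is very simple: the absorbed code's quotes move into the code you keep. This little sketch models that in plain Python (it is an illustration of the idea, not Quirkos's actual internals; the code names and quotes are invented).

```python
# Each code maps to the list of quotes coded against it.
codes = {
    "Restaurant": ["we ate out most weekends"],
    "Fast food": ["always grabbed a burger after work"],
}

def merge_codes(codes, keep, absorb):
    """Move every quote from 'absorb' into 'keep' and delete 'absorb'."""
    codes[keep] = codes[keep] + codes.pop(absorb)
    return codes

merge_codes(codes, "Restaurant", "Fast food")
print(len(codes["Restaurant"]))  # 2: both codes' quotes are now together
```

Note that the operation only moves references to quotes; nothing about the source text itself changes, which is why merging is so cheap compared to re-reading and splitting.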


Since it is so easy to merge topics in qualitative software, I generally suggest that people aren’t afraid to create a large number of very specific topics, knowing they can merge them together later. For example, if you are creating a code for when people talk about eating out at a restaurant, why not start with separate codes for Fast food, Mexican, Chinese, Haute cuisine etc.? You can always merge them later under a generic ‘Restaurant’ theme if you decide you don’t need that much detail.


It is also possible to retroactively split broad codes into smaller categories, but this is a much more involved process. To do this in Quirkos, I would start by taking the code you want to expand (say Restaurant) and making sure it is a top-level code – in other words, it is not a subcategory of another code. Then, create the codes you want to break out (for example Thai, Vegetarian, Organic) and make them subcategories of the main node. Then, double click on the top Quirk, and you will get a list of all the text coded to the top node (Restaurant). From this view in Quirkos, you can drag and drop each quote into the relevant subcategory (eg Organic, Thai):

splitting qualitative codes in quirkos

Once you have gone through and recoded all the quotes into new codes, you can either delete the quotes from the top level code (Restaurant) one by one (by right clicking on the highlight stripe), or remove all quotes from that node by deleting the top-node entirely. If you still want to have a Restaurant Quirk at the top to contain the sub categories, just recreate it, and add the sub-categories to it. That way you will have a blank ‘Restaurant’ theme to keep the subcategories (Thai, Organic) together.
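The reason splitting is slower than merging is that every quote needs an individual decision about where it belongs. This conceptual sketch (again, not Quirkos internals) shows the shape of the operation; the classify() rule stands in for the researcher's judgement while re-reading each quote, and the code names and quotes are invented.

```python
# The broad parent code and the quotes coded against it.
broad = {"Restaurant": ["amazing pad thai round the corner",
                        "the menu was entirely organic"]}

def classify(quote):
    # Stand-in for the researcher deciding each quote's subcategory.
    return "Thai" if "thai" in quote.lower() else "Organic"

# Reassign every quote from the parent into a subcategory.
subcodes = {"Thai": [], "Organic": []}
for quote in broad.pop("Restaurant"):
    subcodes[classify(quote)].append(quote)

# Recreate the empty parent to keep the subcategories grouped together.
broad["Restaurant"] = []
print([len(subcodes["Thai"]), len(subcodes["Organic"])])  # [1, 1]
```

In real analysis the per-quote decision is interpretive, not a string match, which is exactly why the software can only make the mechanics easy, not the judgement.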


So to summarise, don’t be afraid of having too many codes in CAQDAS software – use the flexibility it gives you to experiment. While you can always have too much of a good thing, the software will help you see all the coding options at once, so you can decide the best place to categorise each quote. With the ability to merge, and even split apart codes with a little effort, it’s always possible to adjust your coding framework later – in fact you should anticipate the need to do this as you refine your interpretations. You can also save your project at one stage of the coding, and go back to that point if you need to revert to an earlier state to try a different approach. For more information about large or small coding strategies, this blog post article goes into more depth.

If you want to see how this works in Quirkos, just download the free trial and try for yourself. Quirkos makes operations like merge and split really easy, and the software is designed to be intuitive, visual and colourful. So give it a try, and always contact us if you have any questions or suggestions on how we can make common operations like this quicker and simpler!



Using qualitative analysis software to teach critical thought

teaching qualitative software


It’s a key part of the curriculum for British secondary school and American high school education to teach critical thought and analysis. It’s a vital life skill: the ability to look at who is saying what, and pick apart what is being said. I’ve been thinking about the possible role for qualitative analysis in education, and how qualitative data analysis software in particular could help develop critical analysis skills in students of all ages.


While using qualitative analysis software is fairly common at university level, it’s a little unusual (possibly unprecedented, at a quick glance) to use it at secondary/high-school level with pre-college students. But why is this the case? It may well be that previously the software was too complex or expensive to use in mainstream schools, especially when you consider the amount of training the teachers and educators would need.


However, Quirkos was designed to make qualitative analysis more accessible by being easier to learn and teach, while also reducing the cost of licences. Thus it may make a better fit than previous options for the school sector. But how would such an approach work, and how would it fit into an already tight curriculum?


First of all, the notion of critical reading and analysis is prominent as a ‘core skill’ in UK secondary and USA K-12 education. For example the UK English curriculum states that:
“Critical reading, discussing, appreciating and exploring texts is essential for learning across the curriculum”

In History, teachers should:
“equip pupils to ask perceptive questions, think critically, weigh evidence, sift arguments, and develop perspective and judgement… [and] understand the methods of historical enquiry, including how evidence is used rigorously to make historical claims, and discern how and why contrasting arguments and interpretations of the past have been constructed”

Even in the USA the Common Core State Standards “stresses critical-thinking, problem-solving, and analytical skills that are required for success in college, career, and life”

I would argue that including qualitative analysis in a curriculum can tick many of these boxes, and provide a flexible way to integrate these skills into other lesson plans. For example, in History, students could be given a number of newspaper articles covering an important historical event. These may come from different countries or papers with different viewpoints, and using qualitative software students could perform comparative analysis, identifying sections of the text that show bias or contradict each other.


In an English class, students could be provided with a digital copy of a book on the reading list, and given a framework with topics to explore, encouraging them to identify metaphors, similes, or more specific issues like ‘representations of women’ or other recurring themes. If qualitative analysis software became a standard tool in schools, it could easily fit into a variety of activities, with teachers easily able to look at students’ outputs for marking and group discussion.


Finally, students of any age could be encouraged to do their own qualitative research project, surveying their peers or community on topics both topical and relevant to the curriculum. That way, children can also learn about setting research questions, bias, and presenting results, helping them better understand and critique the barrage of studies they are exposed to in the media.



The visual, colourful and interactive interface of Quirkos is very intuitive to the digital touch-screen generation: it not only looks like a game, but provides visual stimulation and feedback in the same way. Watching their bubble codes grow, and organising topics like petals in a flower, should be intuitive for children of all ages, but also fundamentally teaches them the basics of qualitative analysis: sorting and categorising data, and thinking about what different sources are saying.


We are talking to educators in the UK already about developing example lesson plans and curriculums around Quirkos and qualitative analysis. There are a lot of practical hurdles to overcome, including the plurality of different IT systems schools use, and the limited amount of time teachers get to learn and enact new methods.


But the benefits are considerable: a background in qualitative research and analysis techniques is a great transferable skill for students to take into their working life. Although there don’t seem to be many jobs outside research that make qualitative analysis experience an essential criterion, many jobs involve dealing with written text in just such a way. Few workers in office environments can get by without engaging with company or government policy documents, and in areas like HR, staff have to critically appraise (in a replicable and guided way) written documents like CVs and covering letters on a regular basis.


And it’s a frequent complaint from employers that these are exactly the kind of skills applicants are lacking:

“In survey after survey, they rate young applicants as deficient in such key workplace skills as written and oral communication, critical thinking and analytical reasoning.”

The Collegiate Learning Assessment Plus, used in the US university system, measures analytical reasoning, critical thinking, document literacy, writing and communication skills – all considered essential areas by employers from all backgrounds. A recent study found that 40% of students, even at university level, lacked proficiency in these areas.


Qualitative analysis requires students to develop all of these skills. Getting started at a young age will not only help high school students begin their academic studies, where critical reasoning will become a daily task, but also set them on the right path to employment, and to becoming engaged and informed members of society.



In vivo coding and revealing life from the text

Ged Carrol

Following on from the last blog post on creating weird and wonderful categories to code your qualitative data, I want to talk about an often overlooked way of creating coding topics – using direct quotes from participants to name codes or topics. This is sometimes called “in vivo” coding, from the Latin ‘in life’ and not to be confused with the ubiquitous qualitative analysis software ‘Nvivo’ which can be used for any type of coding, not just in vivo!

In an older article I talked about having a category for ‘key quotes’ – those beautiful moments when a respondent articulates something perfectly, and you know that quote is going to appear in an article, or even be the article title. However, with in vivo coding, a researcher will create a coding category based on a key phrase or word used by a participant. For example, someone might say ‘It felt like I was hit by a bus’ to describe their shock at an event, and rather than creating a topic/node/category/Quirk for ‘shock’, the researcher will name it ‘hit by a bus’. This is especially useful when metaphors like this are commonly used, or someone uses an especially vivid turn of phrase.

In vivo coding doesn’t just apply to metaphor or emotions, and can keep researchers close to the language that respondents themselves are using. For example when talking about how their bedroom looks, someone might talk about ‘mess’, ‘chaos’, or ‘disorganised’ and their specific choice of word may be revealing about their personality and embarrassment. It can also mitigate the tendency for a researcher to impose their own discourse and meaning onto the text.

This method is discussed in more depth in Johnny Saldaña’s book, The Coding Manual for Qualitative Researchers, which also points out how a read-through of the text to create in vivo codes can be a useful process to create a summary of each source.

Ryan and Bernard (2003) use a different terminology, indigenous categories or typologies, after Patton (1990). However, here the meaning is a little different – they are looking for unusual or unfamiliar terms which respondents use in their own subculture. A good example of these are slang terms unique to a particular group, such as drug users, surfers, or the shifting vernacular of teenagers. Again, conceptualising the lives of participants in their own words can create a more accurate interpretation, especially later down the line when you are working more exclusively with the codes.

Obviously, this method is really a type of grounded theory, letting codes and theory emerge from the data. In a way, you could consider that if in vivo coding is ‘from life’ and grows from the data, then framework coding to an existing structure is more akin to ‘in vitro’ (from glass), where codes are based on a more rigid interpretation of theory – just like the controlled laboratory conditions of in vitro research: more consistent, but less creative.

However, there are problems in trying to interpret the data in this way. Most obviously, how ubiquitous will an in vivo code from one source be across everyone’s transcripts? If someone describes a shocking event in one source as feeling like being ‘hit by a bus’ and in another as ‘the world dropped out from under me’, would we code the same text together? Both are clearly about ‘shock’ and would probably belong in the same theme, but does the different language require a slightly different interpretation? Wouldn’t you lose some of the nuance of the in vivo coding process if similar themes like these were lumped together?

The answer to all of these questions is probably ‘yes’. However, they are not insurmountable problems. In fact, Johnny Saldaña suggests that an in vivo coding process works best as a first reading of the data, creating not just a summary if read in order, but a framework from each source which should later be combined with a ‘higher’ level of second coding across all the data. So after completing in vivo coding, the researcher can go back and create grouped coding categories based around common elements (like shock) and/or conceptual theory-level codes (like long-term psychological effects) which resonate across all the sources.

This sounds like it would be a very time consuming process, but in fact multi-level coding (which I often advocate) can be very efficient, especially with an in vivo coding as the first process. It may be that you just highlight some of these key words, on paper or Word, or create a series of columns in Excel adjacent to each sentence or paragraph of source material. Since the researcher doesn’t have to ponder the best word or phrase to describe the category at this stage, creating the coding framework is quick. It’s also a great process for participatory analysis, since respondents can quickly engage with selecting juicy morsels of text.

Don’t forget, you don’t have to use an exclusively in vivo coding framework: just remember that it’s an option, and use it for key illuminating quotes alongside your other codes. Again, there is no one-size-fits-all approach for qualitative analysis, but knowing the range of methods allows you to choose the best way forward for each research question or project.

CAQDAS/QDA software makes it easy to keep all the different stages of your coding process together, and also to create new topics by splitting and merging existing codes. While the procedure will vary a little across the different qualitative analysis packages, the basics are very similar, so I’ll give a quick example of how you might do this in Quirkos.

Not a lot of people know this, but you can create a new Quirk/topic in Quirkos by dropping a section of text directly onto the create new bubble button, so this is a good way to create a lot of themes on the fly (as with in vivo coding). Just name these according to the in vivo phrase, and make sure that you highlight the whole section of relevant text for coding, so that you can easily see the context and what they are talking about.

Once you have done a full (or partial) reading and coding of your qualitative data, you can work with these codes in several ways. Perhaps the easiest is to create an umbrella (or parent) code (like shock), to which you can make relevant in vivo codes subcategories, just by dragging and dropping them onto the top node. Now, when you double click on the main node, you will see quotes from all the in vivo subcategories in one place.


qualitative research software - quirkos


It’s also possible to use the Levels feature in Quirkos to group your codes: this is especially useful when you might want to put an in vivo code into more than one higher level group. For example, the ‘hit by a bus’ code might belong in ‘shock’ but also a separate category called ‘metaphors’. You can create levels from the Quirk Properties dialogue of any Quirk, assign codes to one or more of these levels, and explore them using the query view. See this blog post for more on how to use levels in Quirkos.

It’s also possible to save a snapshot of your project at any point, and then actually merge codes together to keep them all under the same Quirk. You will lose most of the original in vivo codes this way (which is why the other options are usually better), but if you find yourself dealing with too many codes, or want to create a neat report based on a few key concepts, this can be a good way to go. Just right click on the Quirk you want to keep, and select ‘Merge Quirk with...’ to choose another topic to be absorbed into it. Don’t forget all actions in Quirkos have Undo and Redo options!

We don’t have an example dataset coded using in vivo quotes, but if you look at some of the sources from our Scottish Independence research project, you will see some great comments about politics and politicians that leap out of the page and would work great for in vivo coding. So why not try it out, and give in vivo coding a whirl with the free trial of Quirkos: affordable, flexible qualitative software that makes coding all these different approaches a breeze!