Comparing qualitative software with spreadsheet and word processor software


An article was recently posted on the excellent Digital Tools for Qualitative Research blog on how you can use standard spreadsheet software like Excel to do qualitative analysis. There are many other articles describing this kind of approach, for example from Susan Eliot or Meyer and Avery (2008). It’s also possible to use word processing software: see, for example, this presentation from Jean Scandlyn on the pros and cons of common software for analysing qualitative data.

 

For a lot of researchers, using Word or Excel seems like a good step up from doing qualitative analysis with paper and highlighters. It’s much easier to keep your data together, and you can easily correct, undo and run text searches. You also get the advantage of being able to quickly copy and paste sections from your analysis into research articles or a thesis. It’s also tempting because nearly everyone has access to either Microsoft Office or free equivalents like LibreOffice (http://www.libreoffice.org) or Google Docs, and knows how to use them. In contrast, qualitative analysis software can be difficult to get hold of: not all institutions have licences, and it can have a steep learning curve or a high upfront cost.

 

However, it is very rare that I recommend people use spreadsheets or word processing software for a qualitative research project. Obviously I have a vested interest here, but I would say the same thing even if I didn’t design qualitative analysis software for a living. I just know too many people who have started out without dedicated software and hit a brick wall.

 

 

Spreadsheet cells are not a good way to store text.


If you are going to use Excel or an equivalent, you will need to store your qualitative text data in it somehow. The most common method I have seen is to keep each quote or paragraph in a separate cell, with one column for the text. I’ve done this in a large project, and it is fiddly to copy and paste the text in the right way. You will also find yourself struggling with formatting (hint – get familiar with the different wrap text and auto column width options). It also becomes a chore to split paragraphs into smaller sections so they can be coded differently, or to merge them together. And if you have data in other formats (like audio or video), it’s not really possible to do anything meaningful with them in Excel.

 


You must master Excel to master your analysis

 

As Excel and other spreadsheets are not really designed for qualitative analysis, you need to use a bit of imagination to sort and categorise themes and sources. With separate columns for source names and your themes, this is possible (although it can get a little laborious). However, to be able to find particular quotes, themes and results from sources, you will need to properly understand how to use pivot tables and filters. These will give you some ability to manage and sort your coded data.
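
To illustrate the kind of flat layout and pivoting described here, below is a minimal sketch in Python with pandas, used purely for illustration. The column names and example quotes are hypothetical, and nothing in it is specific to Excel, Quirkos or any other package.

```python
import pandas as pd

# Hypothetical flat layout: one coded extract per row, as you might keep it in a spreadsheet.
coded = pd.DataFrame([
    {"source": "Interview 1", "theme": "Cost",   "extract": "It was just too expensive for us."},
    {"source": "Interview 1", "theme": "Access", "extract": "The clinic is an hour away by bus."},
    {"source": "Interview 2", "theme": "Cost",   "extract": "We saved up for months to afford it."},
])

# The equivalent of a pivot table: how many extracts are coded to each theme, per source.
summary = coded.pivot_table(index="source", columns="theme",
                            values="extract", aggfunc="count", fill_value=0)
print(summary)

# The equivalent of a filter: retrieve every extract coded to a single theme.
print(coded.loc[coded["theme"] == "Cost", "extract"])
```

Even in this toy example, every retrieval step is something you have to build and maintain yourself – which is exactly the overhead that dedicated software is designed to remove.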

 

It’s also a good idea to get to grips with some of the keyboard shortcuts for your spreadsheet software, as these will help take away some of the repetitive data entry you will need to do when coding extracts. There is no quick drag-and-drop way to assign text to a code, so coding will almost always be slower than using dedicated software.

 

For these reasons, although it seems easier to just use software like Excel that you already know, it can quickly become a false economy once you factor in the time required to code and to learn advanced sorting techniques.

 


Word makes coding many different themes difficult.

 

I see a lot of people (mostly students) who start out doing line-by-line coding in Word, using highlight colours to show different topics. It’s very easy to fall into this: while reading through a transcript, you highlight the bits that are obviously about one topic or another, and before you know it there is a lot of text sorted and coded into themes and you don’t want to lose your structure. Unfortunately, you have already lost it! There is no way in Word or other word processing software to look at all the text highlighted in one colour, so to review everything on one topic you have to look through the text yourself.

 

There is also a hard limit of 15 (garish) colours, which limits the number of themes you can code, and it’s not possible to code a section with more than one colour. Comments and shading (in some word-processors) can get around this, but it is still limited: there is no way to create groups or hierarchies of similar themes.

 

I get a lot of requests from people wanting to bring coded work from a word processor into Quirkos (or other qualitative software) but it is just not possible.

 


No reports or other outputs


Once you have your coded data, how do you share it, summarise it or print it out to read away from the glow of the computer? In Word or Excel this is difficult. Spreadsheets can produce summaries of quantitative data, but have very few tools that deal with text. Even getting something as simple as a word count is a pain without a lot of playing around with macros. So getting a summary of your coding framework, or seeing differences between sources, is hard.
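
As a concrete illustration of what “something as simple as a word count” involves once the data is outside the spreadsheet, here is a minimal sketch (again Python/pandas, with hypothetical column names and quotes, not part of any package):

```python
import pandas as pd

coded = pd.DataFrame({
    "source":  ["Interview 1", "Interview 1", "Interview 2"],
    "extract": ["It was just too expensive for us.",
                "The clinic is an hour away by bus.",
                "We saved up for months to afford it."],
})

# Count the words in each coded extract, then total them per source.
coded["words"] = coded["extract"].str.split().str.len()
print(coded.groupby("source")["words"].sum())
```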

 

Also, I have done large coding projects in Excel, and printing off huge sheets and long rows and columns is always a struggle. For meetings and team work, you will almost always need to get something out of a spreadsheet to share, and I have not found a way to do this neatly. Suggestions welcome!

 

 


I’m not trying to say that using Word or Excel is always a bad option; indeed, Quirkos lets you export coded data to Word or spreadsheet format to read, print and share with people who don’t have qualitative software, and to do more quantitative analysis. However, be aware that if you start your analysis in Word or Excel it is very hard to bring your codes into anything else to work on further.

 

Quirkos tries to make dedicated qualitative software as easy to learn and use as familiar spreadsheet and word processing tools, but with all the dedicated features that make qualitative analysis simpler and more enlightening. It’s also one of the most affordable packages on the market, and there is a free trial so you can see for yourself how much you gain by stepping up to real qualitative analysis software!

 

 

Making the most of bad qualitative data

 

A cardinal rule of most research projects is that things don’t always go to plan. Qualitative data collection is no different, and the variability in approaches and respondents means that there is always the potential for things to go awry. However, the typically small sample sizes can make even one or two frustrating responses difficult to stomach, since they can represent such a high proportion of the whole data set.


Sometimes interviews just don’t go well: the respondent might only give very short answers, or go off on long tangents which aren’t useful to the project. Usually the interviewer can try and facilitate these situations to get better answers, but sometimes people can just be difficult. You can see this in the transcript of the interview with ‘Julie’ in the example referendum project. Despite initially seeming very keen on the topic, perhaps because she was tired on the day, she cannot be coaxed into giving more than one- or two-word answers!


It’s disappointing when something like this happens, but it is not the end of the world. If one interview is not as verbose or complete as some of the others it can look strange, but there is probably still useful information there. And the opinions of this person are just as valid, and should be treated with the same weight. Even if there is no explanation, disagreeing with a question by just saying ‘No’ is still an insight.


You can also have people who come late to data collection sessions, or have to leave early, resulting in incomplete data. Ideally you would try and do follow-up questions with the respondent, but sometimes this is just not possible. It is up to you to decide whether it is worth including partial responses, and whether there is enough data to make inclusion and comparison worthwhile.


Also, you may sometimes come across respondents who seem to be outright lying – their statements contradict each other, they give ridiculous or obviously false answers, or they flat out refuse to answer questions. Usually I would recommend that these data sources are included, as long as there is a note of this in the source properties and a good justification for why the researcher believes the responses may not be trusted. There is usually a good reason that a respondent chooses to behave in such a way, and this can be important context for the study.


In focus group settings there can sometimes be one or two participants who derail the discussion, perhaps by being hostile to other members of the group or only wanting to talk about their pet topics rather than the questions on the table. This is another situation where practice at mediating and facilitating data collection can help, but sometimes you just have to try and extract whatever is valuable. Organising focus groups can be very time consuming, though, and uses up many potentially good respondents in one go, so getting poor data from one of the sessions can be upsetting. Don’t be afraid to go back to some of the respondents and see if they would do another smaller session, or one-on-ones, to get more of their input.


However, the most frustrating situation is when you get disappointing data from a really key informant: someone who is an important figure in the field, is well connected or has just the right experience. These interviews don’t always go to plan, especially with senior people who may not be willing to share, or who have their own agenda in how they shape the discussion. In these situations it is usually difficult to find another respondent who will have the same insight or viewpoint, so the data is tricky to replace. It’s best to leave these key interviews until you have done a few others; that way you can be confident in your research questions, and will have some experience in mediating the discussions.


Finally, there is also lost data. Dictaphones that don’t record or get lost. Files gone missing and lost passwords. Crashed computers that take all the data with them to an early and untimely grave! These things happen more often than they should, and careful planning, precautions and backups are the only way to protect against them.


But often the answer to all these problems is to collect more data! Most people using qualitative methodologies should have a certain amount of flexibility in their recruitment strategy, and should always be doing some review and analysis on each source as it is collected. This way you can quickly identify gaps or problems in the data, and make sure forthcoming data collection procedures cover everything.


So don’t leave your analysis too late, get your data into an intuitive tool like Quirkos, and see how it can bring your good and bad research data to light! We have a one month free trial, and lots of support and resources to help you make the most of the qualitative data you have. And don’t forget to share your stories of when things went wrong on Twitter using the hashtag #qualdisasters!

 

Practice projects and learning qualitative data analysis software


 

Coding and analysing qualitative data is not only time consuming, it’s a difficult interpretive exercise which, like learning a musical instrument, gets much better with practice. However, lots of students starting their first major qualitative or mixed method research project will benefit from completing a smaller project first, rather than starting by trying to learn a giant symphony. This will allow them to get used to qualitative analysis software, working with qualitative data, developing a coding framework and getting a realistic expectation of what can be done in a fixed time frame. Often people will try and learn all these aspects for the first time when they start a major project like a masters or PhD dissertation, and then struggle to get going and take the most effective approach.

 

Many scholars, including those advocating the 5-Level QDA approach, suggest learning the capabilities of the software and the qualitative methodology separately, since one can affect the other. And a great way to do this is to actually dig in and get started with a separate, smaller project. Reading textbooks and the literature can only prepare you so much (see for example this article on coding your first qualitative data), but a practical project to experiment and make mistakes in is a great preparation for the main event.

 

But what should a practice project look like? Where can I find some example qualitative data to play with? A good guideline is to take just a few sources, even just 3 or 4, from a method that is similar to the data collection you will use for your project. For example, if you are going to run focus groups, try and find some existing focus group transcripts to work with. Although this can be daunting, there are actually lots of ways to quickly find qualitative data that will make you more familiar not only with real qualitative data, but also with the analysis process and accompanying software tools. This article gives a couple of suggestions for a mini project to hone your skills!

 


News and media

A quick way to practise your basic coding skills is to do a small project using news articles. Just choose a recent (or historical) event, and collect a few articles either from different news websites or over a period of time. Looking at conflicts in how events are described can be revealing, and is good practice for developing the analytical eye you will need to examine differences between respondents in your main project. It’s easy to go to different major news websites (like the Telegraph, Daily Mail, BBC News or the NYT) and copy and paste articles into Quirkos or other qualitative analysis software. All these sites have searchable archives, so you can look for a particular topic and find older articles.

 

Set yourself a good research question (or two), and use this project to practice generating a coding framework and exploring connections and differences across sources.

 

 

Qualitative Data Archives

If you want some more involved experience, browse some of the online repositories of qualitative data. These allow you to download the complete data set from research projects large and small. Since much government (or funding board) funded research requires data to be made publicly available, there are increasing numbers of data sets available to download, which provide a great way to look at real qualitative data and practise your analysis skills. I’ll share two examples here: the first is the UK Data Archive and the second the Syracuse Qualitative Data Repository.

 

Regardless of where you are based, these resources offer an amazing variety of data types and topics. This can make your practice fun – there are data sets on some fascinating and obscure areas, so choose something interesting to you, or even something completely new and different as a challenge. You also don’t have to use all the sources from a large project – just choose three or four to start with; you can always add more later if you need extra coding experience.

 

 

Literature reviews

Actually, qualitative analysis software is a great way to get to grips with articles, books and policy documents relating to your research project. Since most people will want to do a systematic or literature review before they start a project, bringing your literature into qualitative software is a good way to learn the software while also working towards your project. While reading through your literature, you can create codes/themes to describe key points in theory or previous studies, and compare findings from different research projects.

In Quirkos it is easy to bring in PDF articles from journals or ebooks, and then you will have not only a good reference management system, but somewhere you can keep the full text of relevant articles, tagged and coded so you can find key quotes quickly for writing up. Our article here gives some good advice on using qualitative software for systematic and literature reviews.

 

 


Our own projects

Quirkos has also made two example projects freely available for learning qualitative analysis with any software package. The first, great for beginners, is a fictional project about healthy eating options for breakfast. These 6 sources are short but rich in information, so they can be fully coded in less than an hour. Secondly, we conducted a real research project on the Scottish Referendum for Independence, and 12 transcribed semi-structured interviews are available for your own practice and interpretation.

 

The advantage of these projects is that they both have fully coded project files to use as examples and comparison. It’s rare to find people sharing their coding (especially as an accessible project file), but it can be a useful guide or point of comparison for your own framework and coding decisions.

 

 

Ask around

Talk to faculty in your department and see if they have any example data sets you can use. Some academics will already have these for teaching purposes, or from other research projects they are able to share.

 

It can also be a good exercise to do a coding project with someone else. Regardless of which option you choose from the example qualitative data sources above, share the data with another student or colleague, and go and do your own coding separately. Once you are both done, meet up and compare your results – it will be really revealing to see how different people interpreted the data, how their coding framework looked, and how they found working with the software. It’s also good motivation and time management to have to work to a mutually set deadline!

 

 

The great thing about starting a small learning project is that it can be a perfect opportunity to experiment with different qualitative analysis software. It may be that you only have access to one option like NVivo, MAXQDA or ATLAS.ti at your institution, but student licences are very affordable, so they make a great option for learning qualitative analysis. All the major packages have a free trial, so you can try several (or them all!) and find out which one works best for you. Doing this with a small example project lets you practise key techniques and qualitative methods, and also think through how best to collect and structure your data for analysis.

 

Quirkos probably has the best deal for qualitative research software: for example, our student licences cost just $59 (£49 or €58) and don’t expire. Most of the other packages only give you six months or a year, but we let you use Quirkos as long as you need, so you will always be able to access your data – even after you graduate. Even academics and supervisors will find that Quirkos is much more affordable and easier to learn. Of course, there is a trial with no obligation or registration, and all our support and training materials are free as well. So make sure you make the most informed decision before you start your research, and we hope that Quirkos becomes your instrument of choice for qualitative analysis!

 

 

Looking back and looking forward to qualitative analysis in 2017


 

In the month named for Janus, it’s a good time to look back at the last year for Quirkos and qualitative analysis software and look forward to new developments for 2017.

 

It’s been a good year of growth for Quirkos: we can now boast users in more than 100 universities across the world, and we can see many more people using Quirkos within these institutions as word spreads. There is no greater compliment than when researchers recommend Quirkos to their peers, and this has been my favourite thing to see this year.

 

We were also honoured to take part in many international conferences and events, including TQR in January, ICQI in May, KWALON in August and QDR in October. Next year already has many international events on the calendar, and we hope to be in your neck of the woods soon! We have also run training workshops in many universities across the UK, and demand ensures these will continue in 2017.

 

Our decision to offer a 25% discount to researchers in developing countries has opened the door to a lot of interest and we are helping many researchers use qualitative analysis software for the first time.

 

The blog has also become a major resource for qualitative researchers, with more than 110 posts and counting now archived on the site, attracting thousands of visitors a month. In the next year we will be adding some new experimental formats and training resources to complement our methodology articles.

 

In terms of the Quirkos software itself, 2016 saw our biggest upgrade to date, v1.4, which brought huge improvements in speed for larger projects. In early 2017 we will release a minor update (v1.4.1) which will provide a few bug fixes. We are already working towards v1.5, which will be released later in the year and add some major new requested features and refinements, but keep the same simple interface and workflow that people love.

 

We also have a couple of major announcements in the next month about the future of qualitative analysis software, Quirkos and the new skills we will be bringing on board. Watch this space!

 

How Quirkos can change the way you look at your qualitative data


We always get a lot of inquiries in December from departments and projects who are thinking of spending some left-over money at the end of the financial year on a few Quirkos licences. A great early Christmas present for yourself or the team! It’s also a good long-term investment, since our licences don’t expire and can be used year after year. They are transferable to new computers, and we’ve committed to provide free updates for the current version. We don’t want the situation where different teams are using different versions, and so can’t share projects and data. Our licences are often a fraction of the cost of other qualitative software packages, but for the above reasons we think that we offer much more value than just the comparative sticker price.

 

But since Quirkos has a different ethos (accessibility) and some unique features, it also helps you approach your qualitative research data differently from other software. In the two short years that Quirkos has been available, it has come to be used by more than 100 universities across the world, as well as market research firms and public sector organisations. That has given me a lot of feedback that helps us improve the software, but also a sense of what people love most about it. So here is a list of the things I hear most about the software in workshops and e-mails.

 

It’s much more visual and colourful


Experienced researchers who have used other software are immediately struck by how colourful and visual the Quirkos approach is. The interface is built around bubbles that grow dynamically to reflect the amount of coding in each theme (or node), with colour all over the screen. For many people, the Quirkos design lets them think in colours, spatially and in layers, increasing the amount of information they can digest and work with. Since the whole screen is a live window into the data, there is less need to generate separate reports, and coding and reviewing become a constant (and addictive) feedback process.


This doesn’t appeal to everyone, so we still have a more traditional ‘tree’ list structure for the themes, which users can switch to at any time.

 

 

I can get started with my project quicker


We designed Quirkos so it could be learnt in 20 minutes for use in participatory analysis, so the learning curve is much lower than with other qualitative software. Some packages can be intimidating to the first-time user, and often have two-day training courses. All the training and support materials for Quirkos are available for free on our website, without registration. We increasingly hear that students want self-taught options, which we provide in many different formats. This means that not only can you start using Quirkos quickly, setting up and putting data into a new project is a lot quicker as well, making Quirkos useful for smaller qualitative projects which might just have a few sources.

 

 

I’m kept closer to my data



It’s not just the live growing bubbles that let researchers see themes evolve in their analysis: there is also a suite of visualisations that lets you quickly explore and play with the data. The cluster views generate instant Venn diagrams of connections and co-occurrences between themes, and the query views show side-by-side comparisons for any groups of your data you want to compare and contrast. Our mantra has been to make sure that no output is more than one click away, and this keeps users close to their data, not hidden away behind long lists and sub-menus.

 

 

It’s easier to share with others



Quirkos provides some unique options that make showing your coded qualitative data to other people easier and more accessible. The favourite feature is the Word export, which creates a standard Word document of your coded transcripts, with all the coding shown as colour coded comments and highlights. Anyone with a word processor can see the complete annotated data, and print it out to read away from the computer.


If you need a detailed summary, the reports can be created as an interactive webpage, or a PDF which anyone can open. For advanced users you can also export your data as a standard spreadsheet CSV file, or get deep into the standard SQLite database using any tool (such as http://sqlitebrowser.org/) or even a browser extension.
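
As a rough illustration of that last point, any SQLite tool (or a few lines of scripting) can open such a database and list what it contains. Below is a minimal sketch using Python’s built-in sqlite3 module; the file name is hypothetical, and no assumptions are made about the actual table names or schema inside the file.

```python
import sqlite3

# Open an exported SQLite project file (hypothetical file name).
con = sqlite3.connect("my_project.qrk")

# List the tables the database actually contains, rather than assuming a schema.
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)

con.close()
```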

 

 

I couldn’t get to grips with other qualitative software



It is very common for researchers to come along to our workshops having been to training for other qualitative analysis software, and to say they just ‘didn’t get it’ before. While very powerful, other tools can be intimidating, and unless you are using them on a regular basis, it is difficult to remember all the operations. We love how people can just come back to Quirkos after 6 months and get going again.


We also see a lot of people who tried other specialist qualitative software and found it wasn’t a good fit for them. A lot of researchers go back to paper and highlighters, or even use Word or Excel, but get excited by how intuitive Quirkos makes the analysis process.

 

 

Just the basics, but everything you need


I always try and be honest in my workshops and list the limitations of Quirkos. It can’t work with multimedia data, can’t provide quantitative statistical analysis, and has limited memo functionality at the moment. But I am always surprised at how the benefits outweigh the limitations for most people: a huge majority of qualitative researchers only work with text data, and share my belief that if quantitative statistics are needed, they should be done in dedicated software. The idea has always been to focus on the core actions that researchers do all the time (coding, searching, developing frameworks and exploring data) and make them as smooth and quick as possible.

 


If you have comments of your own, good or bad, we’d love to hear them; it’s what keeps us focused on the diverse needs of qualitative researchers.


Get in touch and we can help explain the different licence options, including ‘seat’ based licences for departments or teams, as well as the static licences which can be purchased immediately through our website. There are also discounts for buying more than 3 licences, for different sectors, and for developing countries.


Of course, we can also provide formal quotes, invoices and respond to purchase orders as your institution requires. We know that some departments take time to get things through finances, and so we can always provide extensions to the trial until the orders come through – we never want to see researchers unable to get at their data and continue their research!


So if you are thinking about buying a licence for Quirkos, you can download the full version to try for free for one month, and ask us any questions by email (sales@quirkos.com), Skype ‘quirkos’ or a good old 9-to-5 phone call on (+44) 0131 555 3736. We are here for qualitative researchers of all (coding) stripes and spots (bubbles)!

 

Snapshot data and longitudinal qualitative studies



In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make collecting new data (an expensive undertaking) a rarer event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies – they are time-limited collection events that end up capturing a snapshot of the world as it was when they were recorded.


Most qualitative research collects data as a series of one-off dives into the lives and experiences of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This might not be a specific date; it might be a key point in a participant’s journey – such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying people at different stages in their individual or collective journeys.


This is sometimes called ‘Qualitative Longitudinal Research’ (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly even over years or decades. Examples of this type of project include the five-year ‘Timescapes’ project in the UK which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).


However, such projects tend to be expensive, as they require having researchers working on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don’t have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at ‘before and after’ points – for example bookending a treatment or event – so it is frequently used in well-planned evaluations. Researchers can ask questions about expectations before the occasion, and how people felt afterwards. This is useful in a lot of user experience research, but also for understanding the motivations of actors and improving the delivery of key services.


But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn’t an obvious event distinguishing the two snapshots. It also gets people to reflect on answers they gave in the first interview, and to see if their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking – do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high quality data.


In qualitative analysis it is also an opportunity to look through all the data from a number of respondents and go back to ask new questions that are revealed by the data. In a grounded theory approach this is very valuable, but it can also be used to check the researcher’s own interpretations and hypotheses about the research topic.


There are a number of research methods which are particularly suitable for longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don’t even have to have very fixed questions: it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over medium and long periods of time.


So when planning a research project it’s worth thinking about whether your research question could be better answered with a longitudinal or multiple-contact approach. If you decide this is the case, just be aware that you may not be able to contact everyone multiple times, and if that means some people’s data can’t be used, you need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.


Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise the different sources you have by participant or time point, so that you can look at just the first or second interviews, or everything together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to run analysis on just certain data sources, and to show results and reports from everything, or just one set of data.


So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.

 

 

Archiving qualitative data: will secondary analysis become the norm?


 

Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON led initiative to create a common standard for interchange of coded data between qualitative software packages.


The eventual aim is to develop a standardised file format for qualitative data, which would not only allow data to be used in any qualitative analysis software, but also let coded qualitative data sets be made available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.


A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.


In some disciplines, secondary analysis of archival data is commonplace; think, for example, of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The ‘Digital Humanities’ movement in academia is a cross-discipline look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don’t use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data is in such a variety of different formats and formatting.


However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.

 

Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.

 

At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the opinion that they get better value from grants when the research has impact beyond one project, when the data can be reused again and again.

 

There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing ‘desk-based’ secondary analysis of someone else’s data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to make sure that they design research projects to maximise secondary analysis of their data, by providing good documentation on the collection process, research questions and detailed metadata of the sources.

 

The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of doing secondary analysis, and make sure that the most value is being gained from existing data.


However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.


I also think that it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect this to have maximum value and impact, many would be outraged to think of it sitting on the shelf unused for many years.


But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but also dates and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.

 

The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata to be worthwhile. Really, this requires a behaviour change for researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos were recorded in Atlas.ti, for example. This standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and major agreements from qualitative software developers (including Quirkos) mean that work is underway to create a standard that should eventually allow not just for the interchange of coded qualitative data, but hopefully for easy archival storage as well.


So the future of qualitative data archiving requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the field is moving towards a more open world for qualitative (and quantitative) research.


For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!

 

 

Stepping back from coding software and reading qualitative data


There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis (eg Savin-Baden and Major 2013).

 

I generally don’t agree with these statements, and other qualitative bloggers such as Christina Silver and Kristi Jackson have written responses to critics of qualitative analysis software recently. However, I want to counter this a little with a suggestion that it is also possible to be too close to your data, and in fact this is a considerable risk when using any software approach.

 

I know this is starting to sound contradictory, but it is important to strike a happy balance so you can see the wood for the trees. It’s best to have both a close, detailed reading and analysis of your data, as well as a sense of the bigger picture emerging across all your sources and themes. That was the impetus behind the design of Quirkos: the canvas view of your themes, where the size of each bubble shows the amount of data coded to it, gives you a live bird’s-eye overview of your data at all times. It’s also why we designed the cluster view, to graphically show you the connections between themes and nodes in your qualitative data analysis.

 

It is very easy to treat analysis as a close reading exercise, taking each source in turn, reading it through and assigning sections to codes or themes as you go. This is a valid first step, but only part of what should be an iterative, cyclical process. There are also lots of ways to challenge your coding strategy to keep you alert to new things coming from the data, and seeing trends in different ways.

 

However, I have a confession. I am a bit of a Luddite in some ways: I still prefer to print out and read transcripts of data from qualitative projects away from the computer. This may sound shocking coming from the director of a qualitative analysis software company, but there is something about both the physicality of reading from paper and the act of stepping away from the software that still endears paper-based reading to me. This is not just at the start of the analysis process either, but during it. I force myself to stop reading line by line, put myself in an environment where it is difficult to code, and try and read the corpus of data at a more holistic scale.
I waste a lot of trees this way (even with recycled paper), but I always return to the qualitative software with a fresh perspective and finish my coding and analysis there, having made the best of both worlds. Yes, it is time consuming to do so many readings of the data, but I think good qualitative analysis deserves this time.

 

I know I am not the only researcher who likes to work in this way, and we designed Quirkos to make this easy to do. One of the most unique and ‘wow’ features of Quirkos is how you can create a standard Word document of all the data from your project, with all the coding preserved as colour-coded highlights. This makes it easy to print out, take away and read at your leisure, while still seeing how you have defined and analysed your data so far.


 

There are also some other really useful things you can do with the Word export, like sharing your coded data with a supervisor, colleague or even some of your participants. Even if you don’t have Microsoft Office, you can use free alternatives like LibreOffice or Google Docs, so pretty much everyone can see your coded data. But my favourite way to read away from the computer is to make a mini booklet, with turnable pages – I find this much more engaging than a large stack of A4/Letter pages stapled in the top corner. If you have a duplex printer that can print on both sides of the page, generate a PDF from the Word file (just use Save As…), and even the free version of Adobe Reader has an awesome setting in Print to automatically create and format a little booklet.


 

 

I always get a fresh look at the data like this, and although I am trying not to be too micro-analytical or do a lot of coding, I am always able to scribble notes in the margin. Of course, there is nothing to stop you stepping back and doing a reading like this in the software itself, but I don’t like staring at a screen all day, and I am not disciplined enough to work on the computer and not get sucked into a little more coding. Coding can be a very satisfying and addictive process, but when the time comes to define higher-level themes in the coding framework, I need to step back and think about the bigger picture, rather than diving into creating something based on the last source or theme I looked at. It’s also important sometimes to get the flow and causality of the sources, especially when doing narrative and temporal analysis. It’s difficult to read the direction of an interview or series of stories just from looking at isolated coded snippets.

 

Of course, you can also print out a report from Quirkos, containing all the coded data, and the list of codes and their relations. This is sometimes handy as a key on the side, especially if there are codes you think you are underusing. Normally at this stage in the blog I point out how you can do this with other software as well, but actually, for such a commonly required step, I find this very hard to do in other software packages. It is very difficult to get all the ‘coding stripes’ to display properly in Nvivo text outputs, and MaxQDA has lots of options to export coded data, but not whole coded sources that I can see. Atlas.ti does better here with the Print with Margin feature, which shows stripes and code names in the margin – however this only generates a PDF file, so is not editable.

 

So download the trial of Quirkos today, and every now and then step back and make sure you don’t get too close to your qualitative data…

 

 

Problems with quantitative polling, and answers from qualitative data

 

The results of the US elections this week show a surprising trend: modern quantitative polling keeps failing to predict the outcome of major elections.

 

In the UK this is nothing new: in both the 2015 general election and the EU referendum, polling failed to predict the outcome. In 2015 the polls suggested very close levels of support for Labour and the Conservative party, but on the night the Conservatives won a significant majority. Then the polls for the referendum on leaving the EU indicated a slight preference for Remain, when voters actually voted to Leave by a narrow margin. We now have a similar situation in the States, where despite polling ahead of Donald Trump, Hillary Clinton lost in the Electoral College (while narrowly winning the popular vote). There are also recent examples of polling errors in Israel, Greece and the Scottish Independence Referendum.

 

Now, it’s fair to say that most of these polls were within the margin of error (typically 3%), so you would expect these inaccurate outcomes to happen periodically. However, there seems to be a systematic bias here, each time underestimating the support for more conservative attitudes. There is much hand-wringing about this in the press; see, for example, this declaration of failure from the New York Times. The suggestion that journalists and traditional media outlets are out of touch with most of the population may be true, but it does not explain the polling discrepancies.

 

There are many methodological problems: the number of people responding to telephone surveys is falling, perhaps not surprising considering the proliferation of nuisance calls in the UK. But for most pollsters this remains a vital way to get access to the largest group of voters: older people. In contrast, previous attempts to predict elections through social media and big data approaches have been fairly inaccurate, and will likely remain that way if social media continues to be dominated by the young.

 

However, I think there is another problem here: pollsters are not asking the right questions. Look at how terribly worded the exit poll questions are; they try to get people to put themselves in a box as quickly as possible: demographically, religiously, and politically. Then they ask a series of binary questions like “Should illegal immigrants working in the U.S. be offered legal status or deported to their home country?”, giving no opportunity for nuance. The aim is clear – just get to a neat quantifiable output that matches a candidate or hot topic.

 

There’s another question which I think, in all its many iterations, is poorly worded: who are you going to vote for? People might change which politician they support at any moment in time (including in a polling booth), but are unlikely to suddenly decide their family is not important to them. It’s often been shown that support for a candidate is not a reliable metric: people give answers influenced by the media and the researcher, and of course they can change their mind. Nor does believing a candidate is good always translate into voting for them. As we saw with Brexit, and possibly with the last US election, many people want to register a protest vote – they are not being heard or represented well, and people aren’t usually asked if this is one of the reasons they vote. It’s also very important to consider that people are often strategic voters, and are themselves influenced by the polls which are splashed everywhere. The polls have become a constant daily race of who’s ahead, possibly increasing voter fatigue and leading to complacency among supporters of whoever is ahead on the day of the finishing line. All of this makes predictions much more difficult.

 


In contrast, here are two examples of qualitative focus group data on the US election. The first is a very heavily moderated CBS group, which got very aggressive. Here, although there is a strong attempt to ask for one-word answers on candidates, what comes out is a general distrust of the whole political system. This is also reflected in the Lord Ashcroft focus groups in different American states, which also include interviews with local journalists and party leaders. When people are not asked specific policy or candidate-based questions, there is surprising agreement: everyone is sick of the political system and the election process.


This qualitative data is really no more subjective than polls based on who answers a phone on a particular day, but it provides a level of nuance lacking in the quantitative polls and mainstream debate, which helps explain why people are voting in different ways – something many are still baffled by. There are problems with this type of data as well: it is difficult to accurately summarise and report on, and complete transcripts are rarely available for scrutiny. But if you want to better gauge the mood of a nation, discussion around the water-cooler or down the pub can be a lot more illuminating, especially when as a researcher or ethnographer you are able to get out of the way and listen (as you should when collecting qualitative data in focus groups).

 

Political data doesn’t have to be focus group driven either – these group discussions are done because they are cheap, but qualitative semi-structured interviews can really let you understand key individuals that might help explain larger trends. We did this before the 2015 general election, and the results clearly predicted and explained the collapse in support for the Labour party in Scotland.

 

There has been a rush in polling to add more and more respondents to the surveys, with many reaching tens or even hundreds of thousands of people. But these give a very limited view of voter opinions, and as we’ve seen above can be heavily skewed by question wording and sampling method. It feels to me that deep qualitative conversations with a much smaller number of people from across the country would be a better way of gauging the social and political climate. It’s also important to make sure that participants have the power to set the agenda, because pollsters don’t always know what issues matter most to people. And for qualitative researchers and pollsters alike: if the right questions don’t get asked, you won’t get the right answers!

 

Don't forget to try Quirkos, the simplest and most visual way to analyse your qualitative text and mixed method data. We work for you, with a free trial and training materials, licences that don't expire and expert researcher support. Download Quirkos and try it for yourself!

 

 

 

Tips for running effective focus groups

In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly, and to ensure you get good engagement from your participants.

 


1. Make sure you have a helper!

It’s very difficult to run focus groups on your own. If you need to lay out the room, greet people, deal with refreshment requests, check recording equipment is working, start video cameras, take notes, ask questions, let in late-comers and facilitate discussion, it’s much easier with two people (or even three for larger groups). You will probably want to focus on listening to the discussion, not taking notes and problem-solving at the same time. Having another facilitator or helper around can make a lot of difference to how well the session runs, as well as how much good data is recorded from it.

 


2. Check your recording strategy

Most people will record audio and transcribe their focus groups later. You need to make sure that your recording equipment will pick up everyone in the room, and that you have a backup dictaphone and batteries! There are many more tips in this blog post. If you are planning to video the session, think it through carefully.

 

Do you have the right equipment? A phone camera might seem OK, but they usually struggle to record long sessions and are difficult to position in a way that will show everyone clearly. Cameras designed for recording gigs and band practice are actually really good for focus groups: they tend to have wide-angle lenses and good microphones, so you don’t need to record separate audio. You might also want more than one camera (in a round-table discussion, someone will always have their back to the camera). Then you will want to think about using qualitative analysis software like Transana that supports multiple video feeds.

 

You also need to make sure that video is culturally appropriate for your group (some religions and cultures don’t approve of taking images) and that it won’t make people nervous and clam up in discussion. Usually I find a dictaphone less imposing than a camera lens, but you then lose the ability to record the body language of the group. Video also makes it much easier to identify different speakers!

 


3. Consent and introductions

I always prefer to do the consent forms and participant information before the session. Faffing around with forms to sign at the start or end of the workshop takes up a lot of time best used for discussion, and rushes people through reading the project information. E-mail it to people ahead of time, so at least they can just sign on the day, or bring a completed form with them. I really feel that participants should have the option to see what they are signing up for before they agree to come to a session, so they are not made uncomfortable on the day if it doesn't sound right for them. However, make sure there is an opportunity for people to ask any questions, and state any additional preferences, privately or in public.

 


4. Food and drink

You may decide not to have refreshments at all (your venue might dictate that) but I really love having a good spread of food and drink at a focus group. It makes it feel more like a party or family occasion than an interrogation procedure, and really helps people open up.

 

While tea, coffee and biscuits/cookies might be enough for most people, I love baking and always bring something home-baked like a cake or cookies. Getting to talk about and offer food is a great icebreaker, and it also makes people feel valued when you have spent the time to make something. A key part of getting good data from a focus group is setting a congenial atmosphere, and an interesting choice of drinks or fruit can really help with this. Don’t forget to get dietary preferences ahead of time, and consider the need for vegetarian, diabetic and gluten-free options.

 


5. The venue and layout

A lot has already been said about the best way to set out a focus group discussion (see Chambers 2002), but there are a few basic things to consider. First, a round or rectangular table arrangement works best, not lecture hall-style rows: everyone should be able to see the face of everyone else. It’s also important not to have the researcher/facilitator at the head or even the centre of the table. You are not the boss of the session, merely there to guide the debate. There is already a power dynamic because you have invited people and are running the session, so try to sit yourself at the side as an observer, not the director of the session.

 

In terms of the venue, try to make sure it is as quiet as possible; good natural light and even high ceilings can help spark creative discussion (Meyers-Levy and Zhu 2007).

 


6. Set and state the norms

A common problem in qualitative focus group discussions is that some people dominate the debate, while others are shy and contribute little. Chambers (2002) suggests simply saying at the beginning of the session that this tends to happen, to make people conscious of sharing too much or too little. You can also actively manage this during the session by prompting other people to speak, going round the room person by person, or using more formal systems where people raise their hands to talk or have to be holding a stone. These methods are more time consuming for the facilitator and can stifle open discussion, so it's best to use them only when necessary.

 

You should also set out ground rules, aiming to create an open space for uncritical discussion. The aim is not usually for people to criticise the views of others, nor for the facilitator to be seen as the leader or boss. Make these things explicit at the start to ensure the right atmosphere for sharing: one where there is no right or wrong answer, and everyone has something valuable to contribute.

 


7. Exercises and energisers

To prompt better discussion when people are tired or not forthcoming, you can use exercises such as card ranking, role play, and prompts for discussion such as stories or newspaper articles. Chambers (2002) suggests dozens of these, as well as some off-the-wall 'energizer' exercises: fun games to wake people up and encourage discussion. More on this in the last blog post. It can also really help to go round the room and have people introduce themselves with a fun fact, not just to get names and voices on tape for later identification, but as a warm-up.

 

The first question, exercise or discussion point should also be easy. If the first topic is 'How did you feel when you had cancer?' that can be pretty intimidating to start with. Something much simpler, such as 'What was the hospital food like?' or even 'How was your trip here?', is a topic everyone can easily contribute to and safely argue over, gaining confidence to share something deeper later on.

 


8. Step back, and step out

In focus groups, the aim is usually to get participants to discuss with each other, not to have a series of dialogues with the facilitator. The power dynamics of the group need to reflect this, and as soon as things are set in motion the researcher should try to intervene as little as possible, occasionally asking for clarification or setting things back on track. It's also their role to help participants understand this, and to allow the group discussion to be as co-interactive as possible.

 

“When group dynamics worked well the co-participants acted as co-researchers taking the research into new and often unexpected directions and engaging in interaction which were both complementary (such as sharing common experiences) and argumentative”
- Kitzinger 1994

 


9. Anticipate depth

Focus groups usually last a long time, rarely less than two hours, but even a half or whole day of discussion can be appropriate if there are lots of complex topics to cover. It's OK to consider having participants attend multiple focus groups if there is a lot to get through; just consider what will best fit around the lives of your participants.

 

At the end of these sessions you should find you have a lot of detailed and deep qualitative data for analysis. It can really help with digesting this to make lots of notes during the session: a summary of key issues, your own reflexive comments on the process, and the unspoken subtext (who wasn't sharing on which topics, what people mean when they say 'you know, that lady with the big hair').


You may also find that qualitative analysis software like Quirkos can help pull together all the complex themes and discussions from your focus groups, and break down the mass of transcribed data you will end up with! We designed Quirkos to be very simple and easy to use, so do download and try for yourself...