Archaeologies of coding qualitative data



In the last blog post I referenced a workshop session at the International Conference of Qualitative Inquiry entitled the ‘Archaeology of Coding’. Personally, I interpreted an archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel, however, was around revisiting and iterating coding within a single analytical attempt, and this is very important too.

In qualitative analysis it is rarely sufficient to only read through and code your data once. An iterative and cyclical process is preferable, often building on and reconsidering previous rounds of coding to get to higher levels of interpretation. This is one of the ways to interpret an ‘archaeology’ of coding – like Jerusalem, where the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.

But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I allude to this more in the blog post on ‘Top down or bottom up’ coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then break these up into more specific and nuanced insights into the data.


Both of these iterative techniques are envisaged as part of a single (if long) process of coding. But what about revisiting older research projects? What happens if you get the opportunity to go back and re-examine old qualitative data and analysis?

Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.

A good example is our teaching dataset, collected after the referendum on Scottish independence a few years ago. This looked at how the debate had influenced voters’ interpretations of the different political parties, and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data in light of these events would be very interesting. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, the data would not be recent enough to show the ‘referendum fatigue’ that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain variance in voting patterns amid the changing winds of politics and policy in Scotland.


While the research questions and analysis framework devised for the original research project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, as might higher-level themes such as the extent to which people perceive a necessity for a referendum, or the value of remaining part of the EU (which was a big question if Scotland became independent). If this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.


Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even from the same participants to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and while difficult to design specifically for this purpose (Calman, Brunton, Molassiotis 2013), can be retroactively extended in some situations.

And of course, this is the real power of archaeology: when it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings, or interpretations of qualitative data. So there can be great, and often unexpected, value in revisiting some of your old data. For many people it’s something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick-to-learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?


Making the most of bad qualitative data


A cardinal rule of most research projects is that things don’t always go to plan. Qualitative data collection is no different, and the variability in approaches and respondents means there is always the potential for things to go awry. However, the typically small sample sizes can make even one or two frustrating responses difficult to stomach, since they can represent such a high proportion of the whole data set.

Sometimes interviews just don’t go well: the respondent might only give very short answers, or go off on long tangents which aren’t useful to the project. Usually the interviewer can try and facilitate these situations to get better answers, but sometimes people can just be difficult. You can see this in the transcript of the interview with ‘Julie’ in the example referendum project. Despite initially seeming very keen on the topic – perhaps she was tired on the day – she cannot be coaxed into giving more than one- or two-word answers!

It’s disappointing when something like this happens, but it is not the end of the world. If one interview is not as verbose or complete as some of the others it can look strange, but there is probably still useful information there. And the opinions of this person are just as valid, and should be treated with the same weight. Even if there is no explanation, disagreeing with a question by just saying ‘No’ is still an insight.

You can also have people who come late to data collection sessions, or have to leave early resulting in incomplete data. Ideally you would try and do follow up questions with the respondent, but sometimes this is just not possible. It is up to you to decide whether it is worth including partial responses, and if there is enough data to make inclusion and comparison worthwhile.

Also, you may sometimes come across respondents who seem to be outright lying – their statements contradict, they give ridiculous or obviously false answers, or flat out refuse to answer questions. Usually I would recommend that these data sources are included, as long as there is a note of this in the source properties and a good justification for why the researcher believes the responses may not be trusted. There is usually a good reason that a respondent chooses to behave in such a way, and this can be important context for the study.

In focus group settings there can sometimes be one or two participants who derail the discussion, perhaps by being hostile to other members of the group or only wanting to talk about their pet topics and not the questions on the table. This is another situation where practice at mediating and facilitating data collection can help, but sometimes you just have to try and extract whatever is valuable. But organising focus groups can be very time-consuming, and uses up so many potentially good respondents in one go, so poor data quality from one of the sessions can be upsetting. Don’t be afraid to go back to some of the respondents and see if they would do another, smaller session, or one-on-ones to get more of their input.

However, the most frustrating situation is when you get disappointing data from a really key informant: someone that is an important figure in the field, is well connected or has just the right experience. These interviews don’t always go to plan, especially with senior people who may not be willing to share, or have their own agenda in how they shape the discussion. In these situations it is usually difficult to find another respondent who will have the same insight or viewpoint, so the data is tricky to replace. It’s best to leave these key interviews until you have done a few others; that way you can be confident in your research questions, and will have some experience in mediating the discussions.

Finally, there is also lost data. Dictaphones that don’t record or get lost. Files gone missing and lost passwords. Crashed computers that take all the data with them to an early and untimely grave! These things happen more often than they should, and careful planning, precautions and backups are the only way to protect against these.

But often the answer to all these problems is to collect more data! Most people using qualitative methodologies should have a certain amount of flexibility in their recruitment strategy, and should always be doing some review and analysis on each source as it is collected. This way you can quickly identify gaps or problems in the data, and make sure forthcoming data collection procedures cover everything.

So don’t leave your analysis too late, get your data into an intuitive tool like Quirkos, and see how it can bring your good and bad research data to light! We have a one month free trial, and lots of support and resources to help you make the most of the qualitative data you have. And don’t forget to share your stories of when things went wrong on Twitter using the hashtag #qualdisasters!


How Quirkos can change the way you look at your qualitative data


We always get a lot of inquiries in December from departments and projects who are thinking of spending some left-over money at the end of the financial year on a few Quirkos licences. A great early Christmas present for yourself or the team! It’s also a good long-term investment, since our licences don’t expire and can be used year after year. They are transferable to new computers, and we’ve committed to provide free updates for the current version. We don’t want the situation where different teams are using different versions, and so can’t share projects and data. Our licences are often a fraction of the cost of other qualitative software packages, but for the above reasons we think that we offer much more value than just the comparative sticker price.


But since Quirkos has a different ethos (accessibility) and some unique features, it also helps you approach your qualitative research data in a different way to other software. In the two short years that Quirkos has been available, it has come to be used by more than 100 universities across the world, as well as by market research firms and public sector organisations. That has given me a lot of feedback that helps us improve the software, but also a sense of what people love most about it. So here are the things I hear most about the software in workshops and e-mails.


It’s much more visual and colourful


Experienced researchers who have used other software are immediately struck by how colourful and visual the Quirkos approach is. The interface shows growing bubbles that dynamically show the coding in each theme (or node), and colours all over the screen. For many, the Quirkos design allows people to think in colours, spatially, and in layers, improving the amount of information they can digest and work with. Since the whole screen is a live window into the data, there is less need to generate separate reports, and coding and reviewing is a constant (and addictive) feedback process.

This doesn’t appeal to everyone, so we still have a more traditional ‘tree’ list structure for the themes which users can switch between at any time.



I can get started with my project quicker

We designed Quirkos so it could be learnt in 20 minutes for use in participatory analysis, so the learning curve is much lower than other qualitative software. Some packages can be intimidating to the first-time user, and often have 2 day training courses. All the training and support materials for Quirkos are available for free on our website, without registration. We increasingly hear that students want self-taught options, which we provide in many different formats. This means that not only can you start using Quirkos quickly, setting up and putting data into a new project is a lot quicker as well, making Quirkos useful for smaller qualitative projects which might just have a few sources.



I’m kept closer to my data


It’s not just the live growing bubbles that let researchers see themes evolve in their analysis: there is a suite of visualisations that lets you quickly explore and play with the data. The cluster views generate instant Venn diagrams of connections and co-occurrences between themes, and the query views show side-by-side comparisons for any groups of your data you want to compare and contrast. Our mantra has been to make sure that no output is more than one click away, and this keeps users close to their data, not hidden away behind long lists and sub-menus.



It’s easier to share with others


Quirkos provides some unique options that make showing your coded qualitative data to other people easier and more accessible. The favourite feature is the Word export, which creates a standard Word document of your coded transcripts, with all the coding shown as colour coded comments and highlights. Anyone with a word processor can see the complete annotated data, and print it out to read away from the computer.

If you need a detailed summary, the reports can be created as an interactive webpage, or a PDF which anyone can open. Advanced users can also export their data as a standard spreadsheet CSV file, or get deep into the standard SQLite database using any compatible tool, or even a browser extension.
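Because the project file is stored as standard SQLite, it can be inspected with ordinary tooling. Here is a minimal sketch in Python – the table names inside a real Quirkos project file are not documented here, so rather than assuming a schema, the function simply enumerates whatever tables are present in any SQLite file:

```python
import sqlite3

def list_tables(db_path):
    """Return the names of all tables in a SQLite database file.

    Works on any SQLite file, including (hypothetically) a Quirkos
    project file -- we query the built-in sqlite_master catalogue
    rather than assuming any particular schema.
    """
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]
```

From there you could open individual tables in a spreadsheet, or explore them interactively in the `sqlite3` command-line shell.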



I couldn’t get to grips with other qualitative software


It is very common for researchers to come along to our workshops having been to training for other qualitative analysis software, and to say they just ‘didn’t get it’ before. While very powerful, other tools can be intimidating, and unless you are using them on a regular basis it is difficult to remember all the operations. We love how people can come back to Quirkos after 6 months and just get going again.

We also see a lot of people who tried other specialist qualitative software and found it didn’t fit for them. A lot of researchers go back to paper and highlighters, or even use Word or Excel, but get excited by how intuitive Quirkos makes the analysis process.



Just the basics, but everything you need

I always try to be honest in my workshops and list the limitations of Quirkos. It can’t work with multimedia data, can’t provide quantitative statistical analysis, and has limited memo functionality at the moment. But I am always surprised at how the benefits outweigh the limitations for most people: a huge majority of qualitative researchers only work with text data, and share my belief that if quantitative statistics are needed, they should be done in dedicated software. The idea has always been to focus on the core actions that researchers do all the time (coding, searching, developing frameworks and exploring data) and make them as smooth and quick as possible.


If you have comments of your own, good or bad, we love to hear them – it’s what keeps us focused on the diverse needs of qualitative researchers.

Get in touch and we can help explain the different licence options, including ‘seat’ based licences for departments or teams, as well as the static licences which can be purchased immediately through our website. There are also discounts for buying more than 3 licences, for different sectors, and developing countries.

Of course, we can also provide formal quotes, invoices and respond to purchase orders as your institution requires. We know that some departments take time to get things through finances, and so we can always provide extensions to the trial until the orders come through – we never want to see researchers unable to get at their data and continue their research!

So if you are thinking about buying a licence for Quirkos, you can download the full version to try for free for one month, and ask us any questions by email, Skype (‘quirkos’) or a good old 9-to-5 phone call on (+44) 0131 555 3736. We are here for qualitative researchers of all (coding) stripes and spots (bubbles)!


Snapshot data and longitudinal qualitative studies


In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make the collection of new data – an expensive undertaking – a rarer event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies – they are time-limited collection events that end up capturing a snapshot of the world as it was when they were recorded.

Most qualitative research collects data as a series of one-off dives into the lives and experience of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This might not be a fixed date; it might instead be a key point in a participant’s journey – such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying people at different stages in their individual or collective journeys.

This is sometimes called ‘Qualitative Longitudinal Research’ (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly even after years or decades. Examples of this type of project include the five-year ‘Timescapes’ project in the UK, which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).

However, such projects tend to be expensive, as they require having researchers working on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don’t have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at ‘before and after’ points – for example bookending a treatment or event – so it is often used in well-planned evaluations. Researchers can ask questions about expectations before the occasion, and how people felt afterwards. This is useful in a lot of user experience research, but also for understanding the motivations of actors, and improving the delivery of key services.

But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn’t an obvious event distinguishing the two snapshots. It also gets people to reflect on answers they gave in the first interview, and to see whether their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking – do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high-quality data.

In qualitative analysis it is also an opportunity to look through all the data from a number of respondents and go back to ask new questions that are revealed by the data. In a grounded theory approach this is very valuable, but it can also be used to check the researcher’s own interpretations and hypotheses about the research topic.

There are a number of research methods which are particularly suitable for longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don’t even have to have very fixed questions; it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over long and medium periods of time.

So when planning a research project it’s worth thinking about whether your research question could be better answered with a longitudinal or multiple-contact approach. If you decide this is the case, just be aware that you may not be able to contact everyone multiple times, and if that means some people’s data can’t be used, you need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.

Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise the different sources you have by participant or time point, so that you can look at just the first or second interviews, or everything together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to run analysis on just certain data sources, and to show results and reports from everything, or just one set of data.
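The same kind of filtering can be sketched outside the software too: if coded excerpts are exported to CSV, a few lines of Python will group them by interview wave. The column names here (‘participant’, ‘wave’, ‘text’) are hypothetical, chosen for illustration – a real export may label its columns differently:

```python
import csv
from collections import defaultdict

def excerpts_by_wave(csv_path):
    """Group coded excerpts from a CSV export by interview wave.

    Assumes hypothetical column headers 'participant', 'wave' and
    'text'; adjust the keys below to match your actual export.
    """
    waves = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Collect (participant, excerpt) pairs under each wave label
            waves[row["wave"]].append((row["participant"], row["text"]))
    return dict(waves)
```

This lets you compare what each participant said in their first and second interviews side by side, mirroring the source-grouping approach described above.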

So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.



Archiving qualitative data: will secondary analysis become the norm?



Last month, Quirkos was invited to a one-day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON-led initiative to create a common standard for the interchange of coded data between qualitative software packages.

The eventual aim is to develop a standardised file format for qualitative data, which not only allows use of data on any qualitative analysis software, but also for coded qualitative data sets to be available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.

A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.

In some disciplines, secondary analysis of archival data is commonplace – think, for example, of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The ‘Digital Humanities’ movement in academia is a cross-discipline look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don’t use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data comes in such a variety of different formats and formatting.

However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.


Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.


At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the view that they get better value from grants when the research has impact beyond one project, and when the data can be reused again and again.


There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing ‘desk-based’ secondary analysis of someone else’s data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to make sure that they design research projects to maximise secondary analysis of their data, by providing good documentation of the collection process and research questions, and detailed metadata for the sources.


The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of doing secondary analysis, and make sure that the most value is being gained from existing data.

However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.

I also think that it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect this to have maximum value and impact, many would be outraged to think of it sitting on the shelf unused for many years.

But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated, if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but date and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.


The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata for it to be worthwhile. Really, this requires a behaviour change from researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos are recorded in Atlas.ti, for example. The standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and major agreements between qualitative software developers (including Quirkos) mean that work is underway on a standard that should eventually allow not just the interchange of coded qualitative data, but hopefully easy archival storage as well.

So the future of qualitative data archiving requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the world is moving towards a more open future for qualitative (and quantitative) research.

For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!



Stepping back from coding software and reading qualitative data


There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis (eg Savin-Baden and Major 2013).


I generally don’t agree with these statements, and other qualitative bloggers such as Christina Silver and Kristi Jackson have written responses to critics of qualitative analysis software recently. However, I want to counter this a little with a suggestion that it is also possible to be too close to your data, and in fact this is a considerable risk when using any software approach.


I know this is starting to sound contradictory, but it is important to strike a happy balance so you can see the wood for the trees. It’s best to have both a close, detailed reading and analysis of your data, and a sense of the bigger picture emerging across all your sources and themes. That was the impetus behind the design of Quirkos: the canvas view of your themes, where the size of each bubble shows the amount of data coded to it, gives you a live bird’s-eye overview of your data at all times. It’s also why we designed the cluster view, to graphically show the connections between themes and nodes in your qualitative data analysis.


It is very easy to treat analysis as a close reading exercise, taking each source in turn, reading it through and assigning sections to codes or themes as you go. This is a valid first step, but only part of what should be an iterative, cyclical process. There are also lots of ways to challenge your coding strategy to keep you alert to new things coming from the data, and seeing trends in different ways.


However, I have a confession. I am a bit of a Luddite in some ways: I still prefer to print out and read transcripts of data from qualitative projects away from the computer. This may sound shocking coming from the director of a qualitative analysis software company, but for me there is something about both the physicality of reading from paper, and the process of stepping away from the analysis, that still endears paper-based reading to me. This is not just at the start of the analysis process either, but during it. I force myself to stop reading line-by-line, put myself in an environment where it is difficult to code, and try to read the corpus of data at a more holistic scale.
I waste a lot of trees this way (even with recycled paper), but I always return to the qualitative software with a fresh perspective and finish my coding and analysis there, having made the best of both worlds. Yes, it is time-consuming to have so many readings of the data, but I think good qualitative analysis deserves this time.


I know I am not the only researcher who likes to work in this way, and we designed Quirkos to make this easy to do. One of the most unique and ‘wow’ features of Quirkos is how you can create a standard Word document of all the data from your project, with all the coding preserved as colour-coded highlights. This makes it easy to print out, take away and read at your leisure, while still seeing how you have defined and analysed your data so far.

word export qualitative data


There are also some other really useful things you can do with the Word export, like sharing your coded data with a supervisor, colleague or even some of your participants. Even if you don’t have Microsoft Office, you can use free alternatives like LibreOffice or Google Docs, so pretty much everyone can see your coded data. But my favourite way to read away from the computer is to make a mini booklet, with turnable pages – I find this much more engaging than just a large stack of A4/Letter pages stapled in the top corner. If you have a duplex printer that can print on both sides of the page, generate a PDF from the Word file (just use Save As…) – even the free version of Adobe Reader has an awesome setting in Print to automatically create and format a little booklet:

word booklet



I always get a fresh look at the data like this, and although I try not to be too micro-analytical or do a lot of coding, I am always able to scribble notes in the margin. Of course, there is nothing to stop you stepping back and doing a reading like this in the software itself, but I don’t like staring at a screen all day, and I am not disciplined enough to work on the computer without getting sucked into a little more coding. Coding can be a very satisfying and addictive process, but by the time I have to define higher-level themes in the coding framework, I need to step back and think about the bigger picture, before I dive into creating something based on the last source or theme I looked at. It’s also important to get the flow and causality of the sources sometimes, especially when doing narrative and temporal analysis. It’s difficult to read the direction of an interview or series of stories just from looking at isolated coded snippets.


Of course, you can also print out a report from Quirkos, containing all the coded data and the list of codes and their relations. This is sometimes handy as a key on the side, especially if there are codes you think you are underusing. Normally at this stage in the blog I point out how you can do this with other software as well, but actually, for such a commonly required step, I find this very hard to do in other software packages. It is very difficult to get all the ‘coding stripes’ to display properly in NVivo text outputs, and MAXQDA has lots of options to export coded data, but not whole coded sources that I can see. Atlas.ti does better here with the Print with Margin feature, which shows stripes and code names in the margin – however, this only generates a PDF file, so is not editable.


So download the trial of Quirkos today, and every now and then step back and make sure you don’t get too close to your qualitative data…



Problems with quantitative polling, and answers from qualitative data


The results of the US elections this week show a surprising trend: modern quantitative polling keeps failing to predict the outcome of major elections.


In the UK this is nothing new: in both the 2015 general election and the EU referendum, polling failed to predict the outcome. In 2015 the polls suggested very close levels of support for Labour and the Conservative party, but on the night the Conservatives won a significant majority. Similarly, the polls for the referendum on leaving the EU indicated a slight preference for Remain, when voters actually voted to Leave by a narrow margin. We now have a similar situation in the States, where despite polling ahead of Donald Trump, Hillary Clinton lost the Electoral College (while winning a slight majority of the popular vote). There are also recent examples of polling errors in Israel, Greece and the Scottish Independence Referendum.


Now, it’s fair to say that most of these polls were within the margin of error (typically 3%), and so you would expect these inaccurate outcomes to happen periodically. However, there seems to be a systematic bias here, each time underestimating the support for more conservative attitudes. There is much hand-wringing about this in the press – see for example this declaration of failure from the New York Times. The suggestion that journalists and traditional media outlets are out of touch with most of the population may be true, but does not explain the polling discrepancies.
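For reference, that ~3% figure follows from the standard margin-of-error formula for a simple random sample, z·√(p(1−p)/n). A quick sketch, assuming a 95% confidence level and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion from a simple random sample of size n.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    worst case (widest interval).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1 (percentage points)
```

Note that this only covers random sampling error – it says nothing about the systematic biases (question wording, who answers the phone) discussed above, which is exactly why several polls can all miss in the same direction.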


There are many methodological problems: the number of people responding to telephone surveys is falling, which is perhaps not surprising considering the proliferation of nuisance calls in the UK. But for most pollsters this remains a vital way to reach the largest group of voters: older people. In contrast, previous attempts to predict elections through social media and big data approaches have been fairly inaccurate, and will likely remain that way if social media continues to be dominated by the young.


However, I think there is another problem here: pollsters are not asking the right questions. Look how terribly worded the exit poll questions are – they try to get people to put themselves in a box as quickly as possible: demographically, religiously, and politically. Then they ask a series of binary questions like “Should illegal immigrants working in the U.S. be offered legal status or deported to their home country?”, giving no opportunity for nuance. The aim is clear – just get to a neat quantifiable output that matches a candidate or hot topic.


There’s another question which I think, in all its many iterations, is poorly worded: who are you going to vote for? People might change whether they support a particular politician at any moment in time (including in a polling booth), but are unlikely to suddenly decide their family is not important to them. It’s often been shown that support for a candidate is not a reliable metric: people give answers influenced by the media and the researcher, and of course they can change their mind. But when you ask people questions about their beliefs, not a specific candidate, they tend to be much more accurate. It also does not always follow that a person who believes a candidate is good will vote for them. As we saw with Brexit, and possibly with the last US election, many people want to register a protest vote – they feel they are not being heard or represented well, and people aren’t usually asked if this is one of the reasons they vote. It’s also very important to consider that people are often strategic voters, and are themselves influenced by the polls which are splashed everywhere. The polls have become a constant daily race of who’s ahead, possibly increasing voter fatigue and leading to complacency among supporters of whoever is ahead on the day of the finishing line. This makes predictions much more difficult.


In contrast, here are two examples of qualitative focus group data on the US election. The first is a very heavily moderated CBS group, which got very aggressive. Here, although there is a strong attempt to ask for one-word answers on candidates, what comes out is a general distrust of the whole political system. This is also reflected in the Lord Ashcroft focus groups in different American states, which also include interviews with local journalists and party leaders. When people are not asked specific policy- or candidate-based questions, there is surprising agreement: everyone is sick of the political system and the election process.

This qualitative data is really no more subjective than polls based on who answers a phone on a particular day, but it provides a level of nuance lacking in the quantitative polls and mainstream debate, which helps explain why people are voting in different ways – something many are still baffled by. There are problems with this type of data as well: it is difficult to accurately summarise and report on, and rarely are complete transcripts available for scrutiny. But if you want to better gauge the mood of a nation, discussion around the water-cooler or down the pub can be a lot more illuminating, especially when, as a researcher or ethnographer, you are able to get out of the way and listen (as you should when collecting qualitative data in focus groups).


Political data doesn’t have to be focus group driven either – these group discussions are done because they are cheap, but qualitative semi-structured interviews can really let you understand key individuals that might help explain larger trends. We did this before the 2015 general election, and the results clearly predicted and explained the collapse in support for the Labour party in Scotland.


There has been a rush in polling to add more and more respondents to surveys, with many reaching tens or even hundreds of thousands. But these give a very limited view of voter opinions, and as we’ve seen above, can be heavily skewed by question wording and sampling method. It feels to me that deep qualitative conversations with a much smaller number of people from across the country would be a better way of gauging the social and political climate. And it’s important to make sure that participants have the power to set the agenda, because pollsters don’t always know what issues matter most to people. And for qualitative researchers and pollsters alike: if the right questions don’t get asked, you won’t get the right answers!


Don't forget to try Quirkos, the simplest and most visual way to analyse your qualitative text and mixed method data. We work for you, with a free trial and training materials, licences that don't expire and expert researcher support. Download Quirkos and try for yourself!




100 blog articles on qualitative research!

images by Paul Downey and AngMoKio


Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't get into the literature, like transcribing without getting back-ache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts – easily the length of a good-sized PhD thesis!


That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.


While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!



What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences between quantitative methods and epistemologies


What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research


Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project


Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data


Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)



Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data


Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy


10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods; this article provides some hints to make sure they go smoothly and provide rich data


Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis, requires acknowledging some caveats


Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis – find out how it can help your study


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls


Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...


Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose from, and they should be closely matched with the research objectives


Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research


Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it



Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data


Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this


6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories


Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data


Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or more general



Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs


How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data


Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool


Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, Levels allows you to create grouped categories of themes that go across multiple clustered bubbles


10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis



Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client


Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach


The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users


Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted


Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems




Analytical memos and notes in qualitative data analysis and coding

Image adapted from Frank Vincentz

There is a lot more to qualitative coding than just deciding which sections of text belong in which theme. It is a continuing, iterative and often subjective process, which can take weeks or even months. During this time, it’s almost essential to be recording your thoughts, reflecting on the process, and keeping yourself writing and thinking about the bigger picture. Writing doesn’t start after the analysis process; in qualitative research it should often precede, follow and run in parallel to an iterative interpretation.

The standard way to do this is either through a research journal (which is also vital during the data collection process) or through analytic memos. Memos create an important extra level of narrative: an interface between the participant’s data, the researcher’s interpretation and wider theory.

You can also use memos as part of a summary process, to articulate your interpretations of the data in a more concise format, or to broaden the discussion by drawing on wider theory.

It’s also a good cognitive exercise: regularly making yourself write down what you are thinking keeps you articulating your ideas, and it will make writing up at the end a lot easier! Memos can be a very flexible tool, and qualitative software can help keep these notes organised. Here are 9 different ways you might use memos as part of your work-flow for qualitative data analysis:


Surprises and intrigue
This is probably the most obvious way to use memos: noting, during your reading and coding, things that are especially interesting, challenging or significant in the data. It’s important to do more than just ‘tag’ these sections – reflect to yourself (and others) on why these sections or statements stand out.


Points where you are not sure
Another common use of memos is to record sections of the data that are ambiguous, could be interpreted in different ways, or just plain don’t fit neatly into existing codes or interpretations. But again, this should be more than just ‘flagging’ bits that need to be looked at again later – it’s important to record why the section is different: sometimes the act of having to describe the section can aid comprehension and illuminate the underlying causation.


Discussion with other researchers
Large qualitative research projects will often have multiple people coding and analysing the data. This can help to spread the workload, but also allows for a plurality of interpretations, and peer-checking of assumptions and interpretations. Thus memos are very important in a team project, as they can be used to explain why one researcher interpreted or coded sources in a certain way, and flag up ambiguous or interesting sections for discussion.


Transparency in your interpretations
Even if you are not working as part of a team, it can be useful to keep memos to explain your coding and analytical choices. This may be important to your supervisors (or viva panel) as part of a research thesis, and can be seen as good practice for sharing findings in which you are transparent about your interpretations. There are also some people with a positivist/quantitative outlook who find qualitative research difficult to trust because of the large amount of seemingly subjective interpretation. Memos which detail your decision-making process can help ‘show your working’ and justify your choices to others.


Challenging or confirming theory
This is another common use of memos: to discuss how the data either supports or challenges theory. It is unusual for respondents to neatly say something like “I don’t think my life fits with the classical structure of an Aeschylean tragedy”, should this happen to be your theoretical approach! This means you need to make these observations and higher-level interpretations yourself, and note how particular statements will influence your interpretations and conclusions. If someone says something that turns your theoretical framework on its head, note it, but also use the memos as a space to record context that might later explain this outlier. Memos like this might also help you identify patterns in the data that weren’t immediately obvious.


Questioning and critiquing the data/sources
Respondents will not always say what they mean, and sometimes there is an unspoken agenda below the surface. Depending on the analytical approach, an important role of the researcher is often to draw deeper inferences which may be implied or hinted at by the discourse. Sometimes participants will outright contradict themselves, or suggest answers which seem to be at odds with the rest of what they have shared. Memos are also a great place to note the unsaid. You can’t code data that isn’t there, but sometimes it’s really obvious that a respondent is avoiding discussing a particular issue (or person). Memos can note this observation, and discuss why topics might be uncomfortable or left out of the narrative.

Part of an iterative process
Most qualitative research does not follow a linear structure: it is iterative, and researchers go back and re-examine the data at different stages in the process. Memos should be no different – they can be analysed themselves, and should be revisited and reviewed as you go along to show changes in thought, or wider patterns that are emerging.

Record your prejudices and assumptions
There is a lot of discussion in the literature about the importance of reflexivity in qualitative research, and recognising the influence of the non-neutral researcher voice. Too often this does not go further than a short reflexivity/positionality statement, when it should really be a constantly reconsidered part of the analytical process. Memos can be used as a prompt and record of your reflexive process – how the data challenges your prejudices, or how you might be introducing bias into the interpretation of the data.

Personal thoughts and future directions
As you go through the data, you may notice interesting observations which are tangential, but might form the basis of a follow-on research project or a reinterpretation of the data. Keeping memos as you go along will allow you to draw from these again and remember what excited you about the data in the first place.



Qualitative analysis software can help with the memo process, keeping memos all in the same place and allowing you to see them together, or connected to the relevant section of data. However, most of the major software packages (Quirkos included) don’t exactly foreground the memo tools, so it is important to remember they are there and use them consistently throughout the analytical process.


Memos in Quirkos are best kept in a separate source which you edit and write your memos in. Keeping your notes like this allows you to code your memos in the same way you would your other data, and to use the source properties to include or exclude your memos in reports and outputs as needed. However, it can be a little awkward to flip between the memo and the active source, and there is currently no way to attach memos to a particular coding event. This is something we are working on for the next major release, which should help researchers keep better notes of their process as they go along. More detail on qualitative memos in Quirkos can be found in this blog post.



There is a one-month free trial of Quirkos, and it is so simple to use that you should be able to get going just by watching one of our short intro videos, or the built-in guide. We are also here to help at any stage of your process, with advice about the best way to record your analytical memos, coding frameworks or anything else. Don’t be shy, and get in touch!


References and further reading:

Chapman, Y. and Francis, K., 2008. Memoing in qualitative research. Journal of Research in Nursing, 13(1).


Gibbs, G., 2002. Writing as Analysis.

Saldaña, J., 2015. ‘Writing Analytic Memos about Narrative and Visual Data’, in The Coding Manual for Qualitative Researchers. Sage, London.



Tips for managing mixed method and participant data in Quirkos and CAQDAS software

mixed method data


Even if you are working with pure qualitative data, like interview transcripts, focus groups, diaries, research diaries or ethnography, you will probably also have some categorical data about your respondents. This might include demographic data, your own reflexive notes, context about the interview or circumstances around the data collection. This discrete or even quantitative data can be very useful in organising your data sources across a qualitative project, but it can also be used to compare groups of respondents.


It’s also common to be working with more extensive mixed data in a mixed method research project. This frequently requires triangulating survey data with in-depth interviews for context and deeper understanding. However, much survey data also includes qualitative text data in the form of open-ended questions, comments and long written answers.


This blog has looked before at how to bring in survey data from online survey platforms like SurveyMonkey, Qualtrics and LimeSurvey. It’s really easy to do this: whatever platform you are using, just export as a CSV file, which Quirkos can read and import directly. You’ll get the option to choose whether each question should be treated as discrete data, a longer qualitative answer, or even the name/identifier for each source.


But even if you haven’t collected your data using an online platform, it is quite easy to format it in a spreadsheet. I would recommend this as an option for many studies – it’s simply good data management to be able to look at all your participant data together. I often have a table of respondents’ data (password protected, of course) which contains columns for names, contact details, whether I have consent forms, as well as age, occupation and other relevant information. During data collection and recruitment, having this information neatly arranged helps me remember who I have contacted about the research project (and when), who has agreed to take part, as well as suggestions from snowball sampling for other people to contact.


Finally, a respondent ‘database’ like this can also be used to record my own notes on the respondents and data collection. If there is someone I have tried to contact many times but seems reluctant to take part, this is important to note. It can remind me when I have agreed to interview people, and keep together my own comments on how well this went. I can record which audio and transcript files contain the interview for this respondent, acting as a ‘master key’ of anonymised recordings. 


So once you have your long-form qualitative data, how best to integrate it with the rest of the participant data? Again, I’m going to give examples using Quirkos here, but similar principles will apply to many other CAQDAS/QDA software packages.


First, you could import the spreadsheet data as is, and add the transcripts later. To do this, just save your participant database as a CSV file in Excel, Google Docs, LibreOffice or your spreadsheet software of choice. You can bring the file into a blank Quirkos project using ‘Import source from CSV’ on the bottom right of the screen. The wizard on the next page will allow you to choose how you want to treat each column in the spreadsheet, and each row of data will become a new source. When you have brought in the data from the spreadsheet, you can individually add the qualitative data as the text source for each participant, copying and pasting from wherever you have the transcript data.


However, it’s also possible to just put the text into a column in the spreadsheet. It might look unmanageable in Excel when a single cell has pages of text data, but it will make for an easy one-step import into Quirkos. Now when you bring the data into Quirkos, just select the column with the text data as the ‘Question’ and the discrete data as ‘Properties’ (although they should be auto-detected like this).
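As a sketch of what that one-step import file might look like (all column names and content here are hypothetical), a participant table with one long-text transcript column is just an ordinary CSV file – the standard library handles quoting the embedded line breaks for you:

```python
import csv

# Hypothetical participant table: discrete properties plus one long-text column.
rows = [
    {"Name": "P01", "Age": "34", "Occupation": "Teacher",
     "Transcript": "Interviewer: How did you first hear about the project?\n"
                   "P01: A colleague mentioned it at work..."},
    {"Name": "P02", "Age": "57", "Occupation": "Nurse",
     "Transcript": "Interviewer: How did you first hear about the project?\n"
                   "P02: I saw a poster at the clinic..."},
]

with open("participants.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "Age", "Occupation", "Transcript"])
    writer.writeheader()  # first row becomes the column names
    writer.writerows(rows)  # each subsequent row becomes one source
```

On import, each row would become a new source, with the ‘Transcript’ column selected as the qualitative text and the other columns as source properties.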


You can also do direct data entry in Quirkos itself, and there are some features to help make this quick and relatively painless. The Properties and Values editor allows you to create categories and values to define your sources. There are also built-in values for True/False, Yes/No, options from 1–10, and Likert scales from Agree to Disagree. These let you quickly enter common types of data and select them for each source. It’s also possible to export this data later as a CSV file to bring back into spreadsheet software.


mixed method data entry in quirkos


Once your data has been coded in Quirkos, you can use tools like the query view and the comparison views to quickly see differences between groups of respondents. You can also create simple graphs and outputs of your quantitative and discrete data. Having not just demographic information, but also your notes and thoughts together, provides vital context for properly interpreting your qualitative and quantitative data.



A final reason to keep a well-maintained database of your research data is to make sure that it is properly documented for secondary analysis and future use. Should you ever want to work with the data again, or share it with another research team or the wider community, an anonymised data table like this helps ensure the research has the right metadata to be used for different lines of enquiry.



Get an overview of Quirkos and then try for yourself with our free trial, and see how it can help manage pure qualitative or mixed method research projects.




Workshop exercises for participatory qualitative analysis

participatory workshop

I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data, and this is much rarer in the literature (see Nind 2011).

However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in a blog post from last year I described how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.

When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.

My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.

Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do this work voluntarily, in parallel to the full-time, paid job of the researcher!

Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).

However, when it came to running participatory analysis workshops for our research on the Scottish Referendum Project, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to learn that it could be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that such a software package could indeed be used in this way.

I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.



Blank Sheet

The most basic, and most scary scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework and coding data. This is probably the most time-consuming and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.



Framework Creation

Here, I envisage a series of possible exercises where the focus is not on the coding of the data explicitly, but on consideration of the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. Here the process is like grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily utilise the developed framework for coding later. This could exist in several variants:

Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific sub-categories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which directions topics should be explored in detail (say Expensive food, Lack of open space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.



Coding exercises

In these three exercises, I envisage a scenario where some coding has already been completed; the focus of the session is to look at coded transcripts (on screen or in printout) and discuss how the data has been interpreted. This could take the form of:

Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.

With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.


Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!


Which approach is used, and how far the participatory process is taken, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and whether engaging participants in the research process in some way will challenge the assumptions of the research team, or lead to better results and more relevant and impactful outputs.


Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail, or in the literature!


Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!



Sharing qualitative research data from Quirkos

exporting and sharing qualitative data

Once you’ve coded, explored and analysed your qualitative data, it’s time to share it with the world. For students, the first step will be supervisors, for researchers it might be peers or the wider research community, and for market research firms, it will be their clients. Regardless of who the end user of your research is, Quirkos offers a lot of different ways to get your hard earned coding out into the real world.


Share your project file
The best, and easiest, way to share your coded data is to send your project file to someone. If they have a copy of Quirkos (even the trial) they will be able to explore the project in the same way you can, and you can work on it collaboratively. Files are compatible across Windows, Mac and Linux, and are small enough to be e-mailed, put on a USB stick, or shared via Dropbox as needed.


Word export
One of the most popular features is the Word export, which creates a standard Word file of your data, with comments and coloured highlights showing your complete coding. This means that pretty much anyone can see your coding, since the file will open in Microsoft Office, LibreOffice/OpenOffice, Google Docs, Pages (on Mac) and many others. It’s also a great way to print out your project if you prefer to read through it on paper, while still being able to see all your coding. If you print the ‘Full Markup’ view, you will still be able to see the name (and author) of the code on a black and white printer!

qualitative word export from quirkos

There are two options available in the ‘Project’ button – either ‘Export All Sources as Word Document’ which creates one long file, or ‘Export Each Source’ which creates a separate file for each source in the project in a folder you specify.


The most conventional output in Quirkos is the report: a customisable document which gives a summary of the project, and an ordered list of coded text segments. It also includes graphical views of your coding framework, including the clustered views which show the connections between themes. When generated in Quirkos, you will get a two-column preview, with a view of how the report will look on the left, and all the options for what you want to include in the report on the right.

You can print this directly, save it as a PDF document, or even save it as a webpage. This last option creates a report folder that anyone can open, explore and customise in their browser, just as you can in the Quirkos report view. The folder also contains all the images in the report (such as the canvas and overlap views), which you can then include directly in presentations or articles.

quirkos qualitative data report

There are many options available here, including the ability to list all quotes by source (i.e. everything one person said) or by theme (i.e. everything everyone said on one topic). You can change how these quotes are formatted (by making the text or highlight the colour of the Quirk) and the level of detail, such as whether to include the source name, properties and percentage of coding.


Sub-set reports (query view)
By default, the report button will generate output of the whole project. But if you want to just get responses from a sub-set of your data, you can generate reports containing only the results of filters from the query view. So you could generate a report that only shows the responses from Men or Women, or by one of the authors in the project.


CSV export
Quirkos also gives you the option to export your project as CSV files – a common spreadsheet format which you can open in Excel, SPSS or equivalents. This allows you to do more quantitative analysis in statistical software, generate graphs of your coding, and conduct more detailed sub-analysis. The CSV export creates a series of files which represent the different tables in the project database, with v_highlight.csv containing your coded quotes. Other files contain the questions and answers (in a structured project), a list of all your codes and levels, and the source properties (also called metadata).
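Because these exports are plain CSV, they are easy to process with any scripting language. As a sketch, the function below counts how many coded quotes fall under each code; note that the column names ("title" for the code name, "text" for the quote) are assumptions for illustration, so check the header row of your own v_highlight.csv export for the actual names:

```python
import csv
from collections import Counter

def quotes_per_code(path):
    """Count coded quotes per code in a highlights CSV export.

    Assumes a 'title' column naming the code -- verify against the
    header row of the real export file before using.
    """
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return Counter(row["title"] for row in reader)
```

This kind of summary could then feed into a graph or a quantitative comparison alongside your source properties.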


Database editing
For true power users, there is also the option to perform full SQL operations on your project file. Since Quirkos saves all your project data as a standard SQLite database, it’s possible to open and edit it with a number of third-party tools such as DB Browser for SQLite to perform advanced operations. You can also run standard SQL statements (SELECT ... FROM ... WHERE ...) from the sqlite3 command line to explore and edit the database. Our full manual has more details on the database structure. Hopefully, this will also allow for better integration with other qualitative analysis software in the future.
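As a minimal sketch of this kind of power-user access, the snippet below opens a project file with Python's built-in sqlite3 module and lists the tables it contains. The file name is a placeholder, and the actual table and column names should be taken from the structure documented in the manual; it's also sensible to work on a copy of the project file rather than the original:

```python
import sqlite3

# Open a Quirkos project file directly as the SQLite database it is.
# "my_project.qrk" is a placeholder name -- point this at a copy of
# your own project file.
conn = sqlite3.connect("my_project.qrk")

# Discover what tables the project database actually contains,
# before attempting any SELECT ... FROM ... WHERE queries on them.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
conn.close()
```

Inspecting `sqlite_master` (or using `PRAGMA table_info(...)`) first avoids guessing at the schema.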


If you are interested in seeing how Quirkos can help with coding and presenting your qualitative data, you can download a one-month free trial and try for yourself. Good luck with your research!


Using properties to describe your qualitative data sources

Properties and values editor in Quirkos

In Quirkos, the qualitative data you bring into the project is grouped as 'sources'. Each source might be something like an interview transcript, a news article, your own notes and memos, or even journal articles. Since it can be any source of text data, you can have a project that includes a large number of different types of source, which can be useful when putting your research together. This means that you can code things like your research questions, articles on theory, or even grey literature, and keep them in the same place as your research data.

The benefit of this approach is that you can quickly cross-reference your own research together with written articles, coding them on the same themes so you can compare them. However, there will be times when you only want to look at data from some of your sources. Perhaps you only want to look at journal articles published within a certain period, or look at respondents’ data from just one city. By using the Source Properties in Quirkos, you can do all this and more: they allow you an essentially unlimited number of ways to describe the data. You can then use the query view to see results that match one or more properties, and even make comparisons. This Properties–Query combo is the best way to examine your coded qualitative data for trends and differences.


This article will outline a few different ways that you can use the source properties, and how to get the most use out of your research data and other sources.

When you bring a data source into Quirkos, the computer doesn't know anything about it. It's good practice to describe it, using what is sometimes called 'metadata' or 'data about data'. So for example, respondent data might have values for Age, Gender, Location, Occupation, Purchasing Habits... the list is endless. Research papers and textbooks will have values like Journal Name, Publication Year, Volume, Author, Page Number etc.


Each of these categories in Quirkos is called a 'Property', and the possible data belonging to each property are called 'Values'. So for example, the Age of a respondent is a Property, and the value could be 42. Quirkos lets you have a practically unlimited number of Properties that describe all the sources in a project, and an unlimited number of Values.

The values can also be numerical (like age in years), discrete (like categories for Old, Young or 20-29) or even comments (like 'This person was uncomfortable revealing their age'). Properties can even have a mix of different data types as values.

To create properties and values in your project, click on the small 'grid' button on the top right corner of the screen. This toggles the properties view, and will show you the properties and values for the data source you are currently viewing. To look at a different source, just select it from the tabs at the bottom, or the complete list of sources in the source browser button (bottom left of the source column).

From here, you can create a new property and value with the (+) button at the bottom of the column, or use the 'Properties and Values Editor' to add lots of data at once, or to remove or edit existing values. The Editor also gives you the option of rearranging Properties and Values, and changing a Property to be 'multiple-choice' will let you assign more than one Value to each Property (for example to show that a person has multiple hobbies).

There are also a couple of features that help speed up data entry: the Properties Editor allows you to create Properties with pre-existing common values, such as 'Yes/No' options or Likert Agree–Disagree scales. To define values for a property, use the orange drop-down arrow next to each Property. When you click on this, you can see all the values that have already been defined, as well as the option to add a new value directly.

I always try to encourage people to use the properties creatively. You can use them to quickly create groups of your sources, and explore them together. So you may create a property for 'Unusual case', select Yes for those sources, and see what makes them special. There might even be something you didn't collect survey data for, but which is a clear category in the text, such as how someone voted. You can make this a Property too, and easily see who these people are and what they said. They can also be process-based properties: 'Ones I haven't coded yet' or 'Ones I need to go over again'. Use the properties as a flexible way to manage and sort your data, in any way you see fit! You can of course create and remove properties and values at any stage of your project, and don't forget to describe the 'type' of source: article, transcript, notes etc.

When you want to explore the data by property, use the Query view. This lets you set up very simple filters that will show you results of coded data that come from particular sources. You can even run two queries at once, and see the results side-by-side to compare them. While by default the [ = ] option will return sources that match the value, you can also use 'not equal' [!=] and ranges for numerical or alphabetic values (<, > etc.). It's also possible to chain many queries together with a simple interface, to create complex filters. So for example you can return results from just people between the ages of 30 and 35, who are Male, and live in France OR Germany.
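The filter logic the query view builds up can be sketched in a few lines of plain Python, using the France-or-Germany example above. The source properties here are invented for illustration, not taken from a real project:

```python
# Hypothetical source properties, one dict per source.
sources = [
    {"name": "P1", "age": 32, "gender": "Male",   "country": "France"},
    {"name": "P2", "age": 34, "gender": "Male",   "country": "Germany"},
    {"name": "P3", "age": 31, "gender": "Female", "country": "France"},
    {"name": "P4", "age": 45, "gender": "Male",   "country": "France"},
]

# Age 30-35 AND Male AND (France OR Germany):
# each clause mirrors one row of filters in the query view.
matches = [s["name"] for s in sources
           if 30 <= s["age"] <= 35
           and s["gender"] == "Male"
           and s["country"] in ("France", "Germany")]
print(matches)  # only the sources satisfying every clause
```

Each additional filter narrows the set of sources whose coded text is returned, which is exactly how chained queries behave.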


This was a quick summary of how to describe your qualitative data in Quirkos: as always you can find more information in the video guides, and ask us a question in the forum.



Our hyper-connected qualitative world

qualitative neurons and connections


We live in a world of deep qualitative data.


It’s often proposed that we are very quantitatively literate. We are exposed to numbers and statistics frequently: in news reports, at work, when driving, with fitness apps and so on. So we are actually pretty good at understanding things like percentages and fractions, and making sense of them quickly. This is one reason why people like to see graphs and numerical summaries of data in reports and presentations: it’s a near-universal language that people can quickly understand.


But I believe we are also really good at qualitative understanding.


Bohn and Short in a 2009 study estimated that “The average American consumes 100,500 words of information in a single day”, made up of conversations, TV shows, news, written articles, books… It sounds like a staggering amount of qualitative data to be exposed to, basically a whole PhD thesis every single day!


Obviously, we don’t digest and process all of this; people are extremely good at filtering this data: ignoring adverts, scanning websites to get to the articles we are interested in and skim-reading those, and of course summarising the gist of conversations with a few words and feelings. That’s why I argue that we are nearly all qualitative experts, summarising and making connections with qualitative life all the time.

And those connections are the most important thing, and a skill that socially astute humans perform so well. We can pick up on unspoken qualitative nuances when someone tells us something, and understand the context of a news article based on the author and what is being reported. Words we hear such as ‘economy’, ‘cancer’ and ‘earthquake’ are imbued with meaning for us, connecting to other things such as ‘my job’, ‘fear’ and ‘buildings’.


This neural network of meaning is a key part of our qualitative understanding of the world, and whether or not we want to challenge these associations through some type of Derridean deconstruction of the relationship between language and meaning, they form a key part of our daily prejudices and understanding of the world in which we live.


For me, a key problem with qualitative analysis is that it struggles to preserve or record these connections and lived associations. I touched on this issue of reductionism in the last blog post on structuring unstructured qualitative data, but it can be considered a major weakness of qualitative analysis software. Essentially, one removes these connected meanings from the data, and reduces each to a binary category or, at best, represents it on a scale.


Incidentally, this debate about scaling and quantifying qualitative data has been going on for at least 70 years, since Guttman, who even in this 1944 article noted that there has been ‘considerable discussion concerning the utility of such orderings’. What frustrates me at the moment is that while some qualitative analysis software can help with scaling this data, or even presenting it on a two- or three-dimensional scale by applying attributes such as weighting, it is still a crude approximation of the complex neural connections of meaning that deep qualitative data possesses.


In my experiments getting people with no formal qualitative or research experience to try qualitative analysis with Quirkos, I am always impressed at how quickly people take to it, and can start to code and assign meaning to qualitative text from articles or interviews. It’s something we do all the time, and most people don’t seem to have a problem categorising qualitative themes. However, many people soon find the activity restrictive (just like trained researchers do) and worry about how well a basic category can represent some of the more complex meanings in the data.


Perhaps one day there will be practical computers and software that ape the neural networks that make us all such good qualitative beings, and can automatically understand qualitative connections. But until then, the best way of analysing data seems to be to tap into any one of these freely available neural networks (i.e. a person) and use their lived experience in a qualitative world in partnership with a simple software tool to summarise complex data for others to digest.


After all, whatever reports and articles we create will have to compete with the other 100,000 words our readers are consuming that day!



Structuring unstructured data


The terms ‘unstructured data’ and ‘qualitative data’ are often used interchangeably, but unstructured data is becoming more commonly associated with data mining and big data approaches to text analytics. Here the comparison is drawn between databases, where we have defined fields and known values, and the loosely structured (especially to a computer) world of language, discussion and comment. A qualitative researcher lives in a realm of unstructured data: the person they might be interviewing doesn’t have a happy/sad sign above their head, so the researcher (or friend) must listen to and interpret their interactions and speech to make a categorisation based on the available evidence.

At their core, all qualitative analysis software systems are based around defining and coding: selecting a piece of text, and assigning it to a category (or categories). However, it is easy to see this process as ‘reductionist’: essentially removing a piece of data from its context, and defining it as a one-dimensional attribute. This text is about freedom. This text is about liberty. Regardless of the analytical insight of the researcher in deciding what the relevant themes should be, and then filtering each sentence into a category, the final product appears to be a series of lists of sections of text.

This process leads to difficult questions such as, is this approach still qualitative? Without the nuanced connections between complicated topics and lived experiences, can we still call something that has been reduced to a binary yes/no association qualitative? Does this remove or abstract researchers from the data? Isn't this a way of quantifying qualitative data?

While such debates are similarly multifaceted, I would usually argue that this process of structuring qualitative data does begin to categorise and quantify it, and it does remove researchers from their data. But I also think that for most analytical tasks, this is OK, if not essential! Lee and Fielding (1996) say that “coding, like linking in hypertext, is a form of data reduction, and for many qualitative researchers is an important strategy which they would use irrespective of the availability of software”. When a researcher turns a life into a one-year ethnography, or a one-hour interview, that is a form of data reduction. So is turning an audio recording into a transcript, and so is skim reading and highlighting printed versions of that text.

It’s important to keep an eye on the end game for most researchers: producing a well-evidenced, accurate summary of a complex issue. Most research, whether a formula to predict the world or a journal article describing it, is a communication exercise that (purely by the laws of entropy, if not practicality) must be briefer than the sum of its parts. Yet we should also be much more aware that we are doing this, and together with our personal reflexivity think about our methodological reflexivity, acknowledging what is being lost or given prominence in our chosen process.

Our brains are extremely good at comprehending the complex web of qualitative connections that makes up everyday life, and even for experienced researchers our intuitive insight into these processes often seems to go beyond any attempt to rationalise it. A structuralist approach to qualitative data can not only act as an aide-mémoire, but also help demonstrate our process to others, and challenge our own assumptions.

In general I would agree with Kelle (1997) that “the danger of methodological biases and distortion arising from the use of certain software packages is overemphasized in current discussions”. It’s not the tool, it’s how you use it!

Qualitative evaluations: methods, data and analysis

reports on a shelf

Evaluating programmes and projects is an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure that there are defined and measurable outcomes.


While most evaluations generally include numerical analysis, qualitative data is often used alongside the quantitative, to show the richness of project impact, and to put a human voice in the process. Especially when a project doesn’t meet targets, or have the desired level of impact, comments from project managers and service users usually give the most insight into what went wrong (or right) and why.


For smaller pilot and feasibility projects, qualitative data is often the mainstay of the evaluation, when the numerical data wouldn’t support statistical analysis, or when it is too early in a programme to measure intended impact. For example, a programme looking at obesity reduction might not be able to demonstrate a lower number of diabetes referrals at first, but qualitative insight in the first year or few months of the project might show how well messages from the project are being received, or whether targeted groups are talking about changing their behaviour. When goals like this are long term (and in public health and community interventions they often are) it’s important to continuously assess the precursors to impact – namely engagement – and this is usually best done in a qualitative way.


So, what is best practice for qualitative evaluations? Fortunately, there are some really good guides and overviews that can help teams choose the right qualitative approach. Vaterlaus and Higginbotham give a great overview of qualitative evaluation methods, while Professor Frank Vanclay talks at a wider level about qualitative evaluations, and innovative ways to capture stories. There was also a nice ‘tick-box’ style guide produced by the old Public Health Resource Unit, which can still be found at this link. Essentially, the tool suggests 10 questions that can be used to assess the quality of a qualitative-based evaluation – really useful when looking at evaluations that come from other fields or departments.


But my contention is that the appraisal tool above is best implemented as a guide for producing qualitative evaluations. If you start by considering the best approach, how you are going to demonstrate rigour, and which methods and recruitment strategies are appropriate, you’ll get a better report at the end of it. I’d like to discuss and expand on some of the questions used to assess the rigour of the qualitative work, because this is something that often worries people about qualitative research, and these steps help demystify good practice.


  1. The process: Start by planning the whole evaluation from the outset: What do you plan to do? All the rest will then fall into place.
  2. The research questions: what are they and why were these chosen? Are the questions going to give the evaluation the data it needs, and will the methods capture that correctly?
  3. Recruitment: who did you choose, and why? Who didn’t take part, and how did you find people? What gaps are there likely to be in representing the target group, and how can you compensate for this? Were there any ethical considerations, how was consent gained, and what was the relationship between the participants and the researcher(s)? Did they have any reason to be biased or not truthful?
  4. The data: how did you know that enough had been collected? (Usually when you are starting to hear the same things over and over – saturation) How was it recorded, transcribed, and was it of good quality? Were people willing to give detailed answers?
  5. Analysis: make sure you describe how it was done, and what techniques were used (such as discourse or thematic analysis). How does the report choose which quotes to reproduce, and are there contradictions reported in the data? What was the role of the researcher – should they declare a bias, and were multiple views sought in the interpretation of the data?
  6. Findings: do they meet the aims and research questions? If not, what needs to be done next time? Are there clear findings and action points, appropriate to improving the project?


Then the final step for me is the most important of all: SHARE! Don't let it end up on a dusty shelf! Evaluations are usually seen as a tedious but necessary internal process, but they can be so useful as case studies and learning tools in organisations and groups you might never have thought of. This is especially true if there are things that went wrong – help someone in another local authority avoid making the same mistakes!


At the moment the best UK repositories of evaluations are based around health and economic benefits, but that doesn’t stop you putting the report on your organisation’s website – if someone is looking for a similar project, search engines will do the legwork for you. That evaluation might save someone a lot of time and money. And it goes without saying: look for any similar work before you start a project – you might get some good ideas, and stop yourself falling into the same pitfalls!


Free materials for qualitative workshops

qualitative workshop on laptops with quirkos


We are running more and more workshops helping people learn qualitative analysis and Quirkos. I always feel that the best way to learn is by doing, and the best way to remember is through play. To this end, we have created two sources of qualitative data that anyone can download and use (with any package) to learn how to use software for qualitative data analysis.


These can be found at the workshops folder. There are two different example data sets, which are free for any training use. The first is a basic example project, comprising a set of fictional interviews with people talking about what they generally have for breakfast. This is not exactly a gripping exposé of a critical social issue, but it is short and easy to engage with, and already provides some surprises when it comes to exploring the data. The materials include individual transcribed sources of text, in a variety of formats that can be brought into Quirkos. The idea is that users can learn how to bring sources into Quirkos, create a basic coding framework, and get going on coding data.

For the impatient, there is also a 'here's one we created earlier' file, in which all the sources have been added to the project, age, gender and occupation recorded as source properties, a coding framework completed, and a good amount of coding done. This is a good starting point if someone wants to use the various tools to explore coded data and generate outputs. There is also a sample report, demonstrating what a default output looks like when generated by Quirkos, including the 'data' folder, which contains all the pictures for embedding in a report or PowerPoint presentation.


This is the example project we most frequently use in workshops. It allows us to quickly cover all the major steps in qualitative analysis with software, with a fun and easy to understand dataset. It also lets us see some connections in the data: for example, that people don't describe coffee as a healthy option, and that women, for some reason, talk about toast much more than men.


However, the breakfast example is not real qualitative data – it is short, and fictitious. So for people who come along to our more advanced analysis workshops, we are happy to now make available a much more detailed and lively dataset. We have recently completed a project on the impact of the 2014 Referendum for independence on voter opinions in Scotland. It comprises 12 semi-structured interviews with voters based in Edinburgh, covering their views on the referendum process, and how it has changed their outlook on politics and voting in the run-up to the 2015 General Election in the UK.


When we conducted these interviews, we explicitly got consent for them to be made publicly available and used for workshops after they had been transcribed and anonymised. This gives us a much deeper source of data to analyse in workshops, but also allows for anyone to download a rich set of data to use in their own time (again with any qualitative software package) to practice their analytical skills in qualitative research. You can download these interviews and further materials at this link.


We hope you will find these resources useful. Please acknowledge their origin (i.e. Quirkos), let us know if you use them in your training and learning, and get in touch if you have any feedback or suggestions.

Qualitative data in the UK Public Sector

queuing for health services


The last research project I worked on with the NIHR was a close collaboration between several universities, local authorities and NHS trusts. We were looking at evidence use by managers in the NHS, and one of the common stories we heard was how valuable information often ended up on the shelf, and not used to inform service provision or policy.

It was always a real challenge for local groups, researchers and academics to create research outputs in a digestible format so that they could be used by decision makers, who often had very short timescales and limited resources. We were also told of the importance of using examples and case studies of other trusts or departments that had successes: it’s all very well making suggestions to improve services, but most of the time the battle is getting them into practice. It’s one of the reasons we created a mini case-study guide: short one-page summaries of ‘best practice’ – places where a new approach had worked and made changes.

However, the biggest shock for me was how difficult it was to bring qualitative data into decision making. In many public sector organisations, qualitative data is seen as the poor cousin of quantitative statistics, only used when figures can’t be found, or when the interest group is too small for statistically significant findings.

So many wonderful sources of qualitative data seemed to be sitting around collecting dust: research from community organisations, consultations, and feedback from service users – data that had already been collected, and was just awaiting analysis. There was also a lack of confidence among some researchers in how to work with qualitative data, and an understandable sense that it was a very time-consuming process. At best, qualitative data was just providing quotes to illustrate reports like JSNAs, which were mostly filled with quantitative data.


A big part of the problem seemed to be how decision makers, especially those from a clinical background, were more comfortable with quantitative data. For managers used to dealing with financial information, RCTs, efficacy trials and the like, this is quite understandable: they were accustomed to seeing graphs and tests of statistical significance. But there was a real chicken-and-egg problem: because they rarely took qualitative data into account, it was rarely requested, and there was little incentive to improve qualitative analytical skills.

One group we spoke to had produced a lovely report on a health intervention for an ethnic minority group. Their in-depth qualitative interviews and focus groups had revealed exactly why the usual health promotion message wasn’t getting through, and suggested a better approach for engaging with this population. However, the first time they presented their findings to a funding board, the members were confused. The presentation was too long, had too many words, and no graphs. As one of many items on the agenda, they had to make their case in five minutes and a few slides.

So that’s just what they did. They turned all their qualitative data into a few graphs, which supported their case for an intervention in this group. Personally, I found it heart-breaking to see all that rich data end up on the cutting-room floor, but evidence is not useful unless it is acted upon. Besides, the knowledge the team had gained from this research meant that, with their budget now approved, they knew they could craft the right messages for an effective campaign.

This story was often in my mind when we were designing Quirkos – what would outputs look like that would have an impact on decision makers? It had to produce visual summaries, graphs and quotes that could be put into a PowerPoint presentation. And why couldn’t the interface itself be used to present the data? If the audience asked a question about a particular quote or group, couldn’t the presenter show it to them there and then?


Opening the door to make qualitative data easier to work with and visualise is one thing, but a whole culture of change is needed in many organisations to improve the understanding and use of qualitative data. Until this happens, many evidence based decisions are only being made on the basis of a limited style and depth of data, and valuable insights are being missed.


With the prospect of sustained and continued cuts to public services in the UK, there are fewer chances to get something right. Qualitative engagement here can tell us not only what needs to be done and how to learn from our mistakes, but how to get it right the first time.



Don't share reports with clients, share your data!

When it comes to sharing findings and insight with colleagues and clients, the procedure is usually the same. Create a written summary report, deliver the PowerPoint presentation, field any questions, repeat until everyone is happy.


But this approach tends to produce very static, uninspiring reports, and presentations that lack interaction. It often necessitates further sessions if clients or colleagues have questions that can't be directly answered, want additional clarifications, or want the data explored in a different way. And the final reports don't always have the life we'd want for them, ending up on a shelf, or buried in a bulging inbox.


But what if, rather than sharing a static report, you could actually share the whole research project with your clients? What if, rather than sending a PowerPoint deck, you could send them all of the data, and let them explore it for themselves? That way, if one of the clients is interested in results from a particular demographic group, they can see them for themselves, rather than asking for a report to be generated. If another client wants to see all the instances of negative words being used to describe their brand, they can see all the quotes in one click – and all the positive words in another.


In many situations this would seem like an ideal way to engage with clients, but it usually can't be facilitated. Sending clients a copy of all the data in the project – transcripts, nodes, themes and all – would be a huge burden for them to process. Researchers would also assume that few clients are sufficiently versed in qualitative analysis software to navigate the data themselves.


But Quirkos takes a different approach, which opens up new possibilities for sharing data with end users. As it is designed to be usable by complete novices in qualitative research, your project file, and the software interface itself, can be used as a feedback tool. Send your clients the project data in a Quirkos file, with a copy of the software that runs live from a USB stick. You can even give them an Android tablet with the data on it, which they can explore with a touch interface. They can then quickly filter the data however they like, see all the responses you've coded, or even rearrange your themes or nodes in ways that make sense for them. The research team have collected, transcribed and coded the data, but clients can get a real sense of the findings, running searches and queries to explore anything of interest to them.


And even when you are doing a presentation, while Quirkos will generate visual graphs and overviews of the data to include as static image files in PowerPoint, why not bring up Quirkos itself, and show the data as a live demonstration? You can show how themes are related, run queries for particular demographic segments, and start a really interactive discussion about the data, fielding answers to queries in real time and generating easy-to-understand graphical displays on the fly. Finally, you can generate those static PDF or Word reports to share and cement your insights, but they will have come as the result of the discussion and exploration of the project you did as collaborators.


Isn't it time you stopped sharing dry reports, and started sharing answers?