Quirkos update v1.4.1 is here!


Since Quirkos version 1.4 came out last year, we have been gathering feedback from dozens of users who have given us suggestions or reported problems and bugs. This month we are releasing a small update for Quirkos, which improves more than a dozen aspects of the software:

 

  • MacOS – Since our last version, a new version of Mac OS X (now called macOS) has been released. This caused a few minor glitches in Quirkos, and we hope we have fixed them all!
     
  • Tree view – Deleting top-level Quirkos in Tree View no longer causes crashing on Mac.
     
  • Canvas View – In the main canvas view, rearranging Quirks sometimes caused bubbles to become stuck – this has now been addressed.
     
  • Disappearing text fix – On some systems, an occasional glitch would cause the top line of text highlighted in the source column to become invisible (although it was still there, and coded correctly).
     
  • Percentage coding figures – In some circumstances, the 'Source Text Coverage' figures displayed in the bottom-right status area were wildly inaccurate, sometimes showing figures over 100%. This has been fixed; figures displayed in the source browser were not affected.
     
  • Percentage coding updates – The 'Source Text Coverage' figure is now updated more quickly when removing a Quirk or performing Undo operations, giving a more accurate live picture of how much of the project has been coded.
     
  • Incorrectly closed files – If Quirkos or the computer crashes, no data is lost, as Quirkos saves your project after each action. However, if the file had not been closed properly, a message was displayed stating that “The selected file seems to be already opened in another session. Opening file in multiple sessions may result in data inconsistency. Do you still want to open this file?”. This message is intended to make sure that the file is not being used by two users at once, which could cause problems! However, this situation was rare, and the message was causing anxiety in users who feared problems with their projects (when it was actually safe to keep working). We have improved the wording of the message to “Please check that the project file is not open in another window. If this is not the case, it is safe to continue.”
     
  • Merge – Some Quirk merge operations would remove the highlighting from the last coded section of text in the source. This has been fixed. Please note that if text is coded the same way in two merging Quirks, or the text between two coded sections overlaps, they will become one section of highlighted text in the new merged code. This means that the number of coded segments in a merged Quirk will sometimes be lower than in the two Quirks separately, but it does not mean that any sections were ignored!
     
  • Report generation – We have improved the system used to display reports generated in HTML, so they now load and display more quickly.
     
  • Improved PDF export – The PDF export of the reports has also been updated; it should now be quicker and produce smaller file sizes. Where the old reports rendered large amounts of text as uneditable images, these are now displayed as real text, which can be selected and copied.
     
  • PDF characters – Some PDF files contained non-standard formatting characters, which were incorrectly interpreted when imported into Quirkos. Although these were not noticeable, they sometimes caused CSV exports to contain many unnecessary line breaks. This has now been fixed.
     
  • Faster start-up – On most systems Quirkos should now start faster.

 

Note on printing long reports – On Windows we have noted a new issue with this release: trying to print very large reports can cause a crash on some systems. Unfortunately, this problem is due to the printing system we use on Windows, and we cannot fix it ourselves! However, printing from a PDF file works fine, so a simple workaround is to save the report as a PDF and then print from there. This also gives you more flexibility over which pages to print, custom formatting options, and the ability to see a preview. We hope this issue will be fixed for the next release...


The new version is available to download now for Mac and Windows, and you can just install over the old version. There are no compatibility problems, so once again all your projects will work in the new version. Anyone using older versions will not see any difference, but we recommend that people update as soon as possible to get the benefits above! The Linux version will be released shortly; as requested, we are moving to proper .deb packaging, which should ease the dependency issues some people had. We are also changing to a 64-bit Linux release, which for most people will mean fewer lib32 compatibility libraries need to be installed.

 

We don't charge for updates, so they are available to all our users, and even those on the free trial! We think this is the fairest way to do software: I never want users stuck on old, outdated versions of Quirkos because they (or their department) can't afford to upgrade. We continue our promise to protect forward, backward and cross-platform compatibility for Quirkos projects, so that people never lose access to their own or others' data.


We have already started work on the next major release of Quirkos, which will be version 1.5. This is going to include two major and highly requested new features. First will be the ability to merge project files.

 

I know a lot of people work as a team, especially across multiple locations, and sharing one file back and forth is a pain at the moment. We have quite an exciting solution in testing for this, which will allow projects from multiple coders, different frameworks and different sources to be merged together. We are confident that this is going to be the most powerful, but also the easiest to use, project merge function in any qualitative software package.

 

The second addition will be memos! The ability to comment and write memos and reflexive text during the analysis is a fundamental part of strong and transparent qualitative analysis, and until now users have had to use Source Properties or write in dedicated Memo Sources to achieve this in Quirkos. The next release will add dedicated functionality for many different types of commenting, and greatly improve collaborative and reflexive practice in your analysis.


Quirkos v1.5 should be released in the next six months, and will include the usual number of small tweaks to operation and workflow that get requested, so if you have any ideas, or things that are bugging you, let us know! More than half of the improvements above were requested by users, so e-mail support@quirkos.com and tell us how we can make the best software for qualitative research!

 

What next? Making the leap from coding to analysis


 

So you've spent weeks or months coding all your qualitative data. Maybe you even did it multiple times, using different frameworks and research paradigms. You've followed our introduction guides, everything is neatly (or fairly neatly) organised and inter-related, and you can generate huge reports of all your coding work. Good job! But what happens now?

 

It's a question asked by a lot of qualitative researchers: after all this bruising manual and intellectual labour, you hit a brick wall. After doing the coding, what is the next step? How do you move the analysis forward?

 

The important thing to remember is that coding is not really analysis. Coding is often a precursor to analysis, in the same way that a good filing system is a good start for doing your accounts: if everything is in the right place, the final product will come together much more easily. But coding is usually a reductive and low-level action, and it doesn't always bring you to the big picture. That's what the analysis has to do: match your data to the research questions and allow you to bring everything together. In the words of Zhang and Wildemuth, you need to look for “meanings, themes and patterns”.

 


Match up your coding to your research questions

Now is a good time to revisit the research question(s) you originally had when you started your analysis. It's easy during the coding process to get excited by unexpected but fascinating insights coming from the data. However, you usually need to reel yourself in at this stage, and explore how the coded data is illuminating the quandaries you set out to explore at the start of the project.

 

Look at the coding framework, and see which nodes or topics are going to help you answer each research question. Then you can either group these together, or start reading through the coded text by theme, probably more than once, with an eye to one research question each time. Don't forget, you can still tag and code at this stage, so you can have a category for 'Answers research question 1' and tag useful quotes there.

 

One way to do this in Quirkos is the 'Levels' function, which allows you to assign codes/themes to more than one grouping. You might have some coded categories that would help answer more than one research question: you can have a level for each research question, and Quirks/categories can belong to multiple levels as appropriate. That way, you can quickly bring up all the responses relevant to each research question, without forcing your groupings to be mutually exclusive.
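
To make the idea of non-exclusive grouping concrete, here is a minimal sketch in Python. This is purely illustrative (it is not how Quirkos implements Levels), and the code names, quotes and the quotes_for_level helper are invented for the example.

```python
# Hypothetical coded segments: (code, quote) pairs.
coded_segments = [
    ("cost of services", "It was just too expensive to keep going."),
    ("family support", "My sister helped me through the first month."),
    ("transport", "The bus fare alone put people off."),
]

# Each level (here, a research question) groups several codes,
# and a code may appear under more than one level.
levels = {
    "RQ1: barriers to access": {"cost of services", "transport"},
    "RQ2: coping strategies": {"family support", "cost of services"},
}

def quotes_for_level(level_name):
    """Collect every quote coded to any code grouped under this level."""
    codes = levels[level_name]
    return [quote for code, quote in coded_segments if code in codes]

for level in levels:
    print(level)
    for quote in quotes_for_level(level):
        print("  -", quote)
```

Because "cost of services" sits in both levels, its quotes appear under both research questions: that is the non-exclusive grouping the Levels function gives you.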

 


Analyse your coding structure!

It seems strange to effectively be analysing your analysis, but looking at the coding framework itself gets you to a higher, meta-level of analysis. You can group themes together to identify larger themes and patterns in your coding. It might also be useful to match your themes with theory, or recode them into higher-level insights. How you have coded (especially when using grounded theory or emergent coding) can reveal a lot about the data, and your clusterings and groupings, even if chosen for practical purposes, might illuminate important patterns in the data.

 

In Quirkos, you can also use the overlap view to show relationships between themes. This illustrates in a graphical chart how many times sections of text 'overlap' – in that a piece of text has been coded with both themes. So if you have simple codes like 'happy' or 'disappointed', you can see which themes have most often been coded alongside disappointment. This can sometimes quickly show surprises in the correlations, and lets you quickly explore possible significant relationships between all of your codes. However, remember that all these metrics are quantitative, so they depend on the number of times a particular theme has been coded. You need to keep reading the qualitative text to get the right context and weight, which is why Quirkos shows you all the correlating text on the right of the screen in this view.
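
The underlying idea of counting overlaps can be sketched in a few lines of Python. This is an illustration of the concept only, not Quirkos's actual code; the segment offsets and the overlaps helper are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded segments: (code, start_offset, end_offset) within one source.
segments = [
    ("happy", 0, 120),
    ("family", 80, 200),        # overlaps the 'happy' segment
    ("disappointed", 250, 400),
    ("services", 300, 380),     # overlaps the 'disappointed' segment
]

def overlaps(a, b):
    """True if two character ranges share at least one character."""
    return a[1] < b[2] and b[1] < a[2]

# Count how often each pair of codes has been applied to overlapping text.
co_occurrence = Counter()
for seg_a, seg_b in combinations(segments, 2):
    if seg_a[0] != seg_b[0] and overlaps(seg_a, seg_b):
        pair = tuple(sorted((seg_a[0], seg_b[0])))
        co_occurrence[pair] += 1

print(co_occurrence.most_common())
# e.g. [(('family', 'happy'), 1), (('disappointed', 'services'), 1)]
```

The counts are exactly the kind of quantitative metric the paragraph warns about: useful for spotting possible relationships, but always read the underlying text.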

 


 


Compare and contrast

Another good way to make your explorations more analytical is to try to identify and explain differences: in how people describe key words or experiences, what language they use, or how their opinions converge with or diverge from those of other respondents. Look back at each of the themes and see how different people are responding, and most importantly, whether you can explain the differences through demographics or different life experiences.

 

In Quirkos this process can be assisted with the query view, which allows you to see responses from particular groups of sources. So you might want to look at differences between the responses of men and women, as shown below. Quirkos provides a side-by-side view to let you read through the quotes, comparing the different responses. This is possible in other software too, but requires a little more time to get different windows set up for comparison.

 


 

Match and re-explore the literature

It's also a good time to revisit the literature. Look back at the key articles you are drawing from, and see how well your data supports or contradicts their theories or assumptions. It's a really good idea to do this throughout (not just at the end), because situating your findings in the literature is the hallmark of a well-written article or thesis, and will make clear the contribution your study has made to the field. But always look for an extra level of analysis: try to develop a hypothesis of why your research differs or comes to the same conclusions – is there something in the focus or methodology that would explain the patterns?

 


Keep asking 'Why'

Just like an inquisitive six-year-old, keep asking 'Why?'! You should have multiple levels of 'why', with qualitative explanations usually working at the individual level, then the group level, and all the way up to societal levels of causation. Think of the maxim 'Who said What, and Why?'. The coding shows the 'What'; exploring the detail and experiences of the respondents is the 'Who'; the 'Why' needs to explore not just their own reasoning, but how this connects to other actors in the system. Sometimes this causation is obvious to the respondent, especially if they were repeatedly asked 'why' in the interview! However, analysis sometimes requires a deeper, detective-style reading, getting at the motivations as well as the actions of the participants.

 


Don't panic!

Your work was not in vain. Even if you end up for some reason scrapping your coding framework and starting again, you will have become much more engaged with your data by reading it through so closely, and this will be a great help in knowing how to take the data forward. Some people even discover that coding data was not the right approach for their project, and use it very little in the final analysis. Instead they may be able to pull together important findings in their head, the time taken to code the data having made key findings pop out from the page.

 

And if things still seem stuck, take a break, step back, print out your data and try to read it from a fresh angle. Wherever possible, discuss it with others: a different perspective can come not just from other people's ideas, but from the process of having to verbally articulate what you are seeing in the data.

 


Also remember to check out Quirkos, a software tool that helps you constantly visualise your qualitative analysis and keep your eye on what is emerging from the data. It's simple to learn, affordably priced, and there is a free trial to download for Windows, Mac and Linux, so you can see for yourself if it is the right fit for your qualitative analysis journey. Good luck!

 

 

Comparing qualitative software with spreadsheet and word processor software


An article was recently posted on the excellent Digital Tools for Qualitative Research blog on how you can use standard spreadsheet software like Excel to do qualitative analysis. There are many other articles describing this kind of approach, for example by Susan Eliot or Meyer and Avery (2008). It’s possible to use word processing software as well; see for example this presentation from Jean Scandlyn on the pros and cons of common software for analysing qualitative data.

 

For a lot of researchers, using Word or Excel seems like a good step up from doing qualitative analysis with paper and highlighters. It’s much easier to keep your data together, and you can easily correct, undo and run text searches. You also get the advantage of being able to quickly copy and paste sections from your analysis into research articles or a thesis. It’s also tempting because nearly everyone has access to either Microsoft Office or free equivalents like LibreOffice (http://www.libreoffice.org) or Google Docs, and knows how to use them. In contrast, qualitative analysis software can be difficult to get hold of: not all institutions have licences, and it can have a steep learning curve or a high upfront cost.

 

However, it is very rare that I recommend people use spreadsheets or word processing software for a qualitative research project. Obviously I have a vested interest here, but I would say the same thing even if I didn’t design qualitative analysis software for a living. I just know too many people who have started out without dedicated software and hit a brick wall.

 

 

Spreadsheet cells are not a good way to store text.


If you are going to use Excel or an equivalent, you will need to store your qualitative text data in it somehow. The most common method I have seen is to keep each quote or paragraph in a separate cell in a text column. I’ve done this in a large project, and it is fiddly to copy and paste the text in the right way. You will also find yourself struggling with formatting (hint – get familiar with the different wrap-text and auto column-width options). It also becomes a chore to separate paragraphs into smaller sections to code them differently, or to merge them together. And if you have data in other formats (like audio or video), it’s not really possible to do anything meaningful with them in Excel.

 


You must master Excel to master your analysis

 

As Excel and other spreadsheets are not really designed for qualitative analysis, you need to use a bit of imagination to sort and categorise themes and sources. With separate columns for source names and your themes, this is possible (although it can get a little laborious). However, to be able to find particular quotes, themes and results from sources, you will need to properly understand how to use pivot tables and filters. This gives you some ability to manage and sort your coded data.
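
As a rough analogue of what those filters and pivot tables have to do, here is a minimal pandas sketch. The spreadsheet layout (Source / Theme / Quote columns) and the data are invented for illustration; it is not a prescribed format.

```python
import pandas as pd

# Invented layout: one coded extract per row, as you might keep it in a spreadsheet.
data = pd.DataFrame([
    {"Source": "Interview 1", "Theme": "cost",   "Quote": "It was far too expensive."},
    {"Source": "Interview 1", "Theme": "access", "Quote": "The nearest clinic is an hour away."},
    {"Source": "Interview 2", "Theme": "cost",   "Quote": "I could not afford the follow-up visits."},
])

# The 'filter' step: pull every extract coded to one theme.
cost_quotes = data[data["Theme"] == "cost"]
print(cost_quotes[["Source", "Quote"]])

# The 'pivot table' step: how many extracts per theme, per source.
summary = data.pivot_table(index="Theme", columns="Source",
                           values="Quote", aggfunc="count", fill_value=0)
print(summary)
```

Every retrieval in a spreadsheet workflow is some variation on these two operations, which is why they quickly become the bottleneck.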

 

It’s also a good idea to get to grips with some of the keyboard shortcuts for your spreadsheet software, as these will help take away some of the repetitive data entry you will need to do when coding extracts. There is no quick drag-and-drop way to assign text to a code, so coding will almost always be slower than using dedicated software.

 

For these reasons, although sticking with software like Excel that you already know seems easier, it can quickly become a false economy once you factor in the time required to code and to learn advanced sorting techniques.

 


Word makes coding many different themes difficult.

 

I see a lot of people (mostly students) who start out doing line-by-line coding in Word, using highlight colours to show different topics. It’s very easy to fall into this: while reading through a transcript, you highlight in colour the bits that are obviously about one topic or another, and before you know it there is a lot of text sorted and coded into themes, and you don’t want to lose your structure. Unfortunately, you have already lost it! There is no way in Word or other word processing software to look at all the text highlighted in one colour, so to review everything on one topic you have to look through the whole text yourself.

 

There is also a hard limit of 15 (garish) colours, which limits the number of themes you can code, and it’s not possible to code a section with more than one colour. Comments and shading (in some word-processors) can get around this, but it is still limited: there is no way to create groups or hierarchies of similar themes.

 

I get a lot of requests from people wanting to bring coded work from a word processor into Quirkos (or other qualitative software) but it is just not possible.

 


No reports or other outputs


Once you have your coded data – how do you share it, summarise it or print it out to read through away from the glow of the computer? In Word or Excel this is difficult. Spreadsheets can produce summaries of quantitative data, but have very few tools that deal with text. Even getting something as simple as a word count is a pain without a lot of playing around with macros. So getting a summary of your coding framework, or seeing differences between different sources is hard.
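
For comparison, the kind of simple text summary the post has in mind (a word count per theme) takes only a few lines outside a spreadsheet. This sketch assumes a hypothetical CSV export of coded extracts with 'Theme' and 'Quote' columns; the filename and column names are examples, not a real export format.

```python
import csv
from collections import Counter

# Hypothetical export: one coded extract per row, with 'Theme' and 'Quote' columns.
word_counts = Counter()
with open("coded_extracts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        word_counts[row["Theme"]] += len(row["Quote"].split())

for theme, words in word_counts.most_common():
    print(f"{theme}: {words} words coded")
```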

 

Also, I have done large coding projects in Excel, and printing off huge sheets with long rows and columns is always a struggle. For meetings and team work, you will almost always need to get something out of a spreadsheet to share, and I have not found a way to do this neatly. Suggestions welcome!

 

 


I’m not trying to say that using Word or Excel is always a bad option; indeed, Quirkos lets you export coded data to Word or spreadsheet format to read, print and share with people who don’t have qualitative software, and to do more quantitative analysis. However, be aware that if you start your analysis in Word or Excel, it is very hard to bring your codes into anything else to work on further.

 

Quirkos tries to make dedicated qualitative software as easy to learn and use as familiar spreadsheet and word processing tools, but with all the dedicated features that make qualitative analysis simpler and more enlightening. It’s also one of the most affordable packages on the market, and there is a free trial so you can see for yourself how much you gain by stepping up to real qualitative analysis software!

 

 

Making the most of bad qualitative data

 

A cardinal rule of most research projects is that things don’t always go to plan. Qualitative data collection is no different, and the variability in approaches and respondents means that there is always the potential for things to go awry. However, the typically small sample sizes can make even one or two frustrating responses difficult to stomach, since they can represent such a high proportion of the whole data set.


Sometimes interviews just don’t go well: the respondent might only give very short answers, or go off on long tangents which aren’t useful to the project. Usually the interviewer can try to facilitate these situations to get better answers, but sometimes people can just be difficult. You can see this in the transcript of the interview with ‘Julie’ in the example referendum project. Despite initially seeming very keen on the topic (perhaps she was tired on the day), she cannot be coaxed into giving more than one- or two-word answers!


It’s disappointing when something like this happens, but it is not the end of the world. If one interview is not as verbose or complete as some of the others it can look strange, but there is probably still useful information there. And the opinions of this person are just as valid, and should be treated with the same weight. Even if there is no explanation, disagreeing with a question by just saying ‘No’ is still an insight.


You can also have people who come late to data collection sessions, or have to leave early resulting in incomplete data. Ideally you would try and do follow up questions with the respondent, but sometimes this is just not possible. It is up to you to decide whether it is worth including partial responses, and if there is enough data to make inclusion and comparison worthwhile.


Also, you may sometimes come across respondents who seem to be outright lying – their statements contradict each other, they give ridiculous or obviously false answers, or they flat out refuse to answer questions. Usually I would recommend that these data sources are included, as long as there is a note of this in the source properties and a good justification for why the researcher believes the responses may not be trusted. There is usually a good reason that a respondent chooses to behave in such a way, and this can be important context for the study.


In focus group settings there can sometimes be one or two participants who derail the discussion, perhaps by being hostile to other members of the group, or by only wanting to talk about their pet topics rather than the questions on the table. This is another situation where practice at mediating and facilitating data collection can help, but sometimes you just have to try to extract whatever is valuable. Organising focus groups can be very time consuming, and uses up so many potentially good respondents in one go, that poor data quality from one of the sessions can be upsetting. Don’t be afraid to go back to some of the respondents and see if they would do another smaller session, or one-on-ones, to get more of their input.


However, the most frustrating situation is when you get disappointing data from a really key informant: someone that is an important figure in the field, is well connected or has just the right experience. These interviews don’t always go to plan, especially with senior people who may not be willing to share, or have their own agenda in how they shape the discussion. In these situations it is usually difficult to find another respondent who will have the same insight or viewpoint, so the data is tricky to replace. It’s best to leave these key interviews until you have done a few others; that way you can be confident in your research questions, and will have some experience in mediating the discussions.


Finally, there is also lost data. Dictaphones that don’t record or get lost. Files gone missing and lost passwords. Crashed computers that take all the data with them to an early and untimely grave! These things happen more often than they should, and careful planning, precautions and backups are the only way to protect against these.


But often the answer to all these problems is to collect more data! Most people using qualitative methodologies should have a certain amount of flexibility in their recruitment strategy, and should always be doing some review and analysis on each source as it is collected. This way you can quickly identify gaps or problems in the data, and make sure forthcoming data collection procedures cover everything.


So don’t leave your analysis too late: get your data into an intuitive tool like Quirkos, and see how it can bring your good and bad research data to light! We have a one-month free trial, and lots of support and resources to help you make the most of the qualitative data you have. And don’t forget to share your stories of when things went wrong on Twitter using the hashtag #qualdisasters!

 

Practice projects and learning qualitative data analysis software


 

Coding and analysing qualitative data is not only time consuming, it’s a difficult interpretive exercise which, like learning a musical instrument, gets much better with practice. Lots of students starting their first major qualitative or mixed-method research project will benefit from completing a smaller project first, rather than starting by trying to learn a giant symphony. This will allow them to get used to qualitative analysis software, working with qualitative data, developing a coding framework, and getting a realistic expectation of what can be done in a fixed time frame. Often people try to learn all these aspects for the first time when they start a major project like a masters or PhD dissertation, and then struggle to get going and take the most effective approach.

 

Many scholars, including those advocating the 5-Level QDA approach, suggest learning the capabilities of the software and the qualitative analysis separately, since one can affect the other. A great way to do this is to dig in and get started with a separate, smaller project. Reading textbooks and the literature can only prepare you so much (see for example this article on coding your first qualitative data), but a practical project in which to experiment and make mistakes is great preparation for the main event.

 

But what should a practice project look like? Where can I find some example qualitative data to play with? A good guideline is to take just a few sources, even just 3 or 4, from a method that is similar to the data collection you will use for your project. For example, if you are going to run focus groups, try to find some existing focus group transcripts to work with. Although this can be daunting, there are actually lots of ways to quickly find qualitative data that will make you more familiar not only with real qualitative data, but also with the analysis process and accompanying software tools. This article gives a couple of suggestions for a mini project to hone your skills!

 


News and media

A quick way to practice your basic coding skills is to do a small project using news articles. Just choose a recent (or historical) event, and collect a few articles either from different news websites or over a period of time. Looking at conflicts in how events are described can be revealing, and is good practice for developing the analytical eye you will need to examine differences between respondents in your main project. It’s easy to go to different major news websites (like the Telegraph, Daily Mail, BBC News or the NYT) and copy and paste articles into Quirkos or other qualitative analysis software. All these sites have searchable archives, so you can look for a particular topic and find older articles.

 

Set yourself a good research question (or two), and use this project to practice generating a coding framework and exploring connections and differences across sources.

 

 

Qualitative Data Archives

If you want some more involved experience, browse some of the online repositories of qualitative data. These allow you to download the complete data sets from research projects large and small. Since much government (or funding body) funded research requires data to be made publicly available, there are increasing numbers of data sets available to download, which are a great way to look at real qualitative data and practice your analysis skills. I’ll share two examples here: the first is the UK Data Archive, and the second is the Syracuse Qualitative Data Repository.

 

Regardless of where you are based, these resources offer an amazing variety of data types and topics. This can make your practice fun – there are data sets on some fascinating and obscure areas, so choose something interesting to you, or even something completely new and different as a challenge. You also don’t have to use all the sources from a large project – just choose three or four to start with; you can always add more later if you need extra coding experience.

 

 

Literature reviews

Actually, qualitative analysis software is a great way to get to grips with articles, books and policy documents relating to your research project. Since most people will want to do a systematic or literature review before they start a project, bringing your literature into qualitative software is a good way to learn the software while also working towards your project. While reading through your literature, you can create codes/themes to describe key points in theory or previous studies, and compare findings from different research projects.

In Quirkos it is easy to bring in PDF articles from journals or ebooks, and then you will have not only a good reference management system, but somewhere you can keep the full text of relevant articles, tagged and coded so you can find key quotes quickly for writing up. Our article here gives some good advice on using qualitative software for systematic and literature reviews.

 

 


Our own projects

Quirkos has also made two example projects freely available for learning qualitative analysis with any software package. The first is great for beginners: a fictional project about healthy eating options for breakfast. These 6 sources are short but rich in information, so they can be fully coded in less than an hour. Secondly, we conducted a real research project on the Scottish Referendum for Independence, and 12 transcribed semi-structured interviews are available for your own practice and interpretation.

 

The advantage of these projects is that they both have fully coded project files to use as examples and for comparison. It’s rare to find people sharing their coding (especially as an accessible project file), but it can be a useful guide or point of comparison for your own framework and coding decisions.

 


 

Ask around

Talk to faculty in your department and see if they have any example data sets you can use. Some academics will already have these for teaching purposes or taken from other research projects they are able to share.

 

It can also be a good exercise to do a coding project with someone else. Regardless of which option you choose from the example qualitative data sources above, share the data with another student or colleague, and each do your own coding separately. Once you are both done, meet up and compare your results – it will be really revealing to see how differently people interpreted the data, how their coding frameworks looked, and how they found working with the software. It’s also good motivation and time management to have to work to a mutually set deadline!

 

 

The great thing about starting a small learning project is that it can be a perfect opportunity to experiment with different qualitative analysis software. It may be that you only have access to one option like Nvivo, MAXQDA or Atlas.ti at your institution, but student licences are relatively cheap, so they are a great option for learning qualitative analysis. All the major packages have a free trial, so you can try several (or all of them!) and find out which one works best for you. Doing this with a small example project lets you practice key techniques and qualitative methods, and also think through how best to collect and structure your data for analysis.

 

Quirkos probably has the best deal for qualitative research software: for example, our student licences cost just $59 (£49 or €58) and don’t expire. Most of the other packages only give you six months or a year, but we let you use Quirkos as long as you need, so you will always be able to access your data – even after you graduate. Academics and supervisors will also find that Quirkos is much more affordable and easier to learn. Of course, there is a no-obligation, no-registration trial, and all our support and training materials are free as well. So make sure you make the most informed decision before you start your research, and we hope that Quirkos becomes your instrument of choice for qualitative analysis!

 

 

Looking back and looking forward to qualitative analysis in 2017


 

In the month named for Janus, it’s a good time to look back at the last year for Quirkos and qualitative analysis software and look forward to new developments for 2017.

 

It’s been a good year of growth for Quirkos: we can now boast users in more than 100 universities across the world, and we can see how many more people are using Quirkos within these institutions as the word spreads. There is no greater compliment than when researchers recommend Quirkos to their peers, and this has been my favourite thing to see this year.

 

We were also honoured to take part in many international conferences and events, including TQR in January, ICQI in May, KWALON in August and QDR in October. Next year already has many international events on the calendar, and we hope to be in your neck of the woods soon! We have also run training workshops in many universities across the UK, and demand ensures these will continue in 2017.

 

Our decision to offer a 25% discount to researchers in developing countries has opened the door to a lot of interest and we are helping many researchers use qualitative analysis software for the first time.

 

The blog has also become a major resource for qualitative researchers, with more than 110 posts and counting now archived on the site, attracting thousands of visitors a month. In the next year we will be adding some new experimental formats and training resources to complement our methodology articles.

 

In terms of the Quirkos software itself, 2016 saw our most significant upgrade to date, v1.4, which brought huge improvements in speed for larger projects. In early 2017 we will release a minor update (v1.4.1) which will provide a few bug fixes. We are already working towards v1.5, which will be released later in the year and will add some major new requested features and refinements, while keeping the same simple interface and workflow that people love.

 

We also have a couple of major announcements in the next month about the future of qualitative analysis software, Quirkos and the new skills we will be bringing on board. Watch this space!

 

How Quirkos can change the way you look at your qualitative data


We always get a lot of inquiries in December from departments and projects who are thinking of spending some left-over money at the end of the financial year on a few Quirkos licences. A great early Christmas present for yourself or the team! It’s also a good long-term investment, since our licences don’t expire and can be used year after year. They are transferable to new computers, and we’ve committed to providing free updates for the current version. We don’t want a situation where different teams are using different versions and so can’t share projects and data. Our licences are often a fraction of the cost of other qualitative software packages, but for the above reasons we think we offer much more value than just the comparative sticker price.

 

But since Quirkos also has a different ethos (accessibility) and some unique features, it helps you approach your qualitative research data in a different way to other software. In the two short years that Quirkos has been available, it has come to be used by more than 100 universities across the world, as well as market research firms and public sector organisations. That has given me a lot of feedback that helps us improve the software, but also a sense of what people love most about it. So the following is a list of the things I hear most about the software in workshops and e-mails.

 

It’s much more visual and colourful


Experienced researchers who have used other software are immediately struck by how colourful and visual the Quirkos approach is. The interface shows growing bubbles that dynamically represent the coding in each theme (or node), with colour all over the screen. For many, the Quirkos design allows people to think in colours, spatially and in layers, increasing the amount of information they can digest and work with. Since the whole screen is a live window into the data, there is less need to generate separate reports, and coding and reviewing becomes a constant (and addictive) feedback process.


This doesn’t appeal to everyone, so we still have a more traditional ‘tree’ list structure for the themes which users can switch between at any time.

 

 

I can get started with my project quicker


We designed Quirkos so it could be learnt in 20 minutes for use in participatory analysis, so the learning curve is much lower than that of other qualitative software. Some packages can be intimidating to the first-time user, and often come with two-day training courses. All the training and support materials for Quirkos are available for free on our website, without registration. We increasingly hear that students want self-taught options, which we provide in many different formats. This means that not only can you start using Quirkos quickly, but setting up and putting data into a new project is a lot quicker as well, making Quirkos useful for smaller qualitative projects which might only have a few sources.

 

 

I’m kept closer to my data



It’s not just the live, growing bubbles that let researchers see themes evolve in their analysis; there is also a suite of visualisations that let you quickly explore and play with the data. The cluster views generate instant Venn diagrams of connections and co-occurrences between themes, and the query views show side-by-side comparisons for any groups of your data you want to compare and contrast. Our mantra has been to make sure that no output is more than one click away, and this keeps users close to their data, not hidden away behind long lists and sub-menus.

 

 

It’s easier to share with others



Quirkos provides some unique options that make showing your coded qualitative data to other people easier and more accessible. The favourite feature is the Word export, which creates a standard Word document of your coded transcripts, with all the coding shown as colour coded comments and highlights. Anyone with a word processor can see the complete annotated data, and print it out to read away from the computer.


If you need a detailed summary, reports can be created as an interactive webpage, or as a PDF which anyone can open. Advanced users can also export their data as a standard spreadsheet CSV file, or get deep into the standard SQLite database using any tool (such as http://sqlitebrowser.org/) or even a browser extension.
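
Because the project file is a standard SQLite database, any SQLite client or library can open it. As a minimal sketch, the snippet below only lists whatever tables are present and their row counts, since the actual table names depend on Quirkos's own schema; the filename and extension are assumptions for illustration.

```python
import sqlite3

# Open a Quirkos project file as a plain SQLite database.
# The filename/extension is an example; table names depend on Quirkos's schema,
# so this sketch just inspects whatever tables the file contains.
conn = sqlite3.connect("my_project.qrk")
cur = conn.cursor()

cur.execute("SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")
for (table,) in cur.fetchall():
    count = cur.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]
    print(f"{table}: {count} rows")

conn.close()
```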

 

 

I couldn’t get to grips with other qualitative software



It is very common for researchers to come along to our workshops having been to training for other qualitative analysis software, saying they just ‘didn’t get it’ before. While very powerful, other tools can be intimidating, and unless you use them on a regular basis it is difficult to remember all the operations. We love how people can come back to Quirkos after 6 months and just get going again.


We also see a lot of people who tried other specialist qualitative software and found it wasn’t a good fit for them. A lot of researchers go back to paper and highlighters, or even use Word or Excel, but get excited by how intuitive Quirkos makes the analysis process.

 

 

Just the basics, but everything you need


I always try to be honest in my workshops and list the limitations of Quirkos. It can’t work with multimedia data, can’t provide quantitative statistical analysis, and has limited memo functionality at the moment. But I am always surprised at how the benefits outweigh the limitations for most people: a huge majority of qualitative researchers only work with text data, and share my belief that if quantitative statistics are needed, they should be done in dedicated software. The idea has always been to focus on the core actions that researchers do all the time (coding, searching, developing frameworks and exploring data) and make them as smooth and quick as possible.

 


If you have comments of your own, good or bad, we’d love to hear them; it’s what keeps us focused on the diverse needs of qualitative researchers.


Get in touch and we can help explain the different licence options, including ‘seat’-based licences for departments or teams, as well as the static licences which can be purchased immediately through our website. There are also discounts for buying more than 3 licences, for different sectors, and for developing countries.


Of course, we can also provide formal quotes, invoices and respond to purchase orders as your institution requires. We know that some departments take time to get things through finances, and so we can always provide extensions to the trial until the orders come through – we never want to see researchers unable to get at their data and continue their research!


So if you are thinking about buying a licence for Quirkos, you can download the full version to try for free for one month, and ask us any questions by email (sales@quirkos.com), Skype ‘quirkos’ or a good old 9-to-5 phone call on (+44) 0131 555 3736. We are here for qualitative researchers of all (coding) stripes and spots (bubbles)!

 

Snapshot data and longitudinal qualitative studies



In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make collecting new data a rarer, and more expensive, event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies – they are time-limited collection events that end up capturing a snapshot of the world as it was when they were recorded.


Most qualitative research collects data as a series of one-off dives into the lives and experience of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This need not be a particular date; it might also be a key point in a participant’s journey – such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying people at different stages in their individual or collective journeys.


This is sometimes called ‘Qualitative Longitudinal Research’ (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly even over years or decades. Examples of this type of project include the five-year ‘Timescapes’ project in the UK, which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).


However, such projects tend to be expensive, as they require researchers to work on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don’t have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at ‘before and after’ points, for example bookending a treatment or event, so it is often used in well-planned evaluations. Researchers can ask questions about expectations before the occasion, and about how people felt afterwards. This is useful in a lot of user experience research, but also for understanding the motivations of actors and improving the delivery of key services.


But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn’t an obvious event distinguishing the two snapshots. It also gets people to reflect on the answers they gave in the first interview, and to consider whether their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking – do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high-quality data.


In qualitative analysis it is also an opportunity to look through all the data from a number of respondents and go back to ask new questions that are revealed by the data. In a grounded theory approach this is very valuable, but it can also be used to check the researcher’s own interpretations and hypotheses about the research topic.


There are a number of research methods which are particularly suited to longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don’t even have to have very fixed questions; it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over medium and long periods of time.


So when planning a research project it’s worth thinking about whether your research question could be better answered with a longitudinal or multiple-contact approach. If you decide this is the case, just be aware of the possibility that you won’t be able to contact everyone multiple times, and if that means some people’s data can’t be used, you need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.


Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise the different sources you have by participant or time point, so that you can look at just the 1st or 2nd interviews, or all of them together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to do analysis on just certain data sources, and to show results and reports from everything, or just one set of data.


So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.

 

 

Archiving qualitative data: will secondary analysis become the norm?


 

Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON led initiative to create a common standard for interchange of coded data between qualitative software packages.


The eventual aim is to develop a standardised file format for qualitative data, which not only allows use of data on any qualitative analysis software, but also for coded qualitative data sets to be available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.


A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.


In some disciplines, secondary analysis of archival data is commonplace; think for example of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The ‘Digital Humanities’ movement in academia is a cross-disciplinary look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don’t use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data comes in such a variety of different formats and formatting.


However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.

 

Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.

 

At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the opinion that they get better value from grants when the research has impact beyond one project, when the data can be reused again and again.

 

There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing ‘desk-based’ secondary analysis of someone else’s data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to make sure that they design research projects to maximise secondary analysis of their data, by providing good documentation of the collection process and research questions, and detailed metadata for the sources.

 

The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of doing secondary analysis, and make sure that the most value is being gained from existing data.


However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.


I also think that it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect this to have maximum value and impact, many would be outraged to think of it sitting on the shelf unused for many years.


But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated, if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but date and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.

 

The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata for it to be worthwhile. Really, this requires a behaviour change from researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos are recorded in Atlas.ti, for example. This standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and major agreements from qualitative software developers (including Quirkos) mean that work is underway to create a standard that should eventually allow not just for the interchange of coded qualitative data, but hopefully for easy archival storage as well.


So the future of archived qualitative data requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the field is moving towards a more open world for qualitative (and quantitative) research.


For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!

 

 

Stepping back from coding software and reading qualitative data

printing and reading qualitative data

There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis (eg Savin-Baden and Major 2013).

 

I generally don’t agree with these statements, and other qualitative bloggers such as Christina Silver and Kristi Jackson have recently written responses to critics of qualitative analysis software. However, I want to temper this a little by suggesting that it is also possible to be too close to your data, and in fact this is a considerable risk when using any software approach.

 

I know this is starting to sound contradictory, but it is important to strike a happy balance so you can still see the wood for the trees. It’s best to have both a close, detailed reading and analysis of your data, and a sense of the bigger picture emerging across all your sources and themes. That was the impetus behind the design of Quirkos: the canvas view of your themes, where the size of each bubble shows the amount of data coded to it, gives you a live bird’s-eye overview of your data at all times. It’s also why we designed the cluster view, which graphically shows you the connections between themes and nodes in your qualitative data analysis.

 

It is very easy to treat analysis as a close-reading exercise: taking each source in turn, reading it through and assigning sections to codes or themes as you go. This is a valid first step, but only part of what should be an iterative, cyclical process. There are also lots of ways to challenge your coding strategy, to keep you alert to new things emerging from the data and to help you see trends in different ways.

 

However, I have a confession. I am a bit of a Luddite in some ways: I still prefer to print out transcripts from qualitative projects and read them away from the computer. This may sound shocking coming from the director of a qualitative analysis software company, but for me there is something about both the physicality of reading from paper and the act of stepping away from the screen that still endears paper-based reading to me. This is not just at the start of the analysis process either, but during it. I force myself to stop reading line by line, put myself in an environment where it is difficult to code, and try to read the corpus of data at a more holistic scale.
I waste a lot of trees this way (even with recycled paper), but I always return to the qualitative software with a fresh perspective and finish my coding and analysis there, having made the best of both worlds. Yes, it is time-consuming to do so many readings of the data, but I think good qualitative analysis deserves this time.

 

I know I am not the only researcher who likes to work this way, and we designed Quirkos to make it easy to do. One of the most distinctive ‘wow’ features of Quirkos is that you can create a standard Word document of all the data in your project, with all the coding preserved as colour-coded highlights. This makes it easy to print out, take away and read at your leisure, while still seeing how you have defined and analysed your data so far.

word export qualitative data

 

There are also some other really useful things you can do with the Word export, like sharing your coded data with a supervisor, colleague or even some of your participants. Even if you don’t have Microsoft Office, you can use free alternatives like LibreOffice or Google Docs, so pretty much everyone can see your coded data. But my favourite way to read away from the computer is to make a mini booklet with turnable pages – I find this much more engaging than a large stack of A4/Letter pages stapled in the top corner. If you have a duplex printer that can print on both sides of the page, generate a PDF from the Word file (just use Save As…); even the free version of Adobe Reader has a setting in its Print dialogue that will automatically create and format a little booklet:

word booklet
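If you would rather script the ‘Save As’ step (or you don’t have Word at all), here is a minimal sketch that converts the exported Word file to a PDF from Python. It assumes LibreOffice is installed and its ‘soffice’ command is on your PATH, and the filename is just a placeholder for your own Quirkos export; the booklet layout itself is still done in the print dialogue afterwards.

```python
# Minimal sketch: convert the exported Word file to PDF from the command line,
# using LibreOffice's headless converter instead of Save As... in Word.
# Assumes LibreOffice is installed and "soffice" is on your PATH; the filename
# below is just a placeholder for your own Quirkos Word export.
import subprocess
from pathlib import Path

def docx_to_pdf(docx_path: str, out_dir: str = ".") -> Path:
    """Convert a .docx file to PDF and return the path of the new file."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf", "--outdir", out_dir, docx_path],
        check=True,
    )
    return Path(out_dir) / (Path(docx_path).stem + ".pdf")

if __name__ == "__main__":
    pdf = docx_to_pdf("coded_project_export.docx")
    print(f"Wrote {pdf} - now print it with the booklet setting in Adobe Reader.")
```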

 

 

I always get a fresh look at the data like this, and although I try not to be too micro-analytical or do a lot of coding, I am always able to scribble notes in the margin. Of course, there is nothing to stop you stepping back and doing a reading like this in the software itself, but I don’t like staring at a screen all day, and I am not disciplined enough to work on the computer without getting sucked into a little more coding. Coding can be a very satisfying and addictive process, but when the time comes to define higher-level themes in the coding framework, I need to step back and think about the bigger picture before I dive into creating something based on the last source or theme I looked at. It’s also important to get a sense of the flow and causality of the sources, especially when doing narrative or temporal analysis. It’s difficult to read the direction of an interview or series of stories just from looking at isolated coded snippets.

 

Of course, you can also print out a report from Quirkos containing all the coded data, plus the list of codes and their relations. This is sometimes handy as a key to keep on the side, especially if there are codes you think you are underusing. Normally at this stage in the blog I point out how you can do this with other software as well, but for such a commonly required step, I actually find it very hard to do in other packages. It is very difficult to get all the ‘coding stripes’ to display properly in NVivo text outputs, and while MaxQDA has lots of options for exporting coded data, I cannot see a way to export whole coded sources. Atlas.ti does better here with its Print with Margin feature, which shows stripes and code names in the margin – however, this only generates a PDF file, so it is not editable.

 

So download the trial of Quirkos today, and every now and then step back and make sure you don’t get too close to your qualitative data…