Word clouds and word frequency analysis in qualitative data

[Word cloud visualising this blog post, generated in Quirkos]

 

What’s this blog post about? Well, it’s visualised in the graphic above!

 

In the latest update for Quirkos, we have added a new and much-requested feature: word clouds! I'm sure you've used these pretty tools before: they show a random arrangement of all the words in a source of text, where the size of each word is proportional to the number of times it appears in the text. There are several free online tools that will generate word clouds for you, Wordle.net being one of the first and most popular.

 

These visualisations are fun, and can be a quick way to give an overview of what your respondents are talking about. They also can reveal some surprises in the data that prompt further investigation. However, there are also some limitations to tools based on word frequency analysis, and these tend to be the reason that you rarely see word clouds used in academic papers. They are a nice start, but no replacement for good, deep qualitative analysis!

 

We've put together some tips for making sure your word clouds present meaningful information, and also some cautions about how they work and their limitations.

 


1. Tweak your stop list!

As these tools count every word in the data, results would normally be dominated by the basic words that occur most often: 'the', 'of', 'and' and similar small and usually meaningless words. To make sure these don't swamp the data, most tools have a list of 'stop' words which are ignored when displaying the word cloud. That way, more interesting words should be the largest. However, there is always a great deal of variation in what these common words are. They differ greatly between verbal and written language, for example (just think how often people might say 'like' or 'um' in speech but not in a typed answer). Each language will also need a corresponding stop list!

 

So Quirkos (and many other tools) offer ways to add or remove words from the stop list when you generate a word cloud. By default, Quirkos takes the 50 most frequent words from the verbal and written British National Corpus, but 50 is actually a very small stop list. You will still get very common words like 'think' and 'she', which might be useful to certain projects looking at expressions of opinion or depictions of gender. So it's a good idea to look at the word cloud, and remove words that aren't important to you by adding them to the stop list. Just make sure you record what has been removed for writing up, and what your justification was for excluding it!
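To make the mechanics concrete, here is a minimal Python sketch of how a word frequency tool might apply a stop list. The stop words and the `word_frequencies` helper are purely illustrative assumptions, not the BNC-derived list or the code Quirkos actually uses:

```python
from collections import Counter
import re

# Illustrative stop list only – a real tool would use something like
# the 50 most frequent words from the British National Corpus.
STOP_WORDS = {"the", "of", "and", "a", "to", "i", "it", "was"}

def word_frequencies(text, extra_stops=()):
    """Count words in `text`, ignoring anything on the stop list."""
    stops = STOP_WORDS | set(extra_stops)
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stops)

# Removing project-specific words ('think', 'she') changes what dominates:
freqs = word_frequencies(
    "I think the interview was scary, and she said it was scary too.",
    extra_stops=["think", "she"],
)
print(freqs.most_common(2))  # [('scary', 2), ('interview', 1)]
```

Re-running the same text with a different `extra_stops` list is an easy way to see how sensitive a word cloud is to the stop list, which is exactly why recording your exclusions matters.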

 


2. There is no weighting or significance

Since word frequency tools just count the occurrence of each word (one point for each utterance), they really only show one thing: how often a word was said. This sounds obvious, but it gives no indication of how important the use of a word was in each instance. So if one person says 'it was a little scary', another says 'it was horrifyingly scary' and another 'it was not scary', the corresponding word count has no context or weight. This can be deceptive in something like a word cloud, where the examples above count the negative (not scary) and the minor (little scary) the same way, and 'scary' could look like a significant trend. So remember to always go back and read the data carefully to understand why specific words are being used.
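The limitation is easy to demonstrate with those three hypothetical responses. In the sketch below, a raw count treats the negated and qualified uses of 'scary' exactly the same as the emphatic one:

```python
from collections import Counter
import re

responses = [
    "it was a little scary",
    "it was horrifyingly scary",
    "it was not scary",
]

# A frequency count awards one point per occurrence, so the negated
# ('not scary') and minor ('little scary') uses weigh the same as
# the emphatic ('horrifyingly scary') one.
counts = Counter(
    word
    for response in responses
    for word in re.findall(r"[a-z]+", response.lower())
)
print(counts["scary"])  # 3 – looks like a trend, but one speaker disagreed
```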

 


3. Derivations don't get counted together

Remember that most word cloud tools are not even really counting words, only combinations of letters. So 'fish', 'fishy' and 'fishes' will all be counted as separate words (as will any typos or misspellings). This might not sound important, but if you are trying to draw conclusions just from a word cloud, you could miss the importance of fish to your participants because the different derivations weren't put together. Yet sometimes these distinctions in vocabulary are important – obviously 'fishy' can have a negative connotation, in terms of something feeling off or smelling bad – and you don't want to put this in the same category as things that swim. So a researcher is still needed to craft these visualisations, and to make decisions about what should be shown and grouped. Speaking of which...
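As an illustration, the toy `crude_stem` function below (an invented example, not part of any real tool) shows both sides of the problem: without stemming the derivations are fragmented, and with it, the pejorative sense of 'fishy' gets lumped in with the swimming kind:

```python
from collections import Counter

words = ["fish", "fishes", "fishy", "fish", "fishing"]

# Counted as raw letter-strings, each derivation is a separate 'word':
print(Counter(words))  # Counter({'fish': 2, 'fishes': 1, 'fishy': 1, 'fishing': 1})

def crude_stem(word):
    """Toy suffix-stripper – real tools would use a proper stemmer."""
    for suffix in ("ing", "es", "y", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Stemming groups the derivations, but also erases the distinct
# negative sense of 'fishy' – a judgement only the researcher can make.
print(Counter(crude_stem(w) for w in words))  # Counter({'fish': 5})
```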

 


4. They won't amalgamate different terms used by participants

It's fascinating how different people have their own terms and language to describe the same thing, and illuminating this can bring colour to qualitative data or show important subtle differences that are important for IPA[[]] or discourse analysis. But when doing any kind of word count analysis, this richness is a problem – as the words are counted separately. Thus neither term 'shiny', 'bright' or 'blinding' may show up often, but if grouped together they could show a significant theme. Whether you want to treat certain synonyms in the same way is up to the researcher, but in a word cloud these distinctions can be masked.

 

Also, don’t forget that unless told otherwise (or sometimes hyphenated), word clouds won’t pick up multiple word phrases like ‘word cloud’ and ‘hot topic’.
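For illustration, counting adjacent word pairs (bigrams) is one simple way such phrases can be recovered. This is a hypothetical sketch of the general technique, not how any particular tool works:

```python
from collections import Counter
import re

text = "A word cloud is fun, but a word cloud misses the phrase word cloud."
words = re.findall(r"[a-z]+", text.lower())

# Single-word counts split the phrase apart...
unigrams = Counter(words)
print(unigrams["word"], unigrams["cloud"])  # 3 3

# ...while counting adjacent pairs (bigrams) recovers 'word cloud' intact.
bigrams = Counter(zip(words, words[1:]))
print(bigrams[("word", "cloud")])  # 3
```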

 

 

5. Don’t focus on just the large trends


Word clouds tend to make the big language trends very obvious, but this is usually only part of the story. Just as important are words that aren’t there – things you thought would come up, topics people might be hesitant to speak about. A series of word clouds can be a good way to show changes in popular themes over time, like what terms are being used in political speeches or in newspaper headlines. In these cases words dropping out of use are probably just as interesting as the new trends.

 


 


6. This isn't qualitative analysis

At best, this is quantification of qualitative data: it presents only counts. Since word frequency tools just count sequences of letters, not even words and their meanings, they are a basic numerical supplement to deep qualitative interpretation (McNaught and Lam 2010). And as with all statistical tools, they are easy to misapply and misinterpret. You need to know what is being counted and what is being missed (see above), and before drawing any conclusions, make sure you understand the underlying data and how it was collected. However…

 

 

7. Word clouds work best as summaries or discussion pieces


If you need to get across what’s coming out of your research quickly, showing the lexicon of your data in word clouds can be a fun starting point. When they show a clear and surprising trend, the ubiquity and familiarity most audiences have with word clouds make these visualisations engaging and insightful. They should also start triggering questions – why does this phrase appear more? These can be good points to start guiding your audience through the story of your data, and creating interesting discussions.

 

As a final point, word clouds often carry a level of authority that you need to be careful about. Because the counting of words is seen as non-interpretive and non-subjective, some people may feel they ‘trust’ what is shown by them more than the verbose interpretation of the full rich data. Hopefully with the guidance above, you can persuade your audience that while colourful, word clouds are only a one-dimensional dive into the data. Knowing your data and reading the nuance is what elevates your analysis from a one-click feature into a well-communicated ‘aha’ moment for your field.

 

 

If you'd like to play with word clouds, why not download a free trial of Quirkos? It also has raw word frequency data, and an easy to use interface to manage, code and explore your qualitative data.

 

 

 

 

Quirkos v1.5 is here


 

We are happy to announce the immediate availability of Quirkos version 1.5! As always, this update is a free upgrade for everyone who has ever bought a licence of Quirkos, so download now and enjoy the new features and improvements.

 

Here’s a summary of the main improvements in this release:

 

Project Merge


You can now bring together multiple projects in Quirkos, merging sources, Quirks and coding from many authors at once. This makes teamwork much easier, and allows you to bring in coding frameworks or sources from other projects.

 

Word frequency tools, including:


Word clouds! You can now generate customisable word clouds (click on the Report button). Change the shape, word size, rotation and cut-off for minimum words, or choose which sources to include. There is also a default ‘stop list’ (a, the, of, and) of the 50 most frequent words from the British National Corpus, but this can be completely customised. Save the word clouds to a standard image file, or as an interactive webpage.
A complete word frequency list of the words occurring across all the sources in your project is also generated in this view.

  • Improved Tree view – now shows longer titles, descriptions and fits more Quirks on the screen
  • Tree view now has complete duplicate / merge options
  • Query results by source name – ability to see results from single or multiple sources
  • Query results now show number of quotes returned
  • Query view now has ‘Copy All’ option
  • Improved CSV spreadsheet export – now clearly shows Source Title, and Quirk Name
  • Merge functions now more logical – default behaviour changed so that you select the Quirk you want to be absorbed into a second one.
  • Can now merge parent and child Quirks to all levels
  • Hovering mouse over Quirks now shows description, and coding summary across sources
  • Reports now generate MUCH faster, no more crashes for projects with hundreds of Quirks. Image generation of hierarchy and overlap views now off by default, turn on in Project Settings if needed
  • Improved overlap view, with rings indicating number of overlaps
  • Neater pop-up password entry for existing projects
  • Copy and pasting quotes to external programmes now shows source title after each quote
  • Individually imported sources now take file name as source name by default

 

Bug fixes

  • Fixed a bug where Quirks would suddenly grow huge!
  • Fixed a rare crash on Windows when rearranging / merging Quirks in tree view
  • Fixed a rare bug where a Quirk was invisible after being re-arranged
  • Fixed an even rarer bug where deleting a source would stop new coding
  • Save As project now opens the new file after saving, and no longer shows blank screen
  • Reports can now overwrite if saved to the same folder as an earlier export
  • Upgrading to new versions on Windows now only creates a backup of the last version, not all previous versions, saving lots of space. (It’s safe to delete these old versions once you are happy with the latest one.)

 

Watch the new features demonstrated in the video below:

 

 

There are a few other minor tweaks and improvements, so we do recommend you update straight away. Everyone is eligible, and once again there are no changes to project files, so you can keep going with your work without missing a beat. Do let us know if you have any feedback or suggestions (support@quirkos.com).

 


 

If you've not tried Quirkos before, it's a perfect time to get started. Just download the full version and you'll get a whole month to play with it for free!

 

Archaeologies of coding qualitative data


 

In the last blog post I referenced a workshop session at the International Congress of Qualitative Inquiry entitled the ‘Archaeology of Coding’. Personally, I interpreted the archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel was around revisiting and iterating coding within a single analytical attempt, and this is very important.


In qualitative analysis it is rarely sufficient to only read through and code your data once. An iterative and cyclical process is preferable, often building on and reconsidering previous rounds of coding to get to higher levels of interpretation. This is one of the ways to interpret an ‘archaeology’ of coding – like Jerusalem, the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.


But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I allude to this more in the blog post on ‘Top down or bottom up’ coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then breaking these up into more specific and nuanced insights into the data.

 

But both of these iterative techniques are envisaged as part of a single (if long) process of coding. What about revisiting older research projects? What if you get the opportunity to go back and re-examine old qualitative data and analysis?


Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.


A good example is our teaching dataset conducted after the referendum for Scottish independence a few years ago. This looked at how the debate had influenced voters’ interpretations of the different political parties, and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data would be very interesting in the light of these events. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, it would not be up-to-date enough to show the ‘referendum fatigue’ that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain variance in voting patterns in the changing winds of politics and policy in Scotland.

 

While the research questions and analysis framework devised for the original research project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, as might higher-level themes such as the extent to which people perceive a necessity for a referendum, or the value of remaining part of the EU (which was a big question if Scotland became independent). Actually, if this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.

 

Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even from the same participants to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and while difficult to design specifically for this purpose (Calman, Brunton, Molassiotis 2013), can be retroactively extended in some situations.


And of course, this is the real power of archaeology: when it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings, or interpretations of qualitative data. So there can be great, and often unexpected value in revisiting some of your old data. For many people it’s something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick to learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?

 

Teaching Qualitative Methods via Social Media


 

This blog now has nearly 120 posts about all different kinds of qualitative methods, and has grown to hosting thousands of visitors a month. There are lots of other great qualitative blogs around, including Margaret Roller’s Research Design Review and the Digital Tools for Qualitative Research group and the newly relaunched Qual Page.


But these are only one part of the online qualitative landscape, and there are an increasing number of people engaged in teaching, commenting and exploring qualitative methods and analysis on social media. By this I mean popular platforms like Twitter, Facebook, LinkedIn, Academia.edu, ResearchGate and even Instagram and Snapchat. And yes, people are even using Instagram to share pictures and engage with others doing qualitative research.


So the call for a talk at the International Congress of Qualitative Inquiry (ICQI 2017) asked: How can educators reach out and effectively use social media as a way to teach and engage students with qualitative methodologies?


Well, a frequent concern of teachers is how you teach the richness and complexity of qualitative methods in something like a Tweet, which has a 140-character limit. Even the previous sentence would be too long for a Tweet! While other platforms, such as comments on Facebook, don’t have such tight limits, they are still geared towards short statements. Obviously, detailing the nuances of grounded theory in this way is not realistic. But it can be a great way to start a conversation or debate, and to link and draw attention to other, longer sources of media.


For example, the very popular ‘Write That PhD’ Twitter feed by Dr Melanie Haines of the University of Canberra has nearly 20 thousand followers. The feed offers advice on writing and designing a PhD, and often posts or retweets pictures which contain much more detailed tips on writing a thesis. This is a good way of getting around the character limit, and pictures, especially when not just a long block of text, are a good way to draw the eye. Social media accounts can also be used to link to other places (such as a blog) where you can write much longer materials – and this is an approach we use a lot.


But to use social media effectively for outreach and engagement, it is also important to understand the different audiences each platform has, and the subsets within each site. For example, Snapchat has a much younger audience than Facebook, and academic-focused platforms might be a good place to network with other academics, but don’t tend to have many active undergraduates.


It’s also important to think about how students will be looking and searching for information, and how to get into the feeds they look at on a daily basis. On Facebook and especially Twitter, hashtags are a big part of this, and it’s worth researching the popular terms people are searching for that are relevant to your research or teaching. For example, the #phdlife and #phdchat tags are two of the most popular ones, while #profchat and #research have their own niches and audiences too. While it can seem like a good idea to start a new hashtag for yourself like #lovequalitative, it takes a lot of work and influential followers to get it off the ground.

 

Don’t forget that hashtags and keywords are just one way to target different audiences. Twitter also has ‘lists’ of users with particular interests, and LinkedIn and Facebook have groups and pages with followers which it can be worth joining and contributing to. On ResearchGate and Academia.edu the question forums are very active, and there are great discussions about all aspects of qualitative research.


But the most exciting part of social media for teaching qualitative research is the conversations and discussions you can have. Since there are so many pluralities of theory and method, online conversations can challenge and promote the diversity of qualitative approaches. This is a challenge as well, as it requires a lot of time, ideally sustained over a long period, to keep replying to comments and questions as they pop up. However, the beauty of all these platforms is that they effectively create archives for you, so if there was a discussion about qualitative diary methodologies on a Facebook group a year ago, it will still be there, and others can read and learn from it. Conversely, new discussions can pop up at any time (and on any of the different social media sites), so keeping on top of them all can be time consuming.


In short, there is a key rule for digital engagement, be it for teaching or promoting a piece of research: write once, promote often. Get a digital presence on a blog or long form platform (like Medium) and then promote what you’ve written on as many social media platforms as you can. The more you promote, the more visible and the higher rated your content will become, and the greater audience you can engage with. And the best part of all is how measurable it is. You can record the hits, follows and likes of your teaching or research and show your REF committee or department the extent of your outreach. So social media can be a great feather to add to your teaching cap!

 

Writing qualitative research papers


We’ve actually talked about communicating qualitative research and data to the public before, but never covered writing journal articles based on qualitative research. This can often seem daunting, as the prospect of converting dense, information rich studies into a fairly brief and tightly structured paper takes a lot of work and refinement. However, we’ve got some tips below that should help demystify the process, and let you break it down into manageable steps.

 

Choose your journal

The first thing to do is often what’s left till last: choose the journal you want to submit your article to. Since each journal will have different style guidelines, types of research they publish, and acceptable lengths, you should actually have a list of a few journals you want to publish with BEFORE you start writing.

 

To make this choice, there are a few classic pointers. First, make sure your journal will publish qualitative research. Many are not interested in qualitative methodologies – see the recent debates about the BMJ for how contested this continues to be. It’s a good idea to choose a journal that has published other articles you have referenced, or that are on a similar topic. This is a good sign that the editors (and reviewers) are interested in, and understand, this area.

 

Secondly, there are some practical considerations. For those looking for tenure or to one day be part of schemes that assess the quality of academic institutions by their published work such as the REF (in the UK) or PBRF (in New Zealand) you should consider ‘high impact’ or ‘high tier’ journals. These are considered to be the most popular journals in certain areas, but will also be the most competitive to get into.

 

Before you start writing, you should also read the guidance for authors from the journal, which will give you information about length, required sections, how they want the summary and keywords formatted, and the type of referencing. Many are based on the APA style guidelines, so it is a good idea to get familiar with these.

 


Describing your methodology, literature review, theoretical underpinnings

When I am reviewing qualitative articles, the best ones describe why the research is important, and how it fits in with the existing literature. They then make it clear how the researcher(s) chose their methods, who they spoke to and why they were chosen. It’s then clear throughout the paper which insights came from respondent data, and when claims are made how common they were across respondents.

 

To do this, make sure you have a separate section detailing your methods and recruitment aims, and describing the people you spoke to – not just how many, but what their background was and how they were chosen, as well as eventually noting any gaps and what impact they could have on your conclusions. Just because this is a qualitative paper doesn’t mean you don’t have to say how many people you spoke to – but there is no shame in that number being as low as one for a case study or autoethnography!

 

Secondly, you must situate your paper in the existing literature. Read what has come before, critique it, and make it clear how your article contributes to the debate. This is the thing that editors are looking for most – make the significance of your research and paper clear, and why other people will want to read it.

 

Finally, it’s very important in qualitative research papers to clearly state your theoretical background and assumptions. So you need to reference literature that describes your approach to understanding the world, and be specific about the interpretation you have taken. Just saying ‘grounded theory’ for example is not enough – there are a dozen different conceptualisations of this one approach.
 

 

Reflexivity

It’s not something that all journals ask for, but if you are adopting many qualitative epistemologies, you are usually taking a stance on positivism, impartiality, and the impact of the researcher on the collection and interpretation of the data. This sometimes leads to the need for the person(s) who conducted the research to describe themselves and their backgrounds to the reader, so they can understand the world view, experience and privilege that might influence how the data was interpreted. There is a lot more on reflexivity in this blog post.


How to use quotations

Including quotations and extracts from your qualitative data is a common way to make sure that you back up your description of the data with evidence that supports your findings. However, it’s important not to make the text too dense with quotations. Try to keep to just a few per section, and integrate them into your prose as much as possible, rather than starting every one with ‘participant x said’. I also like to try and show divergence in the respondents, so include a couple of quotes that show alternative viewpoints.

 

On a practical note, make sure any quotations are formatted according to the journal’s specifications. If they don’t have specific guidelines, try to make them clear by always giving them their own indented paragraph (if more than a sentence long) and clearly labelling them with a participant identifier, or a significant anonymised characteristic (for example, School Administrator or Business Leader). Don’t be afraid to shorten a quotation to keep it relevant to the point you are trying to make, while keeping it an accurate reflection of the participant’s contribution. Use an ellipsis (…) to show where you have removed a section, and insert square brackets to clarify what the respondent is talking about if they refer to ‘it’ or ‘they’, for example [the school] or [Angela Merkel].

 


Don’t forget visualisations

If you are using qualitative analysis software, make sure you don’t just use it as a quotation finder. The software will also help you do visualisations and sub-set analysis, and these can be useful and enlightening to include in the paper. I see a lot of people use an image of their coding structure from Quirkos, as this quickly shows the relative importance of each code in the size of the bubble, as well as the relationships between quotes. Visual outputs like this can get across messages quickly, and really help to break up text heavy qualitative papers!

 


Describe your software process!

No, it’s not enough to just say ‘We used Nvivo’. There are a huge number of ways you could have used qualitative analysis software, and you need to be more specific about what you used the software for, how you did the analysis (for example framework / emergent) and how you got outputs from the software. If you did coding with other people, how did this work? Did you sit together and code at one time? Did you each code different sources or go over the same ones? Did you do some form of inter-rater reliability, even if it was not a quantitative assessment? Finally, make sure you include your software in the references – see the APA guides for how to format this. For Quirkos this would look something like:

 

Quirkos Software (2017). Quirkos version 1.4.1 [Computer software]. Edinburgh: Quirkos Limited.

 


 


Be persistent!

Journal publication is a slow process. Unless you get a ‘desk rejection’, where the editor immediately decides that the article is not the right fit for the journal, hearing back from the reviewers could take months or even a year. Ask colleagues and look at the journal information to get an idea of how long the review process takes for each journal. Finally, when you get some feedback it might be negative (a rejection) or unhelpful (when the reviewers don’t give constructive feedback). This can be frustrating, especially when it is not clear how the article can be made better. However, there are excellent journals such as The Qualitative Report that take a collaborative rather than combative approach to reviewing articles. This can be really helpful for new authors.

 

Remember that a majority of articles are rejected by any journal, and some top-tier journals have acceptance rates of 10% or less. Don’t be disheartened; read the comments, keep on a cycle of quickly improving your paper based on the feedback you get, and either send it back to the journal or find a more appropriate home for it.

 

Good luck, and don’t forget to try out Quirkos for your qualitative analysis. Our software is easy to use, and makes it really easy to get quotes into Word or other software for writing up your research. Learn more about the features, and download a free, no-obligation trial.

 

 

Quirkos v1.4.1 is now available for Linux


 

A little later than our Windows and Mac versions, we are happy to announce that we have just released Quirkos 1.4.1 for Linux. There are some major changes to the way we release and package our Linux version, so we want to provide some technical details and installation instructions.


Previously, our releases used a binary-based, distro-independent installer. However, this was built on 32-bit libraries to provide backwards compatibility, and required a long list of dependencies to work on many systems.


From this release forward, we are releasing Quirkos as an AppImage – a single file which contains a complete image of the software. This should improve compatibility across different distros, and also remove some of the dependency hell involved in the previous installer.


Once you download the .AppImage file, you will need to give the file executable permissions (a standard procedure when downloading binaries). You can do this at the command-line just by typing ‘chmod +x Quirkos-1.4.1-x86_64.AppImage’. This step can also be done with a File Manager GUI like Nautilus (the default in Gnome and Ubuntu) by right clicking on the downloaded file, selecting the Permissions tab, and ticking the ‘Allow executing file as program’ box. Then you can start Quirkos from the command-line, or by double clicking on the file.


Since an AppImage is essentially a ‘live’ filesystem contained in a single file, there is no installation needed, and if you want to create a Desktop shortcut to the software stored in a different location, you will have to create one yourself.
 

Secondly, we have also moved to a 64-bit release for this version of Quirkos. While we initially wanted to provide maximum compatibility with older computers, the 32-bit build actually created a headache for the vast majority of Linux users with 64-bit installations: they were required to install 32-bit versions of many common libraries (if they did not have them already), creating duplication and huge install requirements. Now Quirkos should run out of the box for the vast majority of users.


Should you prefer the older 32 bit installer package, you can still download the old version from here:
https://www.quirkos.com/quirkos-1.4-linux-installer.run


Supporting Linux is really important to us, and we are proud to be the only major commercial qualitative software company creating a Linux version, let alone one that is fully feature- and project-compatible with the Windows and Mac builds. While there are great projects like RQDA which are still supported, TAMS Analyzer and Weft QDA have not been updated for Linux in many years, and are pretty much impossible to build these days. Dedoose is an option on Linux since it is browser based, but sometimes requires some tweaking to get Flash running properly. Adobe AIR for Linux is no longer supported, so the Dedoose desktop app is sadly no longer an option.
 

But Quirkos will keep supporting Linux, and provide a real option for qualitative researchers wanting to use free and open platforms.


We REALLY would love to have your feedback on our new Linux release, positive, negative or neutral! We still have a relatively small number of users on Linux, so your experiences are extra important to us. Is the AppImage more convenient? Have you had any dependency problems? Would you prefer we kept providing 32-bit packages? E-mail us at support@quirkos.com and let us know!

 

What next? Making the leap from coding to analysis

leap coding to analysis

 

So you spend weeks or months coding all your qualitative data. Maybe you even did it multiple times, using different frameworks and research paradigms. You've followed our introduction guides and everything is neatly (or fairly neatly) organised and inter-related, and you can generate huge reports of all your coding work. Good job! But what happens now?

 

It's a question asked by a lot of qualitative researchers: after all this bruising manual and intellectual labour, you hit a brick wall. After doing the coding, what is the next step? How do you move the analysis forward?

 

The important thing to remember is that coding is not really analysis. Coding is often a precursor to analysis, in the same way that a good filing system is a good start for doing your accounts: if everything is in the right place, the final product will come together much more easily. But coding is usually a reductive and low-level action, and it doesn't always bring you to the big picture. That's what the analysis has to do: match up your data to the research questions and allow you to bring everything together. In the words of Zhang and Wildemuth, you need to look for “meanings, themes and patterns”.

 


Match up your coding to your research questions

Now is a good time to revisit the research question(s) you originally had when you started your analysis. It's easy during the coding process to get excited by unexpected but fascinating insights coming from the data. However, you usually need to reel yourself in at this stage, and explore how the coded data is illuminating the quandaries you set out to explore at the start of the project.

 

Look at the coded framework, and see which nodes or topics are going to help you answer each research question. Then you can either group these together, or start reading through the coded text by theme, probably more than once, with an eye on one research question each time. Don't forget, you can still tag and code at this stage, so you can have a category for 'Answers research question 1' and tag useful quotes there.

 

One way to do this in Quirkos is the 'Levels' function, which allows you to assign codes/themes to more than one grouping. You might have some coded categories which would be helpful in answering more than one research question: you can have a level for each research question, and Quirks/categories can belong to multiple appropriate levels. That way, you can quickly bring up all responses relevant to each research question, without your groupings needing to be mutually exclusive.

 


Analyse your coding structure!

It seems strange to effectively be analysing your analysis, but looking at the coding framework itself gets you to a higher meta-level of analysis. You can group themes together to identify larger themes and patterns in your coding. It might also be useful to match your themes with theory, or recode them again into higher level insights. How you have coded (especially when using grounded theory or emergent coding) can reveal a lot about the data, and your clusterings and groupings, even if chosen for practical purposes, might illuminate important patterns in the data.

 

In Quirkos, you can also use the overlap view to show relationships between themes. This illustrates in a graphical chart how many times sections of text 'overlap' - in that a piece of text has been coded with both themes. So if you have simple codes like 'happy' or 'disappointed' you can see which themes have most often been coded together with disappointment. This can sometimes quickly reveal surprises in the correlations, and lets you quickly explore possible significant relationships between all of your codes. However, remember that all these metrics are quantitative, so they depend on the number of times a particular theme has been coded. You need to keep reading the qualitative text to get the right context and weight, which is why Quirkos shows you all the correlating text on the right of the screen in this view.

 

side comparison view in Quirkos software

 


Compare and contrast

Another good way to make your explorations more analytical is to try to identify and explain differences: in how people describe key words or experiences, what language they use, or how their opinions converge with or diverge from those of other respondents. Look back at each of the themes, and see how different people are responding, and most importantly, whether you can explain the differences through demographics or different life experiences.

 

In Quirkos this process can be assisted with the query view, which allows you to see responses from particular groups of sources. So you might want to look at differences between the responses of men and women, as shown below. Quirkos provides a side-by-side view to let you read through the quotes, comparing the different responses. This is possible in other software too, but requires a little more time to get different windows set up for comparison.

 

overlap cluster view in Quirkos software

 

Match and re-explore the literature

It's also a good time to revisit the literature. Look back at the key articles you are drawing from, and see how well your data supports or contradicts their theory or assumptions. It's a really good idea to do this throughout (not just at the end), because situating your findings in the literature is the hallmark of a well written article or thesis, and will make clear the contribution your study has made to the field. But always look for an extra level of analysis: try to develop a hypothesis of why your research differs or comes to the same conclusions – is there something in the focus or methodology that would explain the patterns?

 


Keep asking 'Why'

Just like an inquisitive six year old, keep asking 'Why?'! You should have multiple levels of Why, with qualitative explanations usually working up from individual, through group, all the way to societal levels of causation. Think of the maxim 'Who said What, and Why?'. The coding shows the 'What', exploring the detail and experiences of the respondents is the 'Who', and the 'Why' needs to explore not just their own reasoning, but how this connects to other actors in the system. Sometimes this causation is obvious to the respondent, especially if it was articulated because they were repeatedly asked 'why' in the interview! However, analysis sometimes requires a deeper, detective-style reading, getting to the motivations as well as the actions of the participants.

 


Don't panic!

Your work was not in vain. Even if you end up for some reason scrapping your coding framework and starting again, you will have become much more engaged with your data by reading it through so closely, and this will be a great help in deciding how to take the data forward. Some people even discover that coding data was not the right approach for their project, and use it very little in the final analysis process. Instead they may just be able to pull together important findings in their head, the time taken to code the data having made key findings pop out from the page.

 

And if things still seem stuck, take a break, step back and print out your data and try and read it from a fresh angle. Wherever possible, discuss with others, as a different perspective can come not just from other people's ideas, but just the process of having to verbally articulate what you are seeing in the data.

 


Also remember to check out Quirkos, a software tool that helps constantly visualise your qualitative analysis, and thus keep your eye on what is emerging from the data. It's simple to learn, affordably priced, and there is a free trial to download for Windows, Mac and Linux so you can see for yourself if it is the right fit for your qualitative analysis journey. Good luck!

 

 

Making the most of bad qualitative data

 

A cardinal rule of most research projects is that things don’t always go to plan. Qualitative data collection is no different, and the variability in approaches and respondents means that there is always the potential for things to go awry. However, the typically small sample sizes can make even one or two frustrating responses difficult to stomach, since they can represent such a high proportion of the whole data set.


Sometimes interviews just don’t go well: the respondent might only give very short answers, or go off on long tangents which aren’t useful to the project. Usually the interviewer can try to facilitate these situations to get better answers, but sometimes people can just be difficult. You can see this in the transcript of the interview with ‘Julie’ in the example referendum project. Despite initially seeming very keen on the topic (perhaps she was tired on the day), she cannot be coaxed into giving more than one or two word answers!


It’s disappointing when something like this happens, but it is not the end of the world. If one interview is not as verbose or complete as some of the others it can look strange, but there is probably still useful information there. And the opinions of this person are just as valid, and should be treated with the same weight. Even if there is no explanation, disagreeing with a question by just saying ‘No’ is still an insight.


You can also have people who come late to data collection sessions, or have to leave early resulting in incomplete data. Ideally you would try and do follow up questions with the respondent, but sometimes this is just not possible. It is up to you to decide whether it is worth including partial responses, and if there is enough data to make inclusion and comparison worthwhile.


Also, you may sometimes come across respondents who seem to be outright lying – their statements contradict each other, they give ridiculous or obviously false answers, or they flat out refuse to answer questions. Usually I would recommend that these data sources are included, as long as there is a note of this in the source properties and a good justification for why the researcher believes the responses may not be trusted. There is usually a good reason that a respondent chooses to behave in such a way, and this can be important context for the study.


In focus group settings there can sometimes be one or two participants who derail the discussion, perhaps by being hostile to other members of the group or only wanting to talk about their pet topics and not the questions on the table. This is another situation where practice at mediating and facilitating data collection can help, but sometimes you just have to try and extract whatever is valuable. Organising focus groups can be very time consuming, however, and uses up many potentially good respondents in one go, so poor data quality from one of the sessions can be upsetting. Don’t be afraid to go back to some of the respondents and see if they would do another, smaller session, or one-on-ones to get more of their input.


However, the most frustrating situation is when you get disappointing data from a really key informant: someone that is an important figure in the field, is well connected or has just the right experience. These interviews don’t always go to plan, especially with senior people who may not be willing to share, or have their own agenda in how they shape the discussion. In these situations it is usually difficult to find another respondent who will have the same insight or viewpoint, so the data is tricky to replace. It’s best to leave these key interviews until you have done a few others; that way you can be confident in your research questions, and will have some experience in mediating the discussions.


Finally, there is also lost data. Dictaphones that don’t record or get lost. Files gone missing and lost passwords. Crashed computers that take all the data with them to an early and untimely grave! These things happen more often than they should, and careful planning, precautions and backups are the only way to protect against them.


But often the answer to all these problems is to collect more data! Most people using qualitative methodologies should have a certain amount of flexibility in their recruitment strategy, and should always be doing some review and analysis on each source as it is collected. This way you can quickly identify gaps or problems in the data, and make sure forthcoming data collection procedures cover everything.


So don’t leave your analysis too late, get your data into an intuitive tool like Quirkos, and see how it can bring your good and bad research data to light! We have a one month free trial, and lots of support and resources to help you make the most of the qualitative data you have. And don’t forget to share your stories of when things went wrong on Twitter using the hashtag #qualdisasters!

 

Practice projects and learning qualitative data analysis software

image by zaui/Scott Catron

 

Coding and analysing qualitative data is not only time consuming, it’s a difficult interpretive exercise which, like learning a musical instrument, gets much better with practice. However, lots of students starting their first major qualitative or mixed method research project will benefit from completing a smaller project first, rather than starting by trying to learn a giant symphony. This will allow them to get used to qualitative analysis software, working with qualitative data, developing a coding framework, and getting a realistic expectation of what can be done in a fixed time frame. Often people try to learn all these aspects for the first time when they start a major project like a masters or PhD dissertation, and then struggle to get going and take the most effective approach.

 

Many scholars, including those advocating the 5 Level QDA approach, suggest learning the capabilities of the software and the qualitative methodology separately, since one can affect the other. And a great way to do this is to actually dig in and get started with a separate, smaller project. Reading textbooks and the literature can only prepare you so much (see for example this article on coding your first qualitative data), but a practical project to experiment and make mistakes in is a great preparation for the main event.

 

But what should a practice project look like? Where can you find some example qualitative data to play with? A good guideline is to take just a few sources, even just 3 or 4, from a method that is similar to the data collection you will use for your project. For example, if you are going to run focus groups, try to find some existing focus group transcripts to work with. Although this can be daunting, there are actually lots of ways to quickly find qualitative data that will not only make you more familiar with real qualitative data, but also with the analysis process and accompanying software tools. This article gives a couple of suggestions for a mini project to hone your skills!

 


News and media

A quick way to practice your basic coding skills is to do a small project using news articles. Just choose a recent (or historical) event, collect a few articles either from different news websites or over a period of time. Looking at conflicts in how events are described can be revealing, and is good for getting the right analytical eye to examine differences from respondents in your main project. It’s easy to go to different major news websites (like the Telegraph, Daily Mail, BBC News or the NYT) and copy and paste articles into Quirkos or other qualitative analysis software. All these sites have searchable archives, so you can look for a particular topic and find older articles.

 

Set yourself a good research question (or two), and use this project to practice generating a coding framework and exploring connections and differences across sources.

 

 

Qualitative Data Archives

If you want some more involved experience, browse some of the online repositories of qualitative data. These allow you to download the complete data set from research projects large and small. Since much government (or funding board) funded research requires data to be made publicly available, there are increasing numbers of data sets available to download, which are a great way to look at real qualitative data and practise your analysis skills. I’ll share two examples here: the first is the UK Data Archive, and the second the Syracuse Qualitative Data Repository.

 

Regardless of where you are based, these resources offer an amazing variety of data types and topics. This can make your practice fun – there are data sets on some fascinating and obscure areas, so choose something interesting to you, or even something completely new and different as a challenge. You also don’t have to use all the sources from a large project – just choose three or four to start with, you can always add more later if you need extra coding experience.

 

 

Literature reviews

Actually, qualitative analysis software is a great way to get to grips with articles, books and policy documents relating to your research project. Since most people will want to do a systematic or literature review before they start a project, bringing your literature into qualitative software is a good way to learn the software while also working towards your project. While reading through your literature, you can create codes/themes to describe key points in theory or previous studies, and compare findings from different research projects.

In Quirkos it is easy to bring in PDF articles from journals or ebooks, and then you will have not only a good reference management system, but somewhere you can keep the full text of relevant articles, tagged and coded so you can find key quotes quickly for writing up. Our article here gives some good advice on using qualitative software for systematic and literature reviews.

 

 


Our own projects

Quirkos has also made two example projects freely available for learning qualitative analysis with any software package. The first is great for beginners: a fictional project about healthy eating options for breakfast. These 6 sources are short but rich in information, so can be fully coded in less than an hour. Secondly, we conducted a real research project on the Scottish Referendum for Independence, and 12 transcribed semi-structured interviews are made available for your own practice and interpretation.

 

The advantage of these projects is that they both have fully coded project files to use as examples and comparison. It’s rare to find people sharing their coding (especially as an accessible project file), but it can be a useful guide or point of comparison for your own framework and coding decisions.

 


 

Ask around

Talk to faculty in your department and see if they have any example data sets you can use. Some academics will already have these for teaching purposes or taken from other research projects they are able to share.

 

It can also be a good exercise to do a coding project with someone else. Regardless of which option you choose from the example qualitative data sources above, share the data with another student or colleague, and each do your own coding separately. Once you are both done, meet up and compare your results – it will be really revealing to see how different people interpreted the data, how their coding frameworks looked, and how they found working with the software. It’s also good motivation and time management to have to work to a mutually set deadline!

 

 

The great thing about starting a small learning project is that it can be a perfect opportunity to experiment with different qualitative analysis software. It may be that you only have access to one option like Nvivo, MAXQDA, or Atlas.Ti at your institution, but student licences are very affordable, so they make a great option for learning qualitative analysis. All the major packages have a free trial, so you can try several (or them all!) and find out which one works best for you. Doing this with a small example project lets you practice key techniques and qualitative methods, and also think through how best to collect and structure your data for analysis.

 

Quirkos probably has the best deal for qualitative research software, for example our student licences are cheap at just $59 (£49 or €58) and don’t expire. Most of the other packages only give you six months or a year but we let you use Quirkos as long as you need, so you will always be able to access your data – even after you graduate. Even academics and supervisors will find that Quirkos is much more affordable and easier to learn. Of course, there is a no obligation or registration trial, and all our support and training materials are free as well. So make sure you make the most informed decision before you start your research, and we hope that Quirkos becomes your instrument of choice for qualitative analysis!

 

 

How Quirkos can change the way you look at your qualitative data

Quirkos qualitative software seeing data

We always get a lot of inquiries in December from departments and projects who are thinking of spending some left-over money at the end of the financial year on a few Quirkos licences. A great early Christmas present for yourself or the team! It’s also a good long-term investment, since our licences don’t expire and can be used year after year. They are transferable to new computers, and we’ve committed to provide free updates for the current version. We don’t want a situation where different teams are using different versions, and so can’t share projects and data. Our licences are often a fraction of the cost of other qualitative software packages, but for the above reasons we think that we offer much more value than just the comparative sticker price.

 

But since Quirkos also has a different ethos (accessibility) and some unique features, it also helps you approach your qualitative research data in a different way to other software. In the two short years that Quirkos has been available, it’s become used by more than 100 universities across the world, as well as market research firms and public sector organisations. That has given me a lot of feedback that helps us improve the software, but also a sense of what things people love the most about it. So following is the list of things I hear most about the software in workshops and e-mails.

 

It’s much more visual and colourful

quirkos qualitative coding bubbles

Experienced researchers who have used other software are immediately struck by how colourful and visual the Quirkos approach is. The interface shows growing bubbles that dynamically show the coding in each theme (or node), and colours all over the screen. For many, the Quirkos design allows people to think in colours, spatially, and in layers, improving the amount of information they can digest and work with. Since the whole screen is a live window into the data, there is less need to generate separate reports, and coding and reviewing is a constant (and addictive) feedback process.


This doesn’t appeal to everyone, so we still have a more traditional ‘tree’ list structure for the themes which users can switch between at any time.

 

 

I can get started with my project quicker


We designed Quirkos so it could be learnt in 20 minutes for use in participatory analysis, so the learning curve is much lower than other qualitative software. Some packages can be intimidating to the first-time user, and often have 2 day training courses. All the training and support materials for Quirkos are available for free on our website, without registration. We increasingly hear that students want self-taught options, which we provide in many different formats. This means that not only can you start using Quirkos quickly, setting up and putting data into a new project is a lot quicker as well, making Quirkos useful for smaller qualitative projects which might just have a few sources.

 

 

I’m kept closer to my data

qualitative software comparison view


It’s not just the live growing bubbles that mean researchers can see themes evolve in their analysis, there are a suite of visualisations that let you quickly explore and play with the data. The cluster views generate instant Venn diagrams of connection and co-occurrences between themes, and the query views show side-by-side comparisons for any groups of your data you want to compare and contrast. Our mantra has been to make sure that no output is more than one click away, and this keeps users close to their data, not hidden away behind long lists and sub-menus.

 

 

It’s easier to share with others

qualitative word export


Quirkos provides some unique options that make showing your coded qualitative data to other people easier and more accessible. The favourite feature is the Word export, which creates a standard Word document of your coded transcripts, with all the coding shown as colour coded comments and highlights. Anyone with a word processor can see the complete annotated data, and print it out to read away from the computer.


If you need a detailed summary, the reports can be created as an interactive webpage, or a PDF which anyone can open. For advanced users you can also export your data as a standard spreadsheet CSV file, or get deep into the standard SQLite database using any tool (such as http://sqlitebrowser.org/) or even a browser extension.
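As a sketch of that last option: because the project file is a standard SQLite database, the ordinary sqlite3 command-line tool can open it directly. The filename below is a placeholder, and since table names vary, the safest first step is SQLite's built-in `.tables` command to discover the real schema rather than guessing at it:

```shell
# Open a Quirkos project file (a plain SQLite database) and list its
# tables. '.tables' works on any SQLite file, so no prior schema
# knowledge is needed. 'myproject.qrk' is a placeholder filename.
sqlite3 myproject.qrk ".tables"

# Once you know a real table name from the listing, ordinary SQL
# queries work too, e.g.:
# sqlite3 myproject.qrk "SELECT * FROM some_table LIMIT 5;"
```

The same file opens in graphical tools like the DB Browser mentioned above, which is friendlier if you prefer not to use the command line.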

 

 

I couldn’t get to grips with other qualitative software

quirkos spreadsheet comparison


It is very common for researchers to come along to our workshops having been to training for other qualitative analysis software, and saying they just ‘didn’t get it’ before. While very powerful, other tools can be intimidating, and unless you are using them on a regular basis, difficult to remember all the operations. We love how people can just come back to Quirkos after 6 months and get going again.


We also see a lot of people who tried other specialist qualitative software and found it didn’t fit for them. A lot of researchers go back to paper and highlighters, or even use Word or Excel, but get excited by how intuitive Quirkos makes the analysis process.

 

 

Just the basics, but everything you need


I always try and be honest in my workshops and list the limitations of Quirkos. It can’t work with multimedia data, can’t provide quantitative statistical analysis, and has limited memo functionality at the moment. But I am always surprised at how the benefits outweigh the limitations for most people: a huge majority of qualitative researchers only work with text data, and share my belief that if quantitative statistics are needed, they should be done in dedicated software. The idea has always been to take the core actions that researchers do all the time (coding, searching, developing frameworks and exploring data) and make them as smooth and quick as possible.

 


If you have comments of your own, good or bad, we love to hear them, it’s what keeps us focused on the diverse needs of qualitative researchers.


Get in touch and we can help explain the different licence options, including ‘seat’ based licences for departments or teams, as well as the static licences which can be purchased immediately through our website. There are also discounts for buying more than 3 licences, for different sectors, and developing countries.


Of course, we can also provide formal quotes, invoices and respond to purchase orders as your institution requires. We know that some departments take time to get things through finances, and so we can always provide extensions to the trial until the orders come through – we never want to see researchers unable to get at their data and continue their research!


So if you are thinking about buying a licence for Quirkos, you can download the full version to try for free for one month, and ask us any questions by email (sales@quirkos.com), Skype ‘quirkos’ or a good old 9-to-5 phone call on (+44) 0131 555 3736. We are here for qualitative researchers of all (coding) stripes and spots (bubbles)!

 

Snapshot data and longitudinal qualitative studies

longitudinal qualitative data


In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make collecting new data, an expensive undertaking, a rarer event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies – they are time-limited collection events that end up capturing a snapshot of the world as it was when they were recorded.


Most qualitative research collects data as a series of one-off dives into the lives and experience of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This might not be a certain date, it might also be at a key point in a participant’s journey – such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying the people at different stages in their individual or collective journeys.


This is sometimes called ‘Qualitative Longitudinal Research’ (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly over years or decades. Examples of this type of project include the five year ‘Timescapes’ project in the UK, which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).


However, such projects tend to be expensive, as they require having researchers working on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don’t have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at ‘before and after’ points – for example bookending a treatment or event – so it is often used in well planned evaluations. Researchers can ask questions about expectations before the occasion, and how people felt afterwards. This is useful in a lot of user experience research, but also helps to understand the motivations of actors, and improve the delivery of key services.


But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn’t an obvious event distinguishing the two snapshots. It also gets people to reflect on answers they gave in the first interview, and to see if their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking – do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high-quality data.


In qualitative analysis it is also an opportunity to look through all the data from a number of respondents and go back to ask new questions that are revealed by the data. In a grounded theory approach this is very valuable, but it can also be used to check researchers’ own interpretations and hypotheses about the research topic.


There are a number of research methods which are particularly suitable for longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don’t even have to have very fixed questions: it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over medium and long periods of time.


So when planning a research project it’s worth thinking about whether your research question could be better answered with a longitudinal or multiple contact approach. If you decide this is the case, just be aware of the possibility that you won’t be able to contact everyone multiple times; if that means some people’s data can’t be used, you need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.


Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise the different sources you have by participant or time point, so that you can look at just first or second interviews, or everything together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to do analysis on just certain data sources, and to show results and reports from everything, or just one set of data.


So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.

 

 

Archiving qualitative data: will secondary analysis become the norm?

archive secondary data

 

Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON led initiative to create a common standard for interchange of coded data between qualitative software packages.


The eventual aim is to develop a standardised file format for qualitative data, which not only allows use of data on any qualitative analysis software, but also for coded qualitative data sets to be available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.


A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.


In some disciplines, secondary analysis of archival data is commonplace – think for example of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The ‘Digital Humanities’ movement in academia is a cross-discipline look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don’t use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data comes in such a variety of different formats and formatting.


However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.

 

Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.

 

At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the opinion that they get better value from grants when the research has impact beyond one project, when the data can be reused again and again.

 

There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing ‘desk-based’ secondary analysis of someone else’s data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to make sure that they design research projects to maximise secondary analysis of their data, by providing good documentation on the collection process, research questions and detailed metadata for the sources.

 

The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of doing secondary analysis, and make sure that the most value is being gained from existing data.


However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.


I also think that it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect this to have maximum value and impact, many would be outraged to think of it sitting on the shelf unused for many years.


But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but dates and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.

 

The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata to be worthwhile. Really, this requires a behaviour change for researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos were recorded in Atlas.ti, for example. This standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and major agreements from qualitative software developers (including Quirkos) mean that work is underway to create a standard that should eventually allow not just the interchange of coded qualitative data, but hopefully easy archival storage as well.


So the future of archived qualitative data requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the world is moving towards a more open future for qualitative (and quantitative) research.


For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!

 

 

Stepping back from coding software and reading qualitative data

printing and reading qualitative data

There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis (eg Savin-Baden and Major 2013).

 

I generally don’t agree with these statements, and other qualitative bloggers such as Christina Silver and Kristi Jackson have written responses to critics of qualitative analysis software recently. However, I want to counter this a little with a suggestion that it is also possible to be too close to your data, and in fact this is a considerable risk when using any software approach.

 

I know this is starting to sound contradictory, but it is important to strike a happy balance so you can see the wood for the trees. It’s best to have both a close, detailed reading and analysis of your data, and a sense of the bigger picture emerging across all your sources and themes. That was the impetus behind the design of Quirkos: the canvas view of your themes, where the size of each bubble shows the amount of data coded to it, gives you a live bird’s-eye overview of your data at all times. It’s also why we designed the cluster view, to graphically show the connections between themes and nodes in your qualitative data analysis.

 

It is very easy to treat analysis as a close reading exercise, taking each source in turn, reading it through and assigning sections to codes or themes as you go. This is a valid first step, but only part of what should be an iterative, cyclical process. There are also lots of ways to challenge your coding strategy to keep you alert to new things coming from the data, and seeing trends in different ways.

 

However, I have a confession. I am a bit of a Luddite in some ways: I still prefer to print out and read transcripts of data from qualitative projects away from the computer. This may sound shocking coming from the director of a qualitative analysis software company, but for me there is something about both the physicality of reading from paper, and the process of stepping away from the analysis process that still endears paper-based reading to me. This is not just at the start of the analysis process either, but during. I force myself to stop reading line-by-line, put myself in an environment where it is difficult to code, and try and read the corpus of data at more of a holistic scale.
I waste a lot of trees this way (even with recycled paper), but I always return to the qualitative software with a fresh perspective and finish my coding and analysis there, having made the best of both worlds. Yes, it is time-consuming to do so many readings of the data, but I think good qualitative analysis deserves this time.

 

I know I am not the only researcher who likes to work in this way, and we designed Quirkos to make this easy to do. One of the most distinctive and ‘wow’ features of Quirkos is that you can create a standard Word document of all the data from your project, with all the coding preserved as colour-coded highlights. This makes it easy to print out, take away and read at your leisure, while still seeing how you have defined and analysed your data so far.

word export qualitative data

 

There are also some other really useful things you can do with the Word export, like sharing your coded data with a supervisor, colleague or even some of your participants. Even if you don’t have Microsoft Office, you can use free alternatives like LibreOffice or Google Docs, so pretty much everyone can see your coded data. But my favourite way to read away from the computer is to make a mini booklet, with turnable pages – I find this much more engaging than just a large stack of A4/Letter pages stapled in the top corner. If you have a duplex printer that can print on both sides of the page, generate a PDF from the Word file (just use Save As…) – even the free version of Adobe Reader has an awesome setting in Print to automatically create and format a little booklet:

word booklet

 

 

I always get a fresh look at the data like this, and although I am trying not to be too micro-analytical or do a lot of coding, I can always scribble notes in the margin. Of course, there is nothing to stop you stepping back and doing a reading like this in the software itself, but I don’t like staring at a screen all day, and I am not disciplined enough to work on the computer and not get sucked into a little more coding. Coding can be a very satisfying and addictive process, but when the time comes to define higher-level themes in the coding framework, I need to step back and think about the bigger picture before I dive into creating something based on the last source or theme I looked at. It’s also important to get the flow and causality of the sources sometimes, especially when doing narrative and temporal analysis. It’s difficult to read the direction of an interview or series of stories just from looking at isolated coded snippets.

 

Of course, you can also print out a report from Quirkos, containing all the coded data, and the list of codes and their relations. This is sometimes handy as a key on the side, especially if there are codes you think you are underusing. Normally at this stage in the blog I point out how you can do this with other software as well, but actually, for such a commonly required step, I find this very hard to do in other software packages. It is very difficult to get all the ‘coding stripes’ to display properly in NVivo text outputs, and MaxQDA has lots of options to export coded data, but not whole coded sources that I can see. Atlas.ti does better here with the Print with Margin feature, which shows stripes and code names in the margin – however, this only generates a PDF file, so it is not editable.

 

So download the trial of Quirkos today, and every now and then step back and make sure you don’t get too close to your qualitative data…

 

 

Problems with quantitative polling, and answers from qualitative data

 

The results of the US elections this week show a surprising trend: modern quantitative polling keeps failing to predict the outcome of major elections.

 

In the UK this is nothing new: in both the 2015 general election and the EU referendum, polling failed to predict the outcome. In 2015 the polls suggested very close levels of support for Labour and the Conservative party, but on the night the Conservatives won a significant majority. Similarly, the polls for the referendum on leaving the EU indicated a slight preference for Remain, when voters actually voted to Leave by a narrow margin. We now have a similar situation in the States, where despite polling ahead of Donald Trump, Hillary Clinton lost the Electoral College (while winning a slight majority in the popular vote). There are also recent examples of polling errors in Israel, Greece and the Scottish Independence Referendum.

 

Now, it’s fair to say that most of these polls were within the margin of error (typically 3%), so you would expect these inaccurate outcomes to happen periodically. However, there seems to be a systematic bias here, each time underestimating the support for more conservative attitudes. There is much hand-wringing about this in the press; see for example this declaration of failure from the New York Times. The suggestion that journalists and traditional media outlets are out of touch with most of the population may be true, but it does not explain the polling discrepancies.
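As a rough sanity check on that 3% figure, the textbook 95% margin of error for a simple random sample (which real polls, with their weighting and non-response adjustments, only approximate) can be sketched in a few lines of Python:

```python
import math

# Worst-case 95% margin of error for an estimated proportion,
# assuming a simple random sample. It is largest when p = 0.5.
def margin_of_error(n: int, p: float = 0.5) -> float:
    """Return the 95% margin of error as a percentage."""
    return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1_000), 1))    # 3.1 – the familiar ~3% figure
print(round(margin_of_error(100_000), 1))  # 0.3
```

Note the square root: multiplying the sample by 100 only improves precision tenfold, which is one reason ever-larger polls do not fix systematic sampling bias.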

 

There are many methodological problems: the number of people responding to telephone surveys is falling, perhaps not surprising considering the proliferation of nuisance calls in the UK. But for most pollsters this remains a vital way to reach the largest group of voters: older people. In contrast, previous attempts to predict elections through social media and big data approaches have been fairly inaccurate, and will likely remain that way as long as social media continues to be dominated by the young.

 

However, I think there is another problem here: pollsters are not asking the right questions. Look how terribly worded the exit poll questions are; they try to get people to put themselves in a box as quickly as possible: demographically, religiously, and politically. Then they ask a series of binary questions like “Should illegal immigrants working in the U.S. be offered legal status or deported to their home country?”, giving no opportunity for nuance. The aim is clear – just get to a neat quantifiable output that matches a candidate or hot topic.

 

There’s another question which I think, in all its many iterations, is poorly worded: who are you going to vote for? People might change whether they support a particular politician at any moment (including in the polling booth), but are unlikely to suddenly decide their family is not important to them. It’s often been shown that support for a candidate is not a reliable metric: people give answers influenced by the media and the researcher, and of course they can change their mind. But when you ask people questions about their beliefs, rather than a specific candidate, the answers tend to be much more accurate. Nor does believing a candidate is good always translate into voting for them. As we saw with Brexit, and possibly with the last US election, many people want to register a protest vote – they feel they are not being heard or represented well, and people aren’t usually asked if this is one of the reasons they vote. It’s also very important to consider that people are often strategic voters, and are themselves influenced by the polls which are splashed everywhere. The polls have become a constant daily race of who’s ahead, possibly increasing voter fatigue and leading to complacency among supporters of whoever is ahead on the day. All of this makes predictions much more difficult.

 


In contrast, here are two examples of qualitative focus group data on the US election. The first is a very heavily moderated CBS group, which got very aggressive. Here, although there is a strong attempt to ask for one-word answers on candidates, what comes out is a general distrust of the whole political system. This is also reflected in the Lord Ashcroft focus groups in different American states, which also include interviews with local journalists and party leaders. When people are not asked specific policy or candidate-based questions, there is surprising agreement: everyone is sick of the political system and the election process.


This qualitative data is really no more subjective than polls based on who answers a phone on a particular day, but it provides a level of nuance lacking in the quantitative polls and mainstream debate, which helps explain why people are voting in different ways – something many are still baffled by. There are problems with this type of data as well: it is difficult to accurately summarise and report on, and rarely are complete transcripts available for scrutiny. But if you want to better gauge the mood of a nation, discussion around the water-cooler or down the pub can be a lot more illuminating, especially when as a researcher or ethnographer you are able to get out of the way and listen (as you should when collecting qualitative data in focus groups).

 

Political data doesn’t have to be focus group driven either – these group discussions are done because they are cheap, but qualitative semi-structured interviews can really let you understand key individuals that might help explain larger trends. We did this before the 2015 general election, and the results clearly predicted and explained the collapse in support for the Labour party in Scotland.

 

There has been a rush in polling to add more and more respondents to surveys, with many reaching tens or even hundreds of thousands of people. But these give a very limited view of voter opinions, and as we’ve seen above can be heavily skewed by question wording and sampling method. It feels to me that deep qualitative conversations with a much smaller number of people from across the country would be a better way of gauging the social and political climate. And it’s important to make sure that participants have the power to set the agenda, because pollsters don’t always know what issues matter most to people. And for qualitative researchers and pollsters alike: if the right questions don’t get asked, you won’t get the right answers!

 

Don't forget to try Quirkos, the simplest and most visual way to analyse your qualitative text and mixed method data. We work for you, with a free trial and training materials, licences that don't expire and expert researcher support. Download Quirkos and try for yourself!

 

 

 

Tips for running effective focus groups

In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly, and to ensure you get good engagement from your participants.

 


1. Make sure you have a helper!

It’s very difficult to run focus groups on your own. If you want to lay out the room, greet people, deal with refreshment requests, check recording equipment is working, start video cameras, take notes, ask questions, let in late-comers and facilitate discussion, it’s much easier with two people (or even three for larger groups). You will probably want to focus on listening to the discussion, not taking notes and problem-solving at the same time. Having another facilitator or helper around can make a lot of difference to how well the session runs, as well as how much good data is recorded from it.

 


2. Check your recording strategy

Most people will record audio and transcribe their focus groups later. You need to make sure that your recording equipment will pick up everyone in the room, and also that you have a backup dictaphone and batteries! There are many more tips in this blog post. If you are planning to video the session, think it through carefully.

 

Do you have the right equipment? A phone camera might seem OK, but they usually struggle to record long sessions, and are difficult to position in a way that will show everyone clearly. Special cameras designed for gig and band practice are actually really good for focus groups; they tend to have wide-angle lenses and good microphones, so you don’t need to record separate audio. You might also want more than one camera (in a round-table discussion, someone will always have their back to the camera). Then you will want to think about using qualitative analysis software like Transana that supports multiple video feeds.

 

You also need to make sure that video is culturally appropriate for your group (some religions and cultures don’t approve of taking images), and that it won’t make people nervous and clam up in discussion. Usually I find a dictaphone less imposing than a camera lens, but you then lose the ability to record the body language of the group. On the other hand, video makes it much easier to identify different speakers!

 


3. Consent and introductions

I always prefer to do the consent forms and participant information before the session. Faffing around with forms to sign at the start or end of the workshop takes up a lot of time best used for discussion, and makes people rush through reading the project information. E-mail this to people ahead of time, so at least they can just sign on the day, or bring a completed form with them. I really feel that participants should get the option to see what they are signing up for before they agree to come to a session, so they are not made uncomfortable on the day if it doesn't sound right for them. However, make sure there is an opportunity for people to ask any questions, and state any additional preferences, privately or in public.

 


4. Food and drink

You may decide not to have refreshments at all (your venue might dictate that) but I really love having a good spread of food and drink at a focus group. It makes it feel more like a party or family occasion than an interrogation procedure, and really helps people open up.

 

While tea, coffee and biscuits/cookies might be enough for most people, I love baking and always bring something home-baked like a cake or cookies. Getting to talk about and offer food is a great icebreaker, and it also makes people feel valued when you have spent the time to make something. A key part of getting good data from a focus group is to set a congenial atmosphere, and an interesting choice of drinks or fruit can really help this. Don’t forget to get dietary preferences ahead of time, and consider the need for vegetarian, diabetic and gluten-free options.

 


5. The venue and layout

A lot has already been said about the best way to set out a focus group discussion (see Chambers 2002), but there are a few basic things to consider. First, a round or rectangular table arrangement works best, not lecture hall-type rows. Everyone should be able to see the face of everyone else. It’s also important not to have the researcher/facilitator at the head or even centre of the table. You are not the boss of the session, merely there to guide the debate. There is already a power dynamic because you have invited people and are running the session. Try and sit yourself on the side as an observer, not the director of the session.

 

In terms of the venue, try and make sure it is as quiet as possible, and good natural light and even high ceilings can help spark creative discussion (Meyers-Levy and Zhu 2007).

 


6. Set and state the norms

A common problem in qualitative focus group discussions is that some people dominate the debate, while others are shy and contribute little. Chambers (2002) suggests simply saying at the beginning of the session that this tends to happen, to make people conscious of sharing too much or too little. You can also actively manage this during the session by prompting other people to speak, going round the room person by person, or using more formal systems where people raise their hands to talk or have to be holding a stone. These methods are more time consuming for the facilitator and can stifle open discussion, so it's best to use them only when necessary.

 

You should also set out ground rules, attempting to create an open space for uncritical discussion. It's not usually the aim for people to criticise the view of others, nor for the facilitator to be seen as the leader and boss. Make these things explicit at the start to make sure there is the right atmosphere for sharing: one where there is no right or wrong answer, and everyone has something valuable to contribute.

 


7. Exercises and energisers

To prompt better discussion when people are tired or not forthcoming, you can use exercises such as card ranking, role play, and prompts for discussion such as stories or newspaper articles. Chambers (2002) suggests dozens of these, as well as some off-the-wall 'energizer' exercises: fun games to get people to wake up and encourage discussion. More on this in the last blog post. It can really help to go round the room and have people introduce themselves with a fun fact – not just to get names and voices on tape for later identification, but as a warm-up.

 

Also, the first question, exercise or discussion point should be easy. If the first topic is 'How did you feel when you had cancer?', that can be pretty intimidating to start with. Simpler questions, such as 'What was hospital food like?' or even 'How was your trip here?', are topics everyone can easily contribute to and safely argue over, building the confidence to share something deeper later on.

 


8. Step back, and step out

In focus groups, the aim is usually to get participants to discuss with each other, rather than hold a series of dialogues with the facilitator. The power dynamics of the group need to reflect this, and once things are set in motion, the researcher should intervene as little as possible, occasionally asking for clarification or setting things back on track. It's also their role to help participants understand this, and to allow the group discussion to be as co-interactive as possible.

 

“When group dynamics worked well the co-participants acted as co-researchers taking the research into new and often unexpected directions and engaging in interaction which were both complementary (such as sharing common experiences) and argumentative”
- Kitzinger 1994

 


9. Anticipate depth

Focus groups usually last a long time, rarely less than two hours, but even a half or whole day of discussion can be appropriate if there are lots of complex topics to cover. It's also fine to have participants attend multiple focus groups if there is a lot of ground to cover; just consider what will best fit around the lives of your participants.

 

At the end of these sessions you should find there is a lot of detailed and deep qualitative data for analysis. It can really help in digesting this to make lots of notes during the session: a summary of key issues, your own reflexive comments on the process, and the unspoken subtext (who wasn't sharing on which topics, what people mean when they say, 'you know, that lady with the big hair').


You may also find that qualitative analysis software like Quirkos can help pull together all the complex themes and discussions from your focus groups, and break down the mass of transcribed data you will end up with! We designed Quirkos to be very simple and easy to use, so do download it and try it for yourself...

 

 

 

Considering and planning for qualitative focus groups

focus groups qualitative

 

This is the first in a two-part series on focus groups. This week, we look at some of the reasons why you might consider using them in a research project, and questions to make sure they are well integrated into your research strategy. Next week we will look at some practical tips for effectively running and facilitating a successful session.


Focus groups have been used as a research method since the 1950s, but were not as common in academia until much later (Colucci 2007). Essentially they are time limited sessions, usually in a shared physical space, where a group of individuals are invited to discuss with each other and a facilitator a topic of interest to the researcher.


These should not be seen as ‘natural’ group settings. They are not really an ethnographic method, because even if the group already exists (for example, people who work together or belong to the same social group), the session exists purely to create a dialogue for research purposes.


Together with ‘focused’ or semi-structured interviews, they are one of the most commonly used methods in qualitative research, both in market research and the social sciences. So what situations and research questions are they appropriate for?


If you are considering choosing focus groups as an easy way to quickly collect data from a large number of respondents, think again! Although I have seen a lot of market research firms do a single focus group as the extent of their research, one group generates limited data on its own. It’s also a mistake to consider data from a focus group the same as interview data from the same number of people: there is a group dynamic, which is usually the main benefit of adopting this approach. Focus groups are best at recording the interactions and debate between a group of people, not many separate opinions.


They are also very difficult to schedule and manage from a practical standpoint. The researcher must find a suitably large and quiet space that everyone can attend at a mutually convenient time. Compared with scheduling one-on-one interviews, the practicalities are much more difficult: a café or small office is rarely a good venue. It may be necessary to hire a dedicated venue or meeting room, as well as proper microphones, to make sure everyone’s voice can be heard in the recording. The numbers that actually show up on the day will always fluctuate, so it’s unusual for all focus groups to have the same number of participants.


Although a lot of research projects seem to just do 3 or 4 focus groups, it’s usually best to try for a larger number, because the dynamics and data are likely to be very different in each one. In general you are less likely to see saturation on complex issues, as things go ‘off the rails’ and participants take things in new directions. If managed right, this should be enlightening rather than scary, but you need to anticipate this possibility, and make sure you are planning to collect enough data to cover all the bases.


So, before you commit to focus groups in your qualitative methods, go through the questions below and make sure you have reasons to justify their inclusion. There isn’t a right answer to any of them, because they vary so much between research projects. But once you can answer them, you will have a clear idea of how focus groups fit into your study, and be able to write them up for your methodology section.

 

Planning Groups

How accessible will focus groups be to your planned participants? Are participants going to have language or confidence issues? Are you likely to get a good range of participation? If the people you want to talk to are shy, or not used to speaking in the language the researcher wants to conduct the session in, focus groups may not get everyone talking as much as you’d like.


Are there anonymity issues? Are people with a stigmatising condition going to be willing to disclose their status or experience to others in the group? Will most people already know each other (and their secrets) and some not? When working with sensitive issues, you need to consider these potential problems, and your ethics review board will want to know you’ve considered this too.


What size of group will work best, and is it appropriate to plan focus groups around pre-existing groups? Do you want to choose people in a group that have very different experiences to provoke debate or conflict? Alternatively you can schedule groups of people with similar backgrounds or opinions to better understand a particular subset of your population.

 

Format

What will the format of your focus group be: just an open discussion? Or will you use prompts, games, ranking exercises, card games, pictures, media clippings, flash cards or other tools to encourage discussion and interactivity (see Colucci 2007)? These can be useful not just as prompts, but as points of commonality and comparison between groups. But make sure they are appropriate for the kind of group you want to work with, and don’t seem forced or patronising (Kitzinger 1994).


Analysis

Last of all, think about how you are going to analyse the data. Focus groups really require an extra level of analysis: the dynamic and dialectic can be seen as an extra layer on what participants are revealing about themselves. You might also need to be able to identify individual speakers in the transcript and possibly their demographic details if you want to explore these.


What is the aim within your methodology: to generate open discussion, or confirm and detail a specific position? Often focus groups can be very revealing if you have a very loose theoretical grounding, or are trying to initially set a research question.


How will the group data triangulate as part of a mixed methodology? Will the same people be interviewed or surveyed? What explicitly will you get out of the focus groups that will uniquely contribute to the data?

 


So this all sounds very cautionary and negative, but focus groups can be a wonderful, rich and dynamic data collection tool, one that really challenges the researcher and their assumptions. Finally, focus groups are INTENSE experiences for a researcher. There are so many things to juggle: the data collection, facilitating and managing group dynamics, taking notes, and running out to let in latecomers. It’s difficult to do with just one person, so make sure you get a friend or colleague to help out!

 

Quirkos can help you to manage and analyse your focus group transcriptions. If you have used other qualitative analysis software before, you might be surprised at how easy and visual Quirkos makes the analysis of qualitative text – you might even get to enjoy it! You can download a trial for free and see how it works, but there are also a bunch of video tutorials and walk-throughs so you quickly get the most out of your qualitative data.

 


Further Reading and References

 

Colucci, E., 2007, Focus groups can be fun: the use of activity-oriented questions in focus group discussions, Qual Health Res, 17(10), http://qhr.sagepub.com/content/17/10/1422.abstract


Grudens-Schuck, N., Allen, B., Larson, 2004, Methodology Brief: Focus group fundamentals, Extension Community and Economic Development Publications, Book 12, http://lib.dr.iastate.edu/extension_communities_pubs/12


Kitzinger, J., 1994, The methodology of Focus Groups: the importance of interaction between research participants, Sociology of Health and Illness, 16(1), http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep11347023/pdf

 

Robinson, N., 1999, The use of focus group methodology with selected examples from sexual health research, Journal of Advanced Nursing, 29(4), 905-913

 

 

Circles and feedback loops in qualitative research

qualitative research feedback loops

The best qualitative research forms an iterative loop: examining, then re-examining. There are multiple reads of the data, multiple layers of coding, and, hopefully, constantly improving theory and insight into the underlying lived world. During the research process it is best to stay in a constant state of feedback with your data and theory.


During your literature review, you may have several cycles through the published literature, with each pass revealing a deeper network of links. You will typically see this when you start going back to ‘seminal’ texts on core concepts from older publications, showing connected cycles of different interpretations and trends in methodology. You can see this with paradigm trends like social capital, neo-liberalism and power. It’s possible to see major theorists like Foucault, Chomsky and Butler each create new cycles of debate in the field, building up from the previous literature.


A research project will often have a similar feedback loop between the literature and the data: theory influences the research questions and methodology, but engagement with the real ‘folk world’ challenges interpretations of the data and the practicalities of data collection. The literature is thus challenged by the research process and findings, and a new reading of the literature is demanded to corroborate or challenge new interpretations.

 

Thus it’s a mistake to think that a literature review only happens at the beginning of the research process: it is important to engage with theory again, not just at the end of a project when drawing conclusions and writing up, but during the analysis process itself. Especially in qualitative research, the data will rarely fit neatly with one theory or another, but will demand a synthesis or a new angle on existing research.

 

The coding process is similar, in that it usually requires many cycles through the data. After reading one source, it can feel like the major themes and codes for the project are clear, and will set the groundwork for the analytic framework. But what if you had started with another source? Would the codes you created have been the same? It’s easy to get complacent with the first codes you start with, worrying that the coding structure will get too complicated if you keep creating new nodes.

 

However, there will always be sources which contain unique data, or express different opinions and experiences that don’t chime with existing codes. And what if a new code actually fits some of the previous data better? You would need to go back to previously analysed sources and explore them again. This is why most experts recommend multiple passes through the data: not just to be consistent and complete, but because there is a feedback loop in the codes and themes themselves. Once you have a first coding structure, the framework itself can be examined and reinterpreted, looking for groupings and higher-level interpretations. I’ve talked about this more in this blog article about qualitative coding.


Quirkos is designed to keep researchers deeply embedded in this feedback process, with each coding event subtly changing the dynamics of the coding structure. Connections and coding are shown in real time, so you can always see what is happening and what is being coded most, and thus constantly challenge your interpretation and analysis process.

 

Queries, questions and sub-set analysis should also be easy to run and dynamic, because good qualitative researchers shouldn’t only interrogate and interpret the data at the end of the analysis process; it should happen throughout. That way surprises and uncertainties can be identified early, and new readings of the data can illuminate these discoveries.

 

In a way, qualitative analysis is never done, and it is not usually a linear process. Even when project practicalities dictate an end point, a coded research project in software like Quirkos sits on your hard drive, awaiting time for secondary analysis, or for the data to be challenged from a different perspective and research question. And to help you when you get there, your data and coding bubbles will immediately show you where you left off: what the biggest themes were, how they connected, and let you go to any point in the text to see what was said.

 

And you shouldn’t need retraining to use the software again. I hear so many stories of people who have done training courses for major qualitative data analysis software, and when it comes to revisiting their data, the operations are all forgotten. Now, Quirkos may not have as many features as other software, but the focus on keeping things visual and in plain sight means that they should comfortably fit under your thumb again, even after a long stretch of not using it.

 

So download the free trial of Quirkos today, and see how its different way of presenting the data helps you continuously engage with your data in fresh ways. Once you start thinking in circles, it’s tough to go back!

 

Triangulation in qualitative research

triangulation facets face qualitative

 

“Triangles are my favourite shape,
Three points where two lines meet”

- alt-J

 

Qualitative methods are sometimes criticised as being subjective, based on single, unreliable sources of data. But with the exception of some case study research, most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data, be it combining quantitative with qualitative, or ‘qual on qual’.


I don’t think of data in qualitative research as being static and definite. It’s not like a point on a graph: qualitative data has more depth and context than that. In triangulation, we think of two points of data that move towards an intersection. In fact, if you are trying to visualise triangulation, consider instead two vectors: directions suggested by two sources of data that may converge at some point, creating a triangle. This point of intersection is where the researcher has seen a connection between the inferences about the world implied by two different sources of data. However, there may be lines that run parallel, or divergent directions that will never cross: not all data will agree and connect, and it’s important to note this too.


You can triangulate almost all the constituent parts of the research process: method, theory, data and investigator.


Data triangulation (also called participant or source triangulation) is probably the most common, where you examine data from different respondents collected using the same method. If we consider that each participant has a unique and valid world view, the researcher’s job is often to look for patterns or contradictions beyond the individual experience. You might also consider the need to triangulate between data collected at different times, to show changes in lived experience.

 

Since every method has weaknesses or biases, it is common for qualitative research projects to collect data in a variety of different ways to build up a better picture. Thus a project can collect data from the same or different participants using different methods, and use method (or between-method) triangulation to integrate them. Some qualitative techniques can be very complementary: for example, semi-structured interviews can be combined with participant diaries or focus groups to provide different levels of detail and voice. What people share in a group discussion may be less private than what they would reveal in a one-to-one interview, but in a group dynamic people can be reminded of issues they might otherwise forget to talk about.


Researchers can also design a mixed-method qualitative and quantitative study where very different methods are triangulated. This may take the form of a quantitative survey, where people rank an experience or service, combined with a qualitative focus group, interview or even open-ended comments. It’s also common to see a validated measure from psychology used to give a metric to something like pain, anxiety or depression, and then combine this with detailed data from a qualitative interview with that person.


In ‘theoretical triangulation’, a variety of different theories are used to interpret the data, such as discourse, narrative and context analysis, and these different ways of dissecting and illuminating the data are compared.


Finally there is ‘investigator triangulation’, where different researchers each conduct separate analysis of the data, and their different interpretations are reconciled or compared. In participatory analysis it’s also possible to have a kind of respondent triangulation, where a researcher is trying to compare their own interpretations of data with that of their respondents.

 

 

While there is a lot written about the theory of triangulation, there is not as much about actually doing it (Jick 1979). In practice, researchers often find it very difficult to DO the triangulation: different data sources tend to be difficult to mesh together, and will have very different discourses and interpretations. If you are seeing ‘anger’ and ‘dissatisfaction’ in interviews with a mental health service, it will be difficult to triangulate such emotions with the formal language of a policy document on service delivery.


In general the qualitative literature cautions against seeing triangulation as a way to improve the validity and reliability of research, since this tends to imply a rather positivist agenda in which there is an absolute truth that triangulation gets us closer to. However, there are plenty who suggest that the quality of qualitative research can be improved in this way, such as Golafshani (2003). So you need to be clear about your own theoretical underpinning: can you get to an ‘absolute’ or ‘relative’ truth through your own interpretations of two types of data? Perhaps rather than positivist, this is a pluralist approach, creating multiplicities of understandings while still allowing for comparison.


It’s worth bearing in mind that triangulation and multiple methods aren’t an easy way to make better research. You still need to do all the different sources justice: make sure data from each method is fully analysed and iteratively coded (if appropriate). You should also keep going back and forth, analysing data from alternate methods in a loop to make sure they are well integrated and considered.

 


Qualitative data analysis software can help with all this, since you will have a lot of data to process in different and complementary ways. In software like Quirkos you can create levels, groups and clusters to keep different analysis stages together, and have quick ways to do sub-set analysis on data from just one method. Check out the features overview or mixed-method analysis with Quirkos for more information about how qualitative research software can help manage triangulation.

 


References and further reading

Carter et al. 2014, The use of triangulation in qualitative research, Oncology Nursing Forum, 41(5), https://www.ncbi.nlm.nih.gov/pubmed/25158659

 

Denzin, 1978 The Research Act: A Theoretical Introduction to Sociological Methods, McGraw-Hill, New York.

 

Golafshani, N., 2003, Understanding reliability and validity in qualitative research, The Qualitative Report, 8(4), http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1870&context=tqr


Bekhet, A., Zauszniewski, J., 2012, Methodological triangulation: an approach to understanding data, Nurse Researcher, 20(2), http://journals.rcni.com/doi/pdfplus/10.7748/nr2012.11.20.2.40.c9442

 

Jick, 1979, Mixing Qualitative and Quantitative Methods: Triangulation in Action,  Administrative Science Quarterly, 24(4),  https://www.jstor.org/stable/2392366

 

 

100 blog articles on qualitative research!

images by Paul Downey and AngMoKio

 

Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't make it into the literature, like transcribing without getting backache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts, easily the length of a good-sized PhD thesis!

 

That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.

 

While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!

 


Epistemology

What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences between quantitative methods and epistemologies

 

What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research

 

Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project

 

Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data

 

Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)

 

 


Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data

 

Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy

 

10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods, this article provides some hints to make sure they go smoothly, and provide rich data

 

Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis, requires acknowledging some caveats

 

Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis, find out how it can help your study

 


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls

 

Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...

 

Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose, and should be closely matched with the research objectives

 

Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research

 

Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it

 

 

Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data

 

Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this

 

6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories

 

Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data

 

Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or vague

 

 


Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs

 

How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data

 

Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool

 

Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, levels allow you to create grouped categories of themes that go across multiple clustered bubbles

 

10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis

 

 

Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client

 

Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach

 

The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users

 

Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted

 

Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems

 

 

 

Thinking About Me: Reflexivity in science and qualitative research

self rembrandt reflexivity

Reflexivity is a process (and it should be a continuing process) of reflecting on how the researcher could be influencing a research project.


In a traditional positivist research paradigm, the researcher attempts to be a neutral influence on the research. They make rational and logical interpretations, assume a ‘null hypothesis’ in which they expect experiments to show no effect, and have no pre-defined concept of what the research will show.


However, this is a lofty aspiration, and difficult to achieve in practice. Humans are fallible and emotional beings, with conflicting pressures from jobs, publication records and their own hunches. There are countless stories of renowned academics having to retract papers, or even entire research careers, because of faked results, flawed interpretations or biased coding procedures.


Many consider it impossible to fully remove the influence of the researcher from the process, so all research is ‘tainted’ in some way by the prejudices of those involved in the project. This links to the concept of ‘implicit bias’, where even well-meaning individuals are influenced by subconscious prejudices. These have been shown to have a significant discriminatory impact on pay, treatment in hospitals and recruitment along lines of gender and ethnicity.


So does this mean that we should abandon research, and the pursuit of truly understanding the world around us? No! Although we might reject the notion of attaining an absolute truth, that doesn’t mean we can’t learn something. Instead of pretending that the researcher is an invisible and neutral piece of the puzzle, a positionality and reflexivity approach argues that the background of the researcher should be detailed in the same way as the data collection methods and analytical techniques.


But how is this done in practice? Does a researcher have to bare their soul to the world, and submit their complete tax history? Not quite, but many in feminist and post-positivist methodologies will create a ‘positionality statement’ or ‘reflexivity statement’. This is a little like a CV or self-portrait of potential experiences and bias, in which the researcher is honest about personal factors that might influence their decisions and interpretations. These might include the age, gender, ethnicity and class of the researcher, social and research issues they consider important, their country and culture, political leanings, life experiences and education. In many cases a researcher will include such a statement with their research publications and outputs, just Googling ‘positionality statements’ will provide dozens of links to examples.

 

However, I feel that this is a minimum level of engagement with the issue, and it’s actually important to keep a reflexive stance throughout the research process. Just as a one-off interview is not as accurate a record as a daily diary, keeping reflexivity notes as an ongoing part of a research journal is much more powerful. Here a researcher can log changes in their situation, assumptions and decisions made throughout the research process that might be affected by their personal stance. It’s important that the researcher is constantly aware of when they are making decisions, because each is a potential source of influence. This includes deciding what to study, who to sample, what questions to ask, and which sections of text to code and present in findings.


Why is this especially pertinent to qualitative research? It’s often raised in social science, especially in ethnography and close case study work with disadvantaged or hard-to-reach populations, where researchers have a much closer engagement with their subjects and data. It could be considered that there are more opportunities for personal stance to have an impact here, and that many qualitative methods, especially analysis using grounded theory, are open to multiple interpretations that vary by researcher. Many make the claim that qualitative research and data analysis is more subjective than quantitative methods, but as we’ve argued above, it might be better to say that they are both subjective. Many qualitative epistemological approaches are not afraid of this subjectivity, but will argue it is better made forthright and thus challenged, rather than trying to keep it in the dark.


Now, this may sound a little crazy, especially to those in traditionally positivist fields like the STEM subjects (Science, Technology, Engineering and Mathematics). Here there is generally a different move: to use process and peer review to remove as many aspects of the research that are open to subjective interpretation as possible. This direction is fine too!


However, I would argue that researchers already have to make a type of reflexivity document: a conflict of interest statement. Here academics are supposed to declare any financial or personal interest in the research area that might influence their neutrality. This is just like a positionality statement: an admission that researchers can be influenced by prejudices and external factors, and that readers should be aware of such conflicts of interest when making their own interpretation of the results.


If money can influence science (and it totally can), it’s also been shown that gender and other aspects of an academic’s background can too. All reflexivity asks us to do is be open and honest with our readers about who we are, so they can better understand and challenge the decisions we make.

 

 

Like all our blog articles, this is intended to be a primer on some very complex issues. You’ll find a list of references and further reading below (in addition to the links included above). Don’t forget to try Quirkos for all your qualitative data analysis needs! It can help you keep, manage and code a reflexive journal throughout your analysis procedure. See this blog article for more!

 

 

References

 

Bourke, B., 2014, Positionality: Reflecting on the Research Process, The Qualitative Report 19, http://www.nova.edu/ssss/QR/QR19/bourke18.pdf


Day, E., 2002, Me, My*self and I: Personal and Professional Re-Constructions in Ethnographic Research, FQS 3(3) http://www.qualitative-research.net/index.php/fqs/article/view/824/1790


Greenwald, A., Krieger, L., 2006, Implicit Bias: Scientific Foundations, California Law Review, 94(4). http://www.jstor.org/stable/20439056


Lynch, M., 2000, Against Reflexivity as an Academic Virtue and Source of Privileged Knowledge, Theory, Culture & Society 17(3), http://tcs.sagepub.com/content/17/3/26.short


Savin-Baden, M., Major C., 2013, Personal stance, positionality and reflexivity, in Qualitative Research: The essential guide to theory and practice. Routledge, London.


Soros, G., 2013, Fallibility, reflexivity and the human uncertainty principle, Journal of Economic Methodology, 20(4) https://www.georgesoros.com/essays/fallibility-reflexivity-and-the-human-uncertainty-principle-2/

 

 

The importance of keeping open-ended qualitative responses in surveys

open-ended qualitative responses in surveys

I once had a very interesting conversation at an MRS event with a market researcher from a major media company. He told me that they were increasingly ‘costing-out’ the qualitative open-ended questions from customer surveys because they were too expensive and time consuming to analyse. Increasingly they were replacing open-ended questions with a series of Likert scale questions which could be automatically and statistically examined.

 

I hear similar arguments a lot, and I totally understand the sentiment: doing good qualitative research is expensive, and requires good interpretation. However, it’s just as possible to do statistical analysis poorly, and come up with meaningless and inaccurate answers. For example, when working with Likert scales, you have to be careful about which parametric tests you use, and make sure that the data is normally distributed (Sullivan and Artino 2013).
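As an illustrative sketch of the kind of check Sullivan and Artino describe (using SciPy, with entirely made-up ratings), a normality test is one quick way to decide whether parametric tests are even appropriate for Likert responses:

```python
from scipy import stats

# Hypothetical 1-5 Likert responses to a single survey item
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 4, 3, 2, 5, 4, 4, 3]

# Shapiro-Wilk tests the null hypothesis that the data are normally distributed
stat, p = stats.shapiro(ratings)

if p < 0.05:
    # Normality rejected: safer to treat the data as ordinal and
    # use non-parametric tests (e.g. Mann-Whitney U) instead
    print(f"p={p:.3f}: treat as ordinal, use non-parametric tests")
else:
    print(f"p={p:.3f}: normality not rejected")
```

Discrete, bounded scale data like this will often fail such a test, which is exactly why the choice of statistical method needs care.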

 

There is evidence that increasing the number of options in closed questions does not significantly change the responses participants share (Dawes 2008), so if you need a good level of nuance into customer perceptions, why not let your users choose their own words? “Quick qual” approaches, like asking people to use one word to describe the product or their experience, can be really illuminating. Better yet, these responses are easy to analyse, and present as an engaging word cloud!
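Tallying such one-word answers takes only a few lines. A minimal sketch with Python’s standard library (the responses here are invented):

```python
from collections import Counter

# Hypothetical one-word product descriptions from a "quick qual" question
answers = ["durable", "Durable", "sturdy", "cheap", "durable", "practical", "sturdy"]

# Normalise case, then count frequencies - this is the raw input for a word cloud
counts = Counter(word.lower() for word in answers)
print(counts.most_common(3))  # [('durable', 3), ('sturdy', 2), ('cheap', 1)]
```

The resulting frequency table can be fed straight into any word cloud generator, with larger counts drawn as larger words.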

 

Even when you have longer responses, it’s not always necessary to take a full classification and quantification approach to qualitative survey data, such as in Nardo (2003). For most market research investigations, this level of detail is not needed by the researcher or client.

 

Indeed, you don’t need to do deep analysis of the data to get some value from it. A quick read through some of the comments can make sure your questions are on track, and there aren’t other common issues being raised. It helps check you were asking the right questions, and can help explain why answers for some people aren’t matching up with the rest. As ever, qualitative data is great for surprises, responses you hadn’t thought of, and understanding motivations.

 

Removing open-ended questions means you can’t provide nice quotes or verbatims from the feedback, which are great for grounding a report and making it come to life. If you have no quotes from respondents, you are also missing the opportunity to create marketing campaigns around comments from customer evangelists, something Lidl UK has done well by featuring positive Tweets about their brand. In this article marketing director Claire Farrant notes the importance of listening and engaging with customer feedback in this way. It can also make people more satisfied with the feedback process if they have a chance to voice their opinions in more depth.

 

I think it’s also vital to include open-ended questions when piloting a survey or questionnaire. Having qualitative data at an early stage can let you refine your questions, and the possible responses. Sometimes the language used by respondents is important to reflect when setting closed questions: you don’t want to be asking questions like “How practical did you find this product?” when the most common term coming from the qualitative data is “durable”. It’s not always necessary to capture and analyse qualitative data for thousands of responses, but looking at a sample of a few dozen or hundred can show if you are on the right track before a big push.

 

You also shouldn’t worry too much about open-ended surveys having lower completion rates. A huge study by SurveyMonkey found that a single open question actually increased engagement slightly, and only when there were 5 or more open-ended response boxes did this have a negative impact on completion.

 

Finally, without qualitative responses, you lose the ability to triangulate and integrate your qualitative and quantitative data: one of the most powerful tools in survey analysis. For example, in Quirkos it is trivial to do very quick comparative subset analysis, using any of the closed questions as a pivot point. So you can look at the open ended responses from people who gave high satisfaction scores next to those that were low, and rather than then being stuck trying to explain the difference in opinion, you can look at the written comments to get an insight into why they differ.
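As a minimal sketch of this kind of pivot (using pandas, with invented satisfaction scores and comments rather than real survey output):

```python
import pandas as pd

# Hypothetical survey export: a closed satisfaction score plus an open-ended comment
df = pd.DataFrame({
    "satisfaction": [5, 2, 4, 1, 5, 2],
    "comment": [
        "Quick and friendly service",
        "Waited over 15 minutes on hold",
        "Problem solved first time",
        "Kept being transferred between departments",
        "Agent was very helpful",
        "Long wait, then call dropped",
    ],
})

# Pivot on the closed question: compare comments from high vs low scorers
high = df.loc[df["satisfaction"] >= 4, "comment"]
low = df.loc[df["satisfaction"] <= 2, "comment"]

print("High scorers:", list(high))
print("Low scorers:", list(low))
```

Reading the two subsets side by side is often enough to suggest why the groups differ, without any further statistics.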

 

And I think this is key to creating good reports for clients. Usually, the end point for a customer is not being told that 83% of their customers are satisfied with their helpline: they want actions that will improve or optimise delivery. What exactly was the reason 17% of people had a bad experience? It’s all very well to create an elaborate chain of closed questions, such as ‘You said you were unsatisfied. Which of these reasons best explains this? You said the response time made you unsatisfied. How long did you wait? 0-3min, 3-5min’, and so on. But these types of surveys are time consuming to program and make comprehensive, and sometimes just allowing someone to type “I had to wait more than 15 minutes for a response” would have given you all the data you needed on a critical point.

 

The depth and insight from qualitative data can illuminate differences in respondents’ experiences, and give the key information to move things forward. Instead of thinking how you can cost-out qualitative responses, think instead how you can make sure they are integrated to provide maximum client value! A partnership between closed and open questions is usually the most powerful way to get both a quick summary and deep insight into complex interactions, and there is no need to be afraid of the open box!

 

Quirkos is designed to make it easy to bring both qualitative and quantitative data from surveys together, and use the intuitive visual interface to explore and play with market research data. Download a free trial of our qualitative analysis software, or contact us for a demo, and see how quickly you can step-up from paper based analysis into a streamlined and insightful MRX workflow!

 

Analytical memos and notes in qualitative data analysis and coding

Image adapted from https://commons.wikimedia.org/wiki/File:Male_forehead-01_ies.jpg - Frank Vincentz

There is a lot more to qualitative coding than just deciding which sections of text belong in which theme. It is a continuing, iterative and often subjective process, which can take weeks or even months. During this time, it’s almost essential to be recording your thoughts, reflecting on the process, and keeping yourself writing and thinking about the bigger picture. Writing doesn’t start after the analysis process; in qualitative research it should often precede, follow and run in parallel to an iterative interpretation.


The standard way to do this is either through a research journal (which is also vital during the data collection process) or through analytic memos. Memos create an important extra level of narrative: an interface between the participant’s data, the researcher’s interpretation and wider theory.


You can also use memos as part of a summary process, to articulate your interpretations of the data in a more concise format, or to widen the scope by drawing on broader theory.


It’s also a good cognitive exercise: regularly make yourself write down what you are thinking, and practise articulating your ideas. It will make writing up at the end a lot easier! Memos can be a very flexible tool, and qualitative software can help keep these notes organised. Here are 9 different ways you might use memos as part of your work-flow for qualitative data analysis:

 

Surprises and intrigue
This is probably the most obvious way to use memos: noting things during your reading and coding that are especially interesting, challenging or significant in the data. It’s important to do more than just ‘tag’ these sections: reflect to yourself (and others) on why these sections or statements stand out.

 

Points where you are not sure
Another common use of memos is to record sections of the data that are ambiguous, could be interpreted in different ways, or just plain don’t fit neatly into existing codes or interpretations. But again, this should be more than just ‘flagging’ bits to look at again later; it’s important to record why the section is different: sometimes the act of having to describe the section can aid comprehension and illuminate the underlying causation.

 

Discussion with other researchers
Large qualitative research projects will often have multiple people coding and analysing the data. This can help to spread the workload, but also allows for a plurality of interpretations, and peer-checking of assumptions and interpretations. Thus memos are very important in a team project, as they can be used to explain why one researcher interpreted or coded sources in a certain way, and flag up ambiguous or interesting sections for discussion.

 

Paper-trail
Even if you are not working as part of a team, it can be useful to keep memos to explain your coding and analytical choices. This may be important to your supervisors (or viva panel) as part of a research thesis, and can be seen as good practice for sharing findings in which you are transparent about your interpretations. There are also some people with a positivist/quantitative outlook who find qualitative research difficult to trust because of the large amount of seemingly subjective interpretation. Memos which detail your decision making process can help ‘show your working out’ and justify your choices to others.

 

Challenging or confirming theory
This is another common use of memos: to discuss how the data either supports or challenges theory. It is unusual for respondents to neatly say something like “I don’t think my life fits with the classical structure of an Aeschylean tragedy”, should this happen to be your theoretical approach! This means you need to make these observations and higher-level interpretations yourself, noting how particular statements will influence your interpretations and conclusions. If someone says something that turns your theoretical framework on its head, note it, but also use the memos as a space to record context that might later explain this outlier. Memos like this might also help you identify patterns in the data that weren’t immediately obvious.

 

Questioning and critiquing the data/sources
Respondents will not always say what they mean, and sometimes there is an unspoken agenda below the surface. Depending on the analytical approach, an important role of the researcher is often to draw deeper inferences which may be implied or hinted at by the discourse. Sometimes, participants will outright contradict themselves, or suggest answers which seem to be at odds with the rest of what they have shared. Memos are also a great place to note the unsaid. You can’t code data that isn’t there, but sometimes it’s really obvious that a respondent is avoiding discussing a particular issue (or person). Memos can note this observation, and discuss why topics might be uncomfortable or left out of the narrative.


Part of an iterative process
Most qualitative research does not follow a linear structure: it is iterative, and researchers go back and re-examine the data at different stages in the process. Memos should be no different; they can be analysed themselves, and should be revisited and reviewed as you go along to show changes in thought, or wider patterns that are emerging.


Record your prejudices and assumptions
There is a lot of discussion in the literature about the importance of reflexivity in qualitative research, and recognising the influence of the non-neutral researcher voice. Too often, this does not go further than a short reflexivity/positionality statement, but it should really be a constantly reconsidered part of the analytical process. Memos can be used as a prompt and record of your reflexive process: how the data challenges your prejudices, or how you might be introducing bias into the interpretation of the data.


Personal thoughts and future directions
As you go through the data, you may be noticing interesting observations which are tangential, but might form the basis of a follow-on research project or reinterpretation of the data. Keeping memos as you go along will allow you to draw from this again and remember what excited you about the data in the first place.

 

 

Qualitative analysis software can help with the memo process, keeping them all in the same place, and allowing you to see all your memos together, or connected to the relevant section of data. However, most of the major software packages (Quirkos included) don’t exactly foreground the memo tools, so it is important to remember they are there and use them consistently through the analytical process.

 

Memos in Quirkos are best kept in a separate source which you edit and write your memos in. Keeping your notes like this allows you to code your memos in the same way you would your other data, and use the source properties to include or exclude your memos in reports and outputs as needed. However, it can be a little awkward to flip between the memo and active source, and there is currently no way to attach memos to a particular coding event. This is something we are working on for the next major release, and it should help researchers keep better notes of their process as they go along. More detail on qualitative memos in Quirkos can be found in this blog post.

 

 

There is a one-month free trial of Quirkos, and it is so simple to use that you should be able to get going just by watching one of our short intro videos, or the built-in guide. We are also here to help at any stage of your process, with advice about the best way to record your analytical memos, coding frameworks or anything else. Don’t be shy, and get in touch!

 


References and further reading:


Chapman, Y., Francis, K., 2008. Memoing in qualitative research, Journal of Research in Nursing, 13(1). http://jrn.sagepub.com/content/13/1/68.short?rss=1&ssource=mfc

 

Gibbs, G., 2002, Writing as Analysis, http://onlineqda.hud.ac.uk/Intro_QDA/writing_analysis.php

Saldana, J., 2015, The Coding Manual for Qualitative Researchers, Writing Analytic Memos about Narrative and Visual Data, Sage, London. https://books.google.co.uk/books?id=ZhxiCgAAQBAJ

 

 

Starting a qualitative research thesis, and choosing a CAQDAS package

qualitative thesis software

 

For those about to embark on a qualitative Masters or PhD thesis, we salute you!

 

More and more post-graduate students are using qualitative methods in their research projects, or adopting mixed-method data collection and using a small amount of qualitative data which needs to be combined with quantitative data. So this year, how can students decide the best approach for the analysis of their data, and can CAQDAS or QDA software help their studies?

 

First, as far as possible, don’t choose the software, choose the method. Think about what you are trying to research, and the best way to get deep data to answer your research questions. The type and amount of data you have will be an important factor. Next, how much existing literature and theory is there around your research area? This will affect whether you adopt a grounded theory approach, or will be testing or challenging existing theory.

 

Indeed, you may decide that you don’t need software for your research project. For small projects, especially case studies, you may be more comfortable using printouts of your data, and marking important sections with highlighters and post-it notes while reading. Read Séror (2005) for a comparison of computer vs paper methods. You could also look at the 5 Level QDA, an approach to planning and learning the use of qualitative software so that you develop strategies and tactics that help you make the most of the QDA software.

 

Unfortunately, if you decide you want to use a particular software solution it’s not always as simple as it should be. You will have to eventually make a practical choice based on what software your university has, what support they provide, and what your peers and supervisors use.

 

However, while you are a student, it’s also a good time to experiment and see what works best for you. Not only do all the major qualitative software packages offer a free trial, student licences are hugely discounted against the full versions. This gives you the option to buy a copy for yourself (for a relatively small amount of money).

 

There’s a lot of variety in the different qualitative data analysis software available. The most common one is Nvivo, which your university or department may already have a licence for. This is a very powerful package, but can be intimidating for first-time users. Common alternatives like MAXQDA or Atlas.ti are more user friendly, but also adopt similar spreadsheet-like interfaces. There are also lots of more niche alternatives, for example Transana is unmatched for video analysis, and Dedoose works entirely in the cloud so you can access it from any computer. For a more comprehensive list, check out the Wikipedia list, or the profiles on textanalysis.info

 

Quirkos does a couple of things differently though. First, our student licences don’t expire, and are some of the cheapest around. This means that it doesn’t matter if your PhD takes 3 or 13 years, you will still be able to access your work and data without paying again. And yes, you can keep using your licence into your professional career. Second, it aims to be the easiest software package to use, and puts visualisations of the data first and foremost in the interface.

 

So give Quirkos a try, but don’t forget about all the other alternatives out there: between them all you will find something that works in the way you want it to and makes your research a little less painful!

 

 

Qualitative coding with the head and the heart

qualitative coding head and heart

 

In the analysis of qualitative data, it can be easy to fall into the habit of creating either very descriptive, or very general theoretical codes. It’s often a good idea to take a step back, and examine your coding framework, challenging yourself to look at the data in a fresh way. There are some more suggestions for how to do this in a blog post about turning coding strategies on their head. But while in Delhi recently to deliver some training on using Quirkos, I was struck by a couple of exhibits at the National Museum which in a roundabout way made me think about coding qualitative data, and getting the balance right between analytical and emotional coding frameworks.

 

There were several depictions of Hindu deities trampling a dwarf called Apasmāra, who represented ignorance. I loved this focus of minimising ignorance, but it’s important to note that in Hindu mythology, ignorance should not be killed or completely vanquished, lest knowledge become too easy to obtain without effort.

 

Another sculpture depicted Yogini Vrishanna, a female deity that had taken the bull-head form. It was apparently common for deities to periodically take on an animal head to prevent over-intellectualism, and allow more instinctive, animalistic behaviour!

 

I was fascinated by this balance: venerating study and thought, while at the same time warning against over-thinking. I think this is a message that we should really take to heart when coding qualitative data. It’s very easy to create coding themes that are far too simple and descriptive to give much insight into the data: to treat the analysis as purely a categorisation exercise. When this happens, students often create codes that are basically an index of low-level themes in a text. While this is often a useful first step, it’s important to go beyond it, and create codes (or better yet, a whole phase of coding) which are more interpretive, and require a little more thinking.

 

However, it’s also possible to go too far in the opposite direction and over-think your codes. Either this comes from looking at the data too tightly, focusing on very narrow and niche themes, or from the over-intellectualising that Yogini Vrishanna was trying to avoid above. When the researcher has their head deeply in the theory (and let’s be honest, this is an occupational hazard for those in the humanities and social sciences), there is a tendency to create very complicated high-level themes. Are respondents really talking about ‘social capital’, ‘non-capitalocentric ethics’ or ‘epistemic activism’? Or are these labels which the researcher has imposed on the data?

 

These might be the times we have to put on our imaginary animal head, and try to be more inductive and spontaneous with our coding. But it also requires coding less from the head, and more from the heart. In most qualitative research we are attempting to understand the world through the perspective of our respondents, and most people are emotional beings, acting not just for rational reasons.

 

If our interpretations are too close to the academic, and not the lived experiences of our study communities, we risk missing the true picture. Sometimes we need this theoretical overview to see more complex trends, but it should never stray too far from the data in a single qualitative study. Be true to both your head and your heart in equal measure, and don’t be afraid to go back and look at your data again with a different animal head on!

 

If you need help to organise and visualise all the different stages of coding, try using qualitative analysis software like Quirkos! Download a free trial, and see for yourself...

 

Reaching saturation point in qualitative research

saturation in qualitative research

 

A common question from newcomers to qualitative research is, what’s the right sample size? How many people do I need to have in my project to get a good answer for my research questions? For research based on quantitative data, there is usually a definitive answer: you can decide ahead of time what sample size is needed to gain a significant result for a particular test or method.

 

In qualitative research, there is no neat measure of significance, so getting a good sample size is more difficult. The literature often talks about reaching ‘saturation point’ - a term taken from physical science to represent a moment during the analysis of the data where the same themes are recurring, and no new insights are given by additional sources of data. Saturation is, for example, when no more water can be absorbed by a sponge, but it’s not always the case in research that too much is a bad thing. Saturation in qualitative research is a difficult concept to define (Bowen, 2008), but has come to be associated with the point in a qualitative research project when there is enough data to ensure the research questions can be answered.

 

However, as with all aspects of qualitative research, the depth of the data is often more important than the numbers (Burmeister & Aitken, 2012). A small number of rich interviews or sources, especially as part of an ethnography, can carry the weight of dozens of shorter interviews. For Fusch (2015):

 

“The easiest way to differentiate between rich and thick data is to think of rich as quality and thick as quantity. Thick data is a lot of data; rich data is many-layered, intricate, detailed, nuanced, and more. One can have a lot of thick data that is not rich; conversely, one can have rich data but not a lot of it. The trick, if you will, is to have both.”

 

So the quantity of the data is only one part of the story. The researcher needs to engage with it at an early stage to ensure “all data [has] equal consideration in the analytic coding procedures. Frequency of occurrence of any specific incident should be ignored. Saturation involves eliciting all forms of types of occurrences, valuing variation over quantity” (Morse, 1995). When the amount of variation in the data is levelling off, and new perspectives and explanations are no longer coming from the data, you may be approaching saturation. The other consideration is whether there are any new perspectives on the research question: for example, Brod et al. (2009) recommend constructing a ‘saturation grid’, listing the major topics or research questions against interviews or other sources, and ensuring all bases have been covered.
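A saturation grid is essentially a topics-by-sources matrix. A minimal sketch of how one might be tracked (using pandas; the topics, interview labels and coverage values here are all invented for illustration):

```python
import pandas as pd

# Hypothetical saturation grid: rows are major topics/research questions,
# columns are interviews; 1 marks a topic raised in that interview
grid = pd.DataFrame(
    {"int_1": [1, 1, 0], "int_2": [1, 0, 1], "int_3": [0, 1, 1], "int_4": [1, 1, 1]},
    index=["waiting times", "staff attitude", "pricing"],
)

# Every topic covered by at least one interview?
all_covered = grid.any(axis=1).all()

# How many *new* topics does each successive interview introduce?
# cummax marks a topic as 'seen' from its first appearance onwards
new_topics = grid.cummax(axis=1).sum(axis=0).diff().fillna(grid["int_1"].sum())

print(all_covered)   # all bases covered?
print(new_topics)    # successive zeroes suggest saturation is being approached
```

In this toy grid, interviews 3 and 4 introduce no new topics, which is the pattern Brod et al.’s grid is designed to make visible.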

 

But despite this, is it still possible to put rough numbers on how many sources are required for a qualitative research project? Many papers have attempted to do this, and as could be expected, the results vary greatly. Mason (2010) looked at the average number of respondents in PhD theses using qualitative research. They found an average of 30 sources, but with a low of 1 source, a high of 95, and a standard deviation of 18.5! It is interesting to look at their data tables, as they show succinctly the differences in sample size expected for different methodological approaches, such as case study, ethnography, narrative enquiry, or semi-structured interviews.

 


While 30 in-depth interviews may seem high (especially for what is practical in a PhD study), others work with far fewer: a retrospective examination of a qualitative project by Guest et al. (2006) found that even though they conducted 60 interviews, they had reached saturation after 12, with most of the themes emergent after just 6. On the other hand, students whose supervisors have more of a mixed-method or quantitative background will often struggle to justify the low number of participants suggested for methods of qualitative enquiry.

 


The important thing to note is that it is nearly impossible for a researcher to know when they have reached saturation point unless they are analysing the data as it is collected. This exposes one of the key ties of the saturation concept to grounded theory, and it requires an iterative approach to data collection and analysis. Instead of setting a fixed number of interviews or focus-groups to conduct at the start of the project, the investigator should be continuously going through cycles of collection and analysis until nothing new is being revealed.

 


This can be a difficult notion to work with, especially when ethics committees or institutional review boards, limited time, or limited funds place a practical upper limit on the quantity of data collection. Indeed, Morse et al. (2014) found that in most dissertations they examined, the sample size was chosen for practical reasons, not because a claim of saturation was made.

 


You should also be aware that many take umbrage at the idea that one should use the concept of saturation at all. O’Reilly (2003) notes that since the concept of saturation comes out of grounded theory, it’s not always appropriate to apply to other research projects, and the term has become overused in the literature. It’s also not a good indicator by itself of the quality of qualitative research.

 


For more on these issues, I would recommend any of the articles referenced above, as well as discussion with supervisors, peers and colleagues. There is also more on sampling considerations in qualitative research in our previous blog post.

 

 

Finally, don’t forget that Quirkos can help you take an iterative approach to analysis and data collection, allowing you to quickly analyse your qualitative data as you go through your project, helping you visualise your path to saturation (if you so choose this approach!). Download a free trial for yourself, and take a closer look at the rest of the features the software offers.

 

Tips for managing mixed method and participant data in Quirkos and CAQDAS software

mixed method data

 

Even if you are working with purely qualitative data, like interview transcripts, focus groups, participant diaries, research diaries or ethnographic notes, you will probably also have some categorical data about your respondents. This might include demographic data, your own reflexive notes, or context about the interview and the circumstances of data collection. This discrete or even quantitative data can be very useful for organising your data sources across a qualitative project, and it can also be used to compare groups of respondents.

 


It’s also common to be working with more extensive mixed data in a mixed method research project. This frequently requires triangulating survey data with in-depth interviews for context and deeper understanding. However, much survey data also includes qualitative text data in the form of open-ended questions, comments and long written answers.

 


This blog has looked before at how to bring in survey data from online survey platforms like SurveyMonkey, Qualtrics and LimeSurvey. It’s really easy to do this whatever platform you are using: just export your responses as a CSV file, which Quirkos can read and import directly. You’ll then get the option to choose whether each question should be treated as discrete data, a longer qualitative answer, or even the name/identifier for each source.

 


But even if you haven’t collected your data using an online platform, it is quite easy to format it in a spreadsheet. I would recommend this for many studies: it’s simply good data management to be able to look at all your participant data together. I often have a table of respondents’ data (password protected, of course) which contains a column for names, contact details and whether I have consent forms, as well as age, occupation and other relevant information. During data collection and recruitment, having this information neatly arranged helps me remember who I have contacted about the research project (and when), who has agreed to take part, as well as suggestions from snowball sampling for other people to contact.

 


Finally, a respondent ‘database’ like this can also be used to record my own notes on the respondents and data collection. If there is someone I have tried to contact many times but who seems reluctant to take part, this is important to note. It can remind me when I have agreed to interview people, and keep together my own comments on how well each interview went. I can also record which audio and transcript files contain the interview for each respondent, so the database acts as a ‘master key’ for the anonymised recordings.

 


So once you have your long-form qualitative data, how best to integrate it with the rest of the participant data? Again, I’m going to give examples using Quirkos here, but similar principles will apply to many other CAQDAS/QDA software packages.

 


First, you could import the spreadsheet data as is, and add the transcripts later. To do this, just save your participant database as a CSV file from Excel, Google Docs, LibreOffice or your spreadsheet software of choice. You can bring the file into a blank Quirkos project using ‘Import source from CSV’ on the bottom right of the screen. The wizard on the next page will allow you to choose how you want to treat each column in the spreadsheet, and each row of data will become a new source. When you have brought in the data from the spreadsheet, you can individually add the qualitative data as the text source for each participant, copying and pasting from wherever you have the transcript data.

 


However, it’s also possible to just put the text into a column in the spreadsheet. It might look unmanageable in Excel when a single cell has pages of text data, but it makes for an easy one-step import into Quirkos. When you bring the data into Quirkos, just select the column with the text data as the ‘Question’ and the discrete data columns as the ‘Properties’ (although they should be auto-detected like this).
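As a sketch of what such a spreadsheet might look like before import, here is a small script that writes a hypothetical CSV: each row is one participant (source), the short columns are discrete properties, and one long column holds the full transcript text. The participant names, columns and filename are invented for illustration.

```python
import csv

# Invented participant data: discrete properties plus one long transcript column.
rows = [
    {"Name": "P01", "Age": "34", "Gender": "F",
     "Transcript": "Well, I first heard about the project when..."},
    {"Name": "P02", "Age": "51", "Gender": "M",
     "Transcript": "For me the main issue was always cost..."},
]

# newline="" prevents blank rows on Windows when using the csv module.
with open("participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "Age", "Gender", "Transcript"])
    writer.writeheader()
    writer.writerows(rows)
```

A file laid out like this should import in one step, with the ‘Transcript’ column treated as the qualitative text and the rest as discrete properties.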

 


You can also do direct data entry in Quirkos itself, and there are some features to help make this quick and relatively painless. The Properties and Values editor allows you to create categories and values to define your sources. There are also built-in values for True/False, Yes/No, options from 1–10, and Likert scales from Agree to Disagree. These let you quickly enter common types of data and select them for each source. It’s also possible to export this data later as a CSV file to bring back into spreadsheet software.

 

mixed method data entry in quirkos

 

Once your data has been coded in Quirkos, you can use tools like the query view and the comparison views to quickly see differences between groups of respondents. You can also create simple graphs and outputs of your quantitative and discrete data. Having not just demographic information but also your notes and thoughts together provides vital context for properly interpreting your qualitative and quantitative data.

 

 

A final good reason to keep a good database of your research data is to make sure that it is properly documented for secondary analysis and future use. Should you ever want to work with the data again, or share it with another research team or the wider community, an anonymised data table like this helps ensure the research has the right metadata to be used for different lines of enquiry.

 

 

Get an overview of Quirkos and then try for yourself with our free trial, and see how it can help manage pure qualitative or mixed method research projects.

 

 

 

What actually is Grounded Theory? A brief introduction

grounded theory

 

“It’s where you make it up as you go along!”

 

For a lot of students, ‘grounded theory’ describes a qualitative analytical method where you create a coding framework on the fly, from interesting topics that emerge from the data. However, that’s not really accurate: there is a lot more to it, and a myriad of different approaches.


Basically, grounded theory aims to create a new theory for interpreting the world, either in an area where there isn’t any existing theory, or where you want to challenge what is already out there. Although the approach is often overused, it is a valuable way of approaching qualitative research when you aren’t sure what questions to ask. However, it is also a methodological can of worms, with a number of different approaches and a confusing literature.


One of my favourite quotes on the subject is from Dey (1999) who says that there are “probably as many versions of grounded theory as there are grounded theorists”. And it can be a problem: a quick search of Google Scholar will show literally hundreds of qualitative research articles with the phrase “grounded theory was used” and no more explanation than this. If you are lucky, you’ll get a reference, probably to Strauss and Corbin (1990). And you can find many examples in peer-reviewed literature describing grounded theory as if there is only one approach.

 

Realistically there are several main types of grounded theory:

 

Classical (CGT)
Classical grounded theory is based on the Glaser and Strauss (1967) book “The Discovery of Grounded Theory”, in which it is envisaged as a theory-generation methodology, rather than just an analytical approach. The idea is that you examine data and discover in it new theory – new ways of explaining the world. Here everything is data, and you should include fieldwork notes as well as other literature in your process. However, a delay is recommended so that the literature is not examined first (as it would be in a conventional literature review), which could create bias too early; instead, existing theory is engaged with later, as something to be challenged.


Here the common coding types are substantive and theoretical – an iterative one-two punch which gets you from data to theory. Coding is considered to be very inductive, with less initial focus drawn from the literature.

 

Modified (Straussian)
The way most people think about grounded theory probably links closest to the Strauss and Corbin (1990) interpretation, which is more systematic and concerned with coding and structuring qualitative data. It traditionally proposes a three- (or sometimes two-) stage iterative coding approach: first creating open codes (inductive), then grouping and relating them with axial coding, and finally a process of selective coding. In this approach, you may consider a literature review to be a restrictive process, binding you to prejudices from existing theory. But depending on the interpretation, modified grounded theory might be more action oriented, and allow more theory to come from the researcher as well as the data. Speaking of which…

 

Constructivist
The seminal work on constructivism here is from Charmaz (2000 or 2006), and it’s about the way researchers create their own interpretations of theory from the data. It aims to challenge the idea that theory can be ‘discovered’ from the data – as if it was just lying there, neutral and waiting to be unearthed. Instead it tries to recognise that theory will always be biased by the way researchers and participants create their own understanding of society and reality. This engagement between participants and researchers is often cited as a key part of the constructivist approach.
Coding stages would typically be open, focused and then theoretical. Whether you see this as being substantively different from the ‘open – axial – selective’ modified grounded theory strategy is up to you. You’ll see many different interpretations and implementations of all these coding approaches, so focus more on choosing the philosophy that lies behind them.

 

Feminist
A lot of the literature here comes from the nursing field, including Kushner and Morrow (2003), Wuest (1995), and Keddy (2006). There are clear connections here with constructivist and post-modern approaches: especially the rejection of positivist interpretations (even in grounded theory!), recognition of multiple possible interpretations of reality, and the examination of diversity, privilege and power relations.

 

Post-modern
Again, this is a really difficult segment to try and label, but for starters think Foucault, power and discourse. Mapping of the social world can be important here, and some writers argue that the practice of trying to generate theory at all is difficult to reconcile with a postmodern interpretation. This is a reaction against the positivist approach some see as inherent in classical grounded theory. As for where this leaves the poor researcher practically, there is at least one main suggested approach, from Clarke (2005), who focuses on mapping the social world, including actors, and noting what has been left unsaid.

 

There are also what seem to me to be hybrids of these approaches with a particular methodology, such as discursive grounded theory, where the focus is more on the language used in the data (McCreaddie and Payne 2010). It basically seeks to integrate discourse analysis to look at how participants use language to describe themselves and their worlds. However, I would argue that many different ways of analysing data, like discourse analysis, can be combined with grounded theory approaches, so I am not sure they are a category in their own right.

 

 

To do grounded theory justice, you really need to do more than read this crude blog post! I’d recommend the chapter on grounded theory in Savin-Baden and Howell Major’s (2013) textbook on qualitative research. There’s also the wonderfully titled “A Novice Researcher’s First Walk Through the Maze of Grounded Theory” by Evans (2013). Grounding Grounded Theory (Dey 1999) is also a good read – much more critical and humorous than most. Grounded theory is such a pervasive trope in qualitative research, and is indeed seen by some to define it, that it does require some understanding and engagement.

 

But it’s also worth noting that for practical purposes, it’s not necessary to get involved in all the infighting and debate in the grounded theory literature. For most researchers the best advice is to read a little of each, and decide which approach is going to work best for you based on your research questions and personal preferences. Even better, if you can find another piece of research that describes a grounded theory approach you like, you can just follow their lead, citing them or their preferred references. Or, as Dey (1999) notes, you can create your own approach to grounded theory! Many argue that grounded theory encourages such interpretation and pluralism – just be clear to yourself and your readers about what you have chosen to do and why!

 

Merging and splitting themes in qualitative analysis

split and merge qual codes

To merge or to split qualitative codes, that is the question…

 

One of the most common questions when designing a qualitative coding structure is ‘How many codes should I have?’. It’s easy to start a project thinking that just a few themes will cover the research questions, but sooner or later qualitative analysis tends towards a ballooning thematic structure, and before you’ve even started you might have a framework with dozens of codes. And while going through and analysing the data, you might end up with another couple of dozen more. So it’s quite common for researchers to end up with more than a hundred codes (or sometimes hundreds)!

 

This can be alarming for students doing qualitative analysis for the first time, but I would argue it’s fine in most situations. While a large number of themes can be confusing and disorienting if you are using paper and highlighters, in CAQDAS software they can be quite manageable. However, this itself can be a problem, since qualitative software makes it almost too easy to create an unwieldy number of codes. While some restraint is always advisable, when I am running workshops I usually advise new coders not to worry, since with the software it is easier to merge codes later than to split them apart.

 

I’m going to use the example of Quirkos here, but the same principle applies to any qualitative data analysis package. When you are going through and analysing your qualitative text sources, reading and coding them is the most time-consuming part. If you create a new code for a theme halfway through coding your data because you can see it is becoming important, you will have to go back to the beginning and read through the already-coded sources to make sure you have complete coverage. That’s why it’s normally easier to think through codes before starting a code/read-through.

 

Of course there is some methodological variance here: if you are doing certain types of grounded theory this may not apply, as you will want to create themes on the fly. It’s also worth noting that good qualitative coding is an iterative process, and you should expect to go through the data several times anyway. Usually each time you do this you will look at the code structure in a different way – perhaps creating a higher-level, theory-driven coding framework on each pass.

 

However, there is another way that QDA software helps you manage your qualitative themes: it makes it simple to merge smaller codes together under a more general heading. In Quirkos, just right click on the code bubble you want to keep, and you will see the dialogue below:

 

merging qualitative codes in quirkos


Then select, from the drop-down list of other themes in your project, which topic you want to merge into the Quirk you selected first. That’s it! All the coded text in the second bubble will be added to the first one, and it will keep the name of that code, appended with “(merged)” so you can identify it.

 

Since it is so easy to merge topics in qualitative software, I generally suggest that people aren’t afraid to create a large number of very specific topics, knowing they can merge them together later. For example, if you are creating a code for when people talk about eating out at a restaurant, why not start with separate codes for Fast food, Mexican, Chinese, Haute cuisine and so on – since you can always merge them later under a generic ‘Restaurant’ theme if you decide you don’t need that much detail.
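Conceptually, a merge just pools the quotes from several specific codes under one broader label. This toy Python sketch (the codes and quotes are invented, and this has nothing to do with Quirkos internals) illustrates why merging loses no coded text, only granularity:

```python
# Invented coding data: each specific code maps to its list of coded quotes.
coding = {
    "Fast food":     ["we just grabbed a burger on the way home"],
    "Mexican":       ["we have tacos every Friday"],
    "Haute cuisine": ["a tasting menu for her birthday"],
}

# Merging pools every quote under one broader theme; the quotes themselves
# are untouched, only the specific labels disappear.
merged = {"Restaurant (merged)": [q for quotes in coding.values() for q in quotes]}

print(len(merged["Restaurant (merged)"]))  # 3 - all quotes survive the merge
```

The reverse operation, splitting, is harder precisely because the information about which quote belonged to which specific code is gone, and has to be re-created by hand.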

 

It is also possible to retroactively split broad codes into smaller categories, but this is a much more involved process. To do this in Quirkos, I would start by taking the code you want to expand (say Restaurant) and making sure it is a top-level code – in other words, not a subcategory of another code. Then, create the codes you want to break out (for example Thai, Vegetarian, Organic) and make them subcategories of the main node. Then, double click on the top Quirk, and you will get a list of all the text coded to the top node (Restaurant). From this view in Quirkos, you can drag and drop each quote into the relevant subcategory (eg Organic, Thai):


splitting qualitative codes in quirkos


Once you have gone through and recoded all the quotes into the new codes, you can either delete the quotes from the top-level code (Restaurant) one by one (by right clicking on the highlight stripe), or remove all the quotes from that node by deleting the top node entirely. If you still want a Restaurant Quirk at the top to contain the subcategories, just recreate it and add the subcategories to it. That way you will have a blank ‘Restaurant’ theme to keep the subcategories (Thai, Organic) together.

 

So to summarise, don’t be afraid of having too many codes in CAQDAS software – use the flexibility it gives you to experiment. While you can always have too much of a good thing, the software will help you see all the coding options at once, so you can decide the best place to categorise each quote. With the ability to merge, and even split apart codes with a little effort, it’s always possible to adjust your coding framework later – in fact you should anticipate the need to do this as you refine your interpretations. You can also save your project at one stage of the coding, and go back to that point if you need to revert to an earlier state to try a different approach. For more information about large or small coding strategies, this blog post goes into more depth.


If you want to see how this works in Quirkos, just download the free trial and try for yourself. Quirkos makes operations like merge and split really easy, and the software is designed to be intuitive, visual and colourful. So give it a try, and always contact us if you have any questions or suggestions on how we can make common operations like this quicker and simpler!

 

 

In vivo coding and revealing life from the text

Ged Carrol https://www.flickr.com/photos/renaissancechambara/21768441485


Following on from the last blog post on creating weird and wonderful categories to code your qualitative data, I want to talk about an often overlooked way of creating coding topics – using direct quotes from participants to name codes or topics. This is sometimes called ‘in vivo’ coding, from the Latin for ‘in life’, and is not to be confused with the ubiquitous qualitative analysis software NVivo, which can be used for any type of coding, not just in vivo!


In an older article I talked about having a category for ‘key quotes’ – those beautiful moments when a respondent articulates something perfectly, and you know that quote is going to appear in an article, or even be the article title. With in vivo coding, however, a researcher will create a coding category based on a key phrase or word used by a participant. For example, someone might say ‘It felt like I was hit by a bus’ to describe their shock at an event, and rather than creating a topic/node/category/Quirk for ‘shock’, the researcher will name it ‘hit by a bus’. This is especially useful when metaphors like this are commonly used, or someone uses an especially vivid turn of phrase.


In vivo coding doesn’t just apply to metaphor or emotions, and can keep researchers close to the language that respondents themselves are using. For example when talking about how their bedroom looks, someone might talk about ‘mess’, ‘chaos’, or ‘disorganised’ and their specific choice of word may be revealing about their personality and embarrassment. It can also mitigate the tendency for a researcher to impose their own discourse and meaning onto the text.


This method is discussed in more depth in Johnny Saldaña’s book, The Coding Manual for Qualitative Researchers, which also points out how a read-through of the text to create in vivo codes can be a useful process to create a summary of each source.


Ryan and Bernard (2003) use a different terminology: indigenous categories or typologies, after Patton (1990). However, here the meaning is a little different – they are looking for unusual or unfamiliar terms which respondents use in their own subculture. A good example is slang unique to a particular group, such as drug users, surfers, or the shifting vernacular of teenagers. Again, conceptualising the lives of participants in their own words can create a more accurate interpretation, especially later down the line when you are working more exclusively with the codes.


Obviously, this method is really a type of grounded theory, letting codes and theory emerge from the data. In a way, you could consider that if in vivo coding is ‘from life’ and grows from the data, then framework coding to an existing structure is more akin to ‘in vitro’ (‘in glass’), where codes are based on a more rigid interpretation of theory – much like the controlled laboratory conditions of in vitro research, producing more consistent, but less creative, results.


However, there are problems in trying to interpret the data in this way, most obviously: how ubiquitous will an in vivo code from one source be across everyone’s transcripts? If someone in one source describes a shocking event as feeling like being ‘hit by a bus’ and someone in another says their ‘world dropped out from under me’, would we code the same text together? Both are clearly about ‘shock’ and would probably belong in the same theme, but does the different language require a slightly different interpretation? Wouldn’t you lose some of the nuance of the in vivo coding process if similar themes like these were lumped together?


The answer to all of these questions is probably ‘yes’. However, they are not insurmountable. In fact, Johnny Saldaña suggests that an in vivo coding process works best as a first reading of the data, creating not just a summary if read in order, but a framework from each source which should later be combined with a ‘higher’ second level of coding across all the data. So after completing in vivo coding, the researcher can go back and create grouped coding categories based around common elements (like shock) and/or conceptual, theory-level codes (like long-term psychological effects) which resonate across all the sources.


This sounds like it would be a very time-consuming process, but in fact multi-level coding (which I often advocate) can be very efficient, especially with in vivo coding as the first pass. It may be that you just highlight some of these key words, on paper or in Word, or create a series of columns in Excel adjacent to each sentence or paragraph of source material. Since the researcher doesn’t have to ponder the best word or phrase to describe each category at this stage, creating the coding framework is quick. It’s also a great process for participatory analysis, since respondents can quickly engage with selecting juicy morsels of text.


Don’t forget, you don’t have to use an exclusively in vivo coding framework: just remember that it’s an option, and use it for key illuminating quotes alongside your other codes. Again, there is no one-size-fits-all approach to qualitative analysis, but knowing the range of methods allows you to choose the best way forward for each research question or project.


CAQDAS/QDA software makes it easy to keep all the different stages of your coding process together, and also to create new topics by splitting and merging existing codes. While the procedure will vary a little across the different qualitative analysis packages, the basics are very similar, so I’ll give a quick example of how you might do this in Quirkos.


Not a lot of people know this, but you can create a new Quirk/topic in Quirkos by dropping a section of text directly onto the ‘create new bubble’ button, so this is a good way to create a lot of themes on the fly (as with in vivo coding). Just name these according to the in vivo phrase, and make sure that you highlight the whole section of relevant text for coding, so that you can easily see the context and what the participant is talking about.


Once you have done a full (or partial) reading and coding of your qualitative data, you can work with these codes in several ways. Perhaps the easiest is to create an umbrella (or parent) code (like shock) and make the relevant in vivo codes subcategories of it, just by dragging and dropping them onto the top node. Now, when you double click on the main node, you will see quotes from all the in vivo subcategories in one place.
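As a rough illustration of this grouping step (the codes, quotes and helper function here are invented for the sketch, not Quirkos internals), viewing a parent theme amounts to collecting the quotes from all of its in vivo subcategories, while each subcategory keeps its participant-voiced name:

```python
# Invented in vivo codes, each named with the participant's own phrase.
in_vivo = {
    "hit by a bus":       ["It felt like I was hit by a bus"],
    "world dropped out":  ["the world dropped out from under me"],
}

# A parent (umbrella) theme simply groups in vivo codes under a broader concept.
parents = {"shock": ["hit by a bus", "world dropped out"]}

def quotes_for(parent):
    """Gather the quotes from every in vivo subcategory of a parent theme."""
    return [q for code in parents[parent] for q in in_vivo[code]]

print(quotes_for("shock"))
```

Note that nothing is merged here: the in vivo labels survive, and the parent is just a view across them, which is why this grouping approach preserves more nuance than a hard merge.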

 

qualitative research software - quirkos

 

It’s also possible to use the Levels feature in Quirkos to group your codes: this is especially useful when you might want to put an in vivo code into more than one higher level group. For example, the ‘hit by a bus’ code might belong in ‘shock’ but also a separate category called ‘metaphors’. You can create levels from the Quirk Properties dialogue of any Quirk, assign codes to one or more of these levels, and explore them using the query view. See this blog post for more on how to use levels in Quirkos.


It’s also possible to save a snapshot of your project at any point, and then actually merge codes together to keep them all under the same Quirk. You will lose most of the original in vivo codes this way (which is why the other options are usually better), but if you find yourself dealing with too many codes, or want to create a neat report based on a few key concepts, this can be a good way to go. Just right click on the Quirk you want to keep, and select ‘Merge Quirk with...’ to choose another topic to be absorbed into it. Don’t forget all actions in Quirkos have Undo and Redo options!


We don’t have an example dataset coded using in vivo quotes, but if you look at some of the sources from our Scottish Independence research project, you will see some great comments about politics and politicians that leap off the page and would work well for in vivo coding. So why not give in vivo coding a whirl with the free trial of Quirkos: affordable, flexible qualitative software that makes coding with all these different approaches a breeze!

 

 

Turning qualitative coding on its head

CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=248747


For the first time in ages I attended a workshop on qualitative methods, run by the wonderful Johnny Saldaña. Developing software has become a full time (and then some) occupation for me, which means I have little scope for my own professional development as a qualitative researcher. This session was not only a welcome change, but also an eye-opening critique to the way that many in the room (myself included) approach coding qualitative data.

 

Professor Saldaña has written an excellent Coding Manual for Qualitative Researchers, and the workshop really brought to life some of the lessons and techniques in the book. Fundamental to all the approaches was a direct challenge to researchers doing qualitative coding: code different.

 

Like many researchers, I am guilty of treating coding as a reductive, mechanical exercise. My codes tend to be very basic and descriptive – what is often called index coding – often just a summary word for what the sentence or section of text is literally about. From this, I will later take a more ‘grandstand’ view of the text and the codes themselves, looking at connections between themes to create categories that are closer to theory and insight.

 

However, Professor Saldaña presented (by my count) at least 12 different coding frameworks and strategies that were completely new to me. While I am not going to go into them all here (that’s what the textbook, courses and the companion website are for!), it was not one particular strategy that stuck with me, but the diversity of approaches.

 

It’s easy when you start out with qualitative data analysis to stick to a simple strategy – after all, it can be a time-consuming and daunting conceptual process. And when you have worked with a particular approach for many years (and are surrounded by colleagues who have a similar outlook) it is difficult to challenge yourself. But as I have said before, to prevent coding being merely a reductive and descriptive act, it needs to be continuous and iterative. To truly be analysis – interrogating not just the data, but the researcher’s conceptualisation of it – coding must challenge and encourage different readings.

 

For example, Professor Saldaña actually has a background in performance and theatre, and brings some common approaches from that sphere to the coding process: exactly the kind of cross-disciplinary inspiration I love! When an actor or actress is approaching a scene or character, they may engage with the script (which is much like a qualitative transcript) looking at the character's objectives, conflicts, tactics, attitudes, emotions and subtexts. The question is: what is the character trying to do or communicate, and how? This sort of actor-centred approach works really well in qualitative analysis, in which people, narratives and stories are often central to the research question.

 

So if you have an interview with someone, for example on their experience with the adoption process, imagine you are a writer dissecting the motivations of a character in a novel. What are they trying to do? Justify how they would be a good parent (objectives)? Ok, so how are they doing this (tactics)? And what does this reveal about their attitudes and emotions? Is there a subtext here – are they hurt because of a previous rejection?

 

Other techniques stressed the importance of creating codes based around emotions, participants’ values, or even actions: for example, can you make all your codes gerunds (words that end in -ing)? While there was a distinct message that researchers can mix and match these different coding categories, it felt to me a really good challenge to try to view the whole data set from one particular viewpoint (for example conflicts) and then step to one side and look again through a different lens.

 

It’s a little like trying to understand a piece of contemporary sculpture: you need to see it up close, far away, and then from different angles to appreciate the different forms and meaning. Looking at qualitative data can be similar – sometimes the whole picture looks so abstract or baffling, that you have to dissect it in different ways. But often the simplest methods of analysis are not going to provide real insight. Analysing a Henry Moore sculpture by the simplest categories (what is the material, size, setting) may not give much more understanding. Cutting up a work into sections like head, torso or leg does little to explore the overall intention or meaning. And certain data or research questions suit particular analytical approaches. If a sculpture is purely abstract, it is not useful to try and look for aspects of human form - even if the eye is constantly looking for such associations.

 

Here, context is everything. Can you get a sense of what the artist wanted to say? Was it an emotion, a political statement, a subtle treatise on conventional beauty? And much like impressionist painting, sometimes a very close reading stops the viewer from being able to see the brush strokes from the trees.

 

Another talk I went to, on how researchers use qualitative analysis software, noted that some users assumed the software and the coding process were either a replacement for, or a better activity than, a close reading of the text. While I don’t think that coding qualitative data can ever replace a detailed reading or familiarity with the source text, coding exercises can help you read in different ways, and hence allow new interpretations to come to light. Use them to read your data sideways, backwards, and through someone else’s eyes.

 

But software can help manage and make sense of these different readings. If you have different coding categories from different interpretations, you can store these together, and use different bits from each interpretation. It can also make it easier to experiment, and to look back at different stages of the process at any time. In Quirkos you can use the Levels feature to group different categories of coding together, and look at any one (or several) of those lenses at a time.

 

Whatever approach you take to coding, try to really challenge yourself, so that you are forced to categorise and thus interpret the data in different ways. And don't be surprised if the first approach isn't the one that reveals the most insight!

 

There is a lot more on our blog about coding, for example populating a coding framework and coding your codes. There will also be more articles on coding qualitative data to come, so make sure to follow us on Twitter, and if you are looking for simple, unobtrusive software for qualitative analysis check out Quirkos!

 

7 things we learned from ICQI 2016

ICQI conference - image from Ariescwliang

 

I was lucky enough to attend the ICQI 2016 conference last week in Champaign at the University of Illinois. We managed to speak to a lot of people about using Quirkos, but there were hundreds of other talks, and here are some pointers from just a few of them!

 

 

1. Qualitative research is like being at high school
Johnny Saldaña’s keynote described (with cutting accuracy) the research cliques that people tend to stick to. It's important for us to try and think outside these methodological or topic boxes, and learn from other people doing things in different ways. With so many varied sessions and hundreds of talks, conferences like ICQI 2016 are great places to do this.

 

We were also treated to clips from high school movies, and our own Qualitative High School song! The Digital Tools thread got their own theme song: a list of all the different qualitative analysis software packages sung to the tune of ‘ABC’ - the nursery rhyme, not the Jackson 5 hit!

 

 

2. There is a definite theoretical trend
The conference featured lots of talks on Butler and Foucault, but not one explicitly on Derrida! A philosophical bias perhaps? I’m always interested in the different philosophies drawn on in North American, British and Continental debates…

 

 

3. Qualitative research balances a divide between chaos and order
Maggie MacLure gave an intriguing keynote about how qualitative research needs to balance the intoxicating chaos and experimentation of Dionysus with the order and clarity of Apollo (channelling Deleuze). She argued that we must resist the tendency of method and neo-liberal positioned research to push for conformity, and go further in advocating for real ontological change. She also said that researchers should do more to challenge the primacy of language: surely why we need a little Derrida here and there?!

 

 

4. We should embrace doubt and uncertainty
This was definitely something that Maggie MacLure's keynote touched on, but a session chaired by Charles Vander talked about uncertainty in the coding process, and how this can be difficult (but ultimately beneficial). Referencing Locke, Golden-Biddle and Feldman (2008), Charles talked about the need to Embrace not knowing, nurture hurdles and disrupt order (while also engaging with the world and connecting with struggle). It's important to show students that allowing doubt and uncertainty doesn't lead to fear – a difficult thing when there are set deadlines and things aren’t going the right way, and this is true even for most academics! We need to teach that qualitative analysis is not a fixed linear process: experimentation and failure are a key part of it. Kathy Charmaz echoed this while talking about grounded theory, and noted that ‘coding should be magical, not just mechanical’.

 


5. We should challenge ourselves to think about codes and coding in completely different ways

Johnny Saldaña's coding workshop (which follows on from his excellent textbook) gave examples of the incredible variety of different coding categories one can create. Rather than just creating merely descriptive index coding, try and get to the actions and motivations in the text. Create code lists which are based around actions, emotions, conflicts or even dramaturgical concepts: in which you are exploring the motivations and tactics of those in your research data. More to follow on this...

 

 

6. We still have a lot to learn about how researchers use qualitative software
Two great talks, from Eli Lieber and from NYU/CUNY, took the wonderful meta-step of doing qualitative (and mixed method) analysis on qualitative researchers, to see how they used qualitative software and what they wanted to do with it.

Katherine Gregory and Sarah DeMott looked at responses from hundreds of users of QDA software, and found a strong preference for getting to outputs as soon as possible, and saw people using qualitative data in very quantitative ways. Eli Lieber from Dedoose looked at what he called ‘Research and Evaluation Data Analysis Software’ and saw from 355 QDA users that there was a risk of playing with data rather than really learning from it, and that many were using coding in software as a replacement for deep reading of the data.


There was also a lot of talk about the digital humanities movement, and there was some great insight from Harriett Green on how this shift looks for librarians and curators of data, and how researchers are wanting to connect and explore diverse digital archives.

 


7. Qualitative research still feels like a fringe activity
The ‘march’ of neo-liberalism was a pervasive conference theme, and there were a lot of discussions around the marginalised place of qualitative research in academia. We heard stories of qualitative modules being removed or made online only, problems with getting papers submitted to mainstream journals, and a lack of engagement from evidence users and policy makers. Conferences like this are essential to reinforce connections between researchers working all over the world, but there is clearly still a need for a lot of outreach to advance the position of qualitative research in the field.

 

 

There are dozens more fascinating talks I could draw from, but these are just a few highlights from my own badly scribbled notes. It was wonderful to meet so many dedicated researchers, working on so many conceptual and social issues, and it always challenges me to think how Quirkos can better meet the needs of such a disparate group of users. So don’t forget to download the trial, and give us more feedback!

 

You should also connect with the Digital Tools for Qualitative Research group, who organised one of the conference Special Interest Groups, but has many more activities and learning events across the year. Hope to see many more of you next year!

 

Top 10 qualitative research blog posts

top 10 qualitative blog articles

We've now got more than 70 posts on the official Quirkos blog, on lots of different aspects of qualitative research and using Quirkos in different fields. But it's now getting a bit difficult to navigate, so I wanted to do a quick recap with the 10 most popular articles, based on the number of hits over the last two years.

 

Tools for critical appraisal of qualitative research

A review of tools that can be used to assess the quality of qualitative research.

 

Transcription for qualitative research

The first in a series of posts about transcribing qualitative research, breaking open the process and costs.

 

10 tips for recording good qualitative audio

Some tips for recording interviews and focus-groups for good quality transcription

 

10 tips for semi-structured qualitative interviewing

Some advice to help researchers conduct good interviews, and what to plan for in advance

 

Sampling issues in qualitative research

Issues to consider when sampling, and later recruiting participants in qualitative studies

 

Developing an interview guide for semi-structured interviews

The importance of having a guide to facilitate in-depth qualitative interviews

 

Transcribing your own qualitative data

Last on the transcription trifecta, tips for making transcription a bit easier if you have to do it yourself

 

Participant diaries for qualitative research

Some different approaches to self-report and experience sampling in qualitative research

 

Recruitment for qualitative research

Factors to consider when trying to get participants for qualitative research

 

Engaging qualitative research with a quantitative audience

The importance of packaging and presenting qualitative research in ways that can be understood by quantitative-focused policy makers and journal editors

 

There are a lot more themes to explore on the blog, including posts on how to use CAQDAS software, and doing your qualitative analysis in Quirkos, the most colourful and intuitive way to explore your qualitative research.

 

 

Participant diaries for qualitative research

participant diaries

 

I’ve written a little about this before, but I really love participant diaries!


In qualitative research, you are often trying to understand the lives, experiences and motivations of other people. Through methods like interviews and focus groups, you can get a one-off insight into people’s own descriptions of themselves. If you want to measure change over a period, you need to schedule a series of meetings, each of which will be limited by what a participant will recall and share.


However, using diary methodologies, you can get a longer and much more regular insight into lived experiences, plus you also change the researcher-participant power dynamic. Interviews and focus groups can sometimes be a bit of an interrogation, with the researcher asking questions, and participants given the role of answering. With diaries, participants can have more autonomy to share what they want, as well as where and when (Meth 2003).


These techniques are also called self-report or ‘Contemporaneous assessment methods’, but there are actually a lot of different ways you can collect diary entries. There are some great reviews of different diary based methods, (eg Bolger et al. 2003), but let’s look at some of the different approaches.


The most obvious is to give people a little journal or exercise book to write in, and ask them to record on a regular basis any aspects of their day that are relevant to your research topic. If they are expected to make notes on the go, make it a slim pocket-sized one. If they are going to write a more traditional diary at the end of each day, give them a nicer exercise book to work in. I’ve actually found that people end up getting quite attached to their diaries, and will often ask for them back. So make sure you have some way to copy or transcribe them, and consider offering to return them once you have finished with them, or give back a copy if you wish to keep hold of the original.

 

You can also do voice diaries – something I tried in Botswana. We were initially worried that literacy levels in rural areas would mean that participants would be either unable or reluctant to create written entries. So I offered everyone a small voice recorder, where they could record spoken notes that we would transcribe at the end of the session. While you could give a group of people an inexpensive (~£20) Dictaphone, I actually bought a bunch of cheap no-brand MP3 players which only cost ~£5 each, had a built-in voice recorder and headphones, and could run on a single AAA battery (which was easy to find in local shops, since few respondents had electricity for recharging). The audio quality was not great, but perfectly adequate. People really liked these because they could also play music (and had a radio), and they were cheap enough to be lost or left as thank-you gifts at the end of the research.

 

There is also a large literature on ‘experience sampling’ – where participants are prompted at regular or random intervals to record what they are doing or how they are feeling at that time. Initially this work was done using pagers (Larson 1989), when participants would be ‘beeped’ at random times during the day and asked to write down what they were doing at the time. More recent studies have used smartphones to both prompt and directly collect responses (Chen et al. 2014).

 

There is also now a lot of online journal research, both researcher-solicited as part of a qualitative research project (Kaun 2015), and collected from people’s blogs and social media posts. This is especially popular in market research looking at consumer behaviour (Patterson 2005) and in project evaluation (Cohen et al. 2006).

 

Diary methods can create detailed, and reliable data. One study found that asking participants to record diary entries three times a day to measure stigmatised behaviour like sexual activities found an 89.7% adherence rate (Hensel et al. 2012), far higher than would be expected from traditional survey methods. There is a lot of diary based research in the sexual and mental health literature: for more discussion on the discrepancies and reliability between diary and recall methods, there is a good overview in Coxon (1999) but many studies like Garry et al. (2002) found that diary based methods generated more accurate responses. Note that these kinds of studies tend to be mixed-method, collecting both discrete quantitative data and open ended qualitative comments.

 

Whatever method you choose, it’s important to set up some clear guidelines to follow. Personally I think either a telephone conversation or a face-to-face meeting is a good idea, to give participants a chance to ask questions. If you’ve not done research diaries before, it’s a good idea to pilot them with one or two people to make sure you are briefing people clearly, and that they can write useful entries for you. The guidelines (explained and taped to the inside of the diary) should make it clear:

  • What you are interested in hearing about
  • What it will be used for
  • How often you expect people to write
  • How much they should write
  • How to get in touch with you
  • How long they should be writing entries, and how to return the diary.

 

Even if you expressly specify that your participants should write in their journals every day for three weeks, you should be prepared for the fact that many won’t manage this. You’ll have some that start well but lapse, others that forget until the end and do it all in the last day before they see you, and everything in-between. You need to assume this will happen with some or all of your respondents, and consider how this is going to affect how you interpret the data and draw conclusions. It shouldn’t necessarily mean that the data is useless, just that you need to be aware of the limitations when analysing it. There will also be a huge variety in how much people write, despite your guidelines. Some will love the experience, sharing volumes of long entries; others might just write a few sentences, which might still be revealing.

 

For these reasons, diary-like methodologies are usually used in addition to other methods, such as semi-structured interviews (Meth 2003) or detailed surveys. Diaries can be used to triangulate claims made by respondents in different data sources (Schroder 2003) or to provide more richness and detail to the individual narrative. From the researcher’s point of view, the difference between having data where a respondent says they have been bullied, and having an account of a specific incident recorded that day, is significant, and gives a great amount of depth and illumination into the underlying issues.

 

Qualitative software - Quirkos

 

However, you also need to carefully consider confidentiality and other ethical issues. Often participants will share a lot of personal information in diaries, and you must agree how you will deal with this and anonymise it for your research. While many respondents find keeping a qualitative diary a positive and reflexive process, it can be stressful to ask people in difficult situations to reflect on uncomfortable issues. There is also the risk that the diary could be lost, or read by other people mentioned in it, creating a potential disclosure risk to participants. Depending on what you are asking about, it might be wise to ask participants themselves to create anonymised entries, using pseudonyms for names and places as they write.

 

Last, but not least, what about your own diary? Many researchers will keep a diary, journal or ‘field notes’ during the research process (Altricher and Holly 2004), which can help provide context and reflexivity as well as a good way of recording thoughts on ideas and issues that arise during the data collection process. This is also a valuable source of qualitative data itself, and it’s often useful to include your journal in the analysis process – if not coded, then at least to remind you of your own reflections and experiences during the research journey.

 

So how can you analyse the text of your participant diaries? In Quirkos of course! Quirkos takes all the basics you need to do qualitative analysis, and puts it in a simple, and easy to use package. Try for yourself with a free trial, or find out more about the features and benefits.

 

Sharing qualitative research data from Quirkos

exporting and sharing qualitative data

Once you’ve coded, explored and analysed your qualitative data, it’s time to share it with the world. For students, the first step will be supervisors, for researchers it might be peers or the wider research community, and for market research firms, it will be their clients. Regardless of who the end user of your research is, Quirkos offers a lot of different ways to get your hard earned coding out into the real world.

 

Share your project file
The best, and easiest way to share your coded data is to send your project file to someone. If they have a copy of Quirkos (even the trial) they will be able to explore the project in the same way you can, and you can work on it collaboratively. Files are compatible across Windows, Mac and Linux, and are small enough they can be e-mailed, put on a USB stick or Dropbox as needed.

 

Word export
One of people’s favourite features is the Word export, which creates a standard Word file of your data, with comments and coloured highlights showing your complete coding. This means that pretty much anyone can see your coding, since the file will open in Microsoft Office, LibreOffice/OpenOffice, Google Docs, Pages (on Mac) and many others. It’s also a great way to print out your project if you prefer to read through it on paper, while still being able to see all your coding. If you print the ‘Full Markup’ view, you will still be able to see the name (and author) of the code on a black and white printer!

qualitative word export from quirkos


There are two options available in the ‘Project’ button – either ‘Export All Sources as Word Document’ which creates one long file, or ‘Export Each Source’ which creates a separate file for each source in the project in a folder you specify.

 

Reports
This is the most conventional output in Quirkos: a customisable document which gives a summary of the project, and an ordered list of coded text segments. It also includes graphical views of your coding framework, including the clustered views which show the connections between themes. When the report is generated in Quirkos, you will see a two-column preview, with a view of how the report will look on the left, and all the options for what you want to include in the report on the right.


You can print this directly, save it as a PDF document, or even save as a webpage. This last option creates a report folder that anyone can open, explore and customise in their browser, in the same way as you are able to in the Quirkos report view. This also creates a folder which contains all the images in the report (such as the canvas and overlap views) that you can then include directly in presentations or articles.

quirkos qualitative data report


There are many options available here, including the ability to list all quotes by source (ie everything one person said) or by theme (ie everything everyone said on one topic). You can change how these quotes are formatted (by making the text or highlight into the colour of the Quirk) and the level of detail, such as whether to include the source name, properties and percentage of coding.

 

Sub-set reports (query view)
By default, the report button will generate output of the whole project. But if you want to just get responses from a sub-set of your data, you can generate reports containing only the results of filters from the query view. So you could generate a report that only shows the responses from Men or Women, or by one of the authors in the project.

 

CSV export
Quirkos also gives you the option to export your project as CSV files – a common spreadsheet format which you can open in Excel, SPSS or equivalents. This allows you to do more quantitative analysis in statistical software, generate graphs of your coding, and conduct more detailed sub-analysis. The CSV export creates a series of files which represent the different tables in the project database, with v_highlight.csv containing your coded quotes. Other files contain the questions and answers (in a structured project), a list of all your codes, levels, and source properties (also called metadata).
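As a quick sketch of what the CSV export makes possible, the snippet below tallies coded quotes per theme from a v_highlight.csv-style file using only Python's standard library. The column names ('code', 'source', 'quote') are illustrative assumptions, not the real Quirkos schema, so check the headers of your own exported file first.

```python
import csv
import io
from collections import Counter

# A toy stand-in for a v_highlight.csv export; in practice you would
# use open("v_highlight.csv") instead of io.StringIO. The column names
# here are hypothetical.
sample = """code,source,quote
Fish,Interview 1,"I caught a trout last week"
Fish,Interview 2,"We mostly eat salmon"
Ships,Interview 1,"The ferry was late again"
"""

# Count how many coded quotes belong to each theme.
counts = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    counts[row["code"]] += 1

print(counts.most_common())  # [('Fish', 2), ('Ships', 1)]
```

From here the same counts could be fed into a plotting library or a statistics package for the kind of sub-analysis described above.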

 

Database editing
For true power users, there is also the option to perform full SQL operations on your project file. Since Quirkos saves all your project data as a standard SQLite database, it’s possible to open and edit it with a number of third-party tools such as DB Browser for SQLite to perform advanced operations. You can also run standard SQL queries (SELECT … FROM … WHERE) from the SQLite command line interface to explore and edit the database. Our full manual has more details on the database structure. Hopefully, this will also allow for better integration with other qualitative analysis software in the future.
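To illustrate the kind of query this enables, here is a minimal sketch using Python's built-in sqlite3 module. It builds a throwaway in-memory database rather than opening a real project file, and the table and column names are invented for the example – the real schema is documented in the Quirkos manual.

```python
import sqlite3

# Stand-in for opening a real project file, e.g.
# sqlite3.connect("myproject.qrk") -- filename hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE highlight (code TEXT, source TEXT, quote TEXT)")
conn.executemany(
    "INSERT INTO highlight VALUES (?, ?, ?)",
    [("Fish", "Interview 1", "I caught a trout"),
     ("Fish", "Interview 2", "We mostly eat salmon"),
     ("Ships", "Interview 1", "The ferry was late")],
)

# Count coded quotes per theme -- exactly the kind of
# SELECT ... GROUP BY query you could run on a project database.
rows = conn.execute(
    "SELECT code, COUNT(*) FROM highlight GROUP BY code ORDER BY code"
).fetchall()
print(rows)  # [('Fish', 2), ('Ships', 1)]
```

The same SELECT statement works unchanged in the sqlite3 command-line shell or a graphical browser tool.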

 

If you are interested in seeing how Quirkos can help with coding and presenting your qualitative data, you can download a one-month free trial and try for yourself. Good luck with your research!

 

Tools for critical appraisal of qualitative research

appraising qualitative data

I've mentioned before how the general public are very quantitatively literate: we are used to dealing with news containing graphs, percentages, growth rates, and big numbers, and they are common enough that people rarely have trouble engaging with them.

 

In many fields of study this is also true for researchers and those who use evidence professionally. They become accustomed to p-values, common statistical tests, and plot charts. Lots of research is based on quantitative data, and there is a training and familiarity in these methods and data presentation techniques which creates a lingua franca for researchers across disciplines and regions.

 

However, I've found in previous research that many evidence-based decision makers are not comfortable with qualitative research. There are many reasons for this, but I frequently hear people essentially say that they don't know how to appraise it. While they can look at a sample size, a recruitment technique and an r-squared value and get an idea of the limitations of a study, this is much harder for many practitioners to do with qualitative techniques they are less familiar with.

 

But this needn’t be the case: qualitative research is not rocket science, and there are fundamental common values which can be used to assess the quality of a piece of research. This week, a discussion on appraisal of qualitative research was started on Twitter by the Mental Health group of the 'National Elf Service’ (@Mental_Elf) - an organisation devoted to collating and summarising health evidence for practitioners.

 

People contributed many great suggestions of guides and toolkits that anyone can use to examine and critique a qualitative study, even if the user is not familiar with qualitative methodologies. I frequently come across this barrier to promoting qualitative research in public sector organisations, so was halfway through putting together these resources when I realised they might be useful to others!

 

First of all, David Nunan (@dnunan79) based at the University of Oxford shared an appraisal tool developed at the Centre for Evidence-Based Medicine (@CebmOxford).

 

Lucy Terry (@LucyACTerry) offered specific guidelines for charities from New Philanthropy Capital, which gives five key quality criteria: that the research should be Valid, Reliable, Confirmable, Reflexive and Responsible.

 

There’s also an article by Kuper et al (2008) which offers guidance on assessing a study using qualitative evidence. As a starting point, they list 6 questions to ask:

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?
     

The International Centre for Allied Health Evidence at the University of South Australia has a list of critical appraisal tools, including ones specific to qualitative research. From these, I quite like the checklist format of one developed by the Critical Appraisal Skills Programme; I can imagine this going down well with health commissioners.

 

Another from the Occupational Therapy Evidence-Based Practice Research Group at McMaster University in Canada is more detailed, and is also available in multiple languages and an editable Word document.

 

Finally, Margaret Roller and Paul Lavrakas have a recent textbook (Applied Qualitative Research Design: A Total Quality Framework Approach, 2015) that covers many of these issues in research, and details the Total Quality Framework that can be used for designing, discussing and evaluating qualitative research. The book contains specific chapters detailing the application of the framework to different projects and methodologies. Margaret Roller also has an article on her excellent blog on weighing the value of qualitative research, which gives an example of the Total Quality Framework.

 

In short, there are a lot of options to choose from, but the take-away message from them is that the questions are simple, short, and largely common sense. However, the process of assessing even just a few pieces of qualitative research in this way will quickly get evidence-based practitioners into the habit of asking these questions of most projects they come across, hopefully increasing their comfort level in dealing with qualitative studies.

 

The tools are also useful for students, even if they are familiar with qualitative methodologies, as they help facilitate a critical reading that can give focus to paper discussion groups or literature reviews. Adopting (or modifying) one of the appraisal techniques here would also be a great start to a systematic review or meta-analysis.

 

Finally, there are a few sources from the Evidence and Ethnicity in Commissioning project I was involved with that might be useful, but if you have any suggestions please let me know, either in the forum or by e-mailing daniel@quirkos.com and I will add these to the list. Don't forget to find out more about using Quirkos for your qualitative analysis and download the free trial.

 

 

Developing and populating a qualitative coding framework in Quirkos

coding blog

 

In previous blog articles I’ve looked at some of the methodological considerations in developing a coding framework. That article looked at top-down and bottom-up approaches: whether you start with large overarching (a-priori) themes and break them down, or begin with smaller, simpler themes and gradually impose meanings and connections in an inductive approach. This series still needs to cover the various approaches grouped together as grounded theory, but that will come in a future article.

 

For now, I want to leave the methodological and theoretical debates aside, and look purely at the mechanics of creating the coding framework in qualitative analysis software. While I’m going to be describing the process using Quirkos as the example software, the fundamentals will apply even if you are using Nvivo, MaxQDA, AtlasTi, Dedoose, or most of the other CAQDAS packages out there. It might help to follow this guide with the software of your choice, you can download a free trial of Quirkos right here and get going in minutes.

 

First of all, a slightly guilty confession: I personally always plan out my themes on paper first. This might sound a bit hypocritical coming from someone who designs software for a living, but I find myself being a lot more creative on paper, and there’s something about the physicality of scribbling all over a big sheet of paper that helps me think better. I do this a lot less now that Quirkos lets me physically move themes around the screen, group them by colour and topic, but for a big complicated project it’s normally where I start.

 

But the computer obviously allows you to create and manage hundreds of topics, and to rearrange and rename them (which is difficult to do on paper, even with pencil and eraser!). It will also make it easy to assign parts of your data to one of the topics, and to see all of the data associated with it. While paper notes may help you conceptually think through some of the likely topics in the study and connect them to your research questions, I would recommend moving to a QDA software package fairly early on in a project.

 

Obviously, whether you are taking an a-priori or grounded approach will change whether you will be creating most of your themes before you start coding, or adding to them as you go along. Either way, you will need to create your topics/categories/nodes/themes/bubbles or whatever you want to call them. In Quirkos the themes are informally called ‘Quirks’, and are represented by default as coloured bubbles. You can drag and move these anywhere around the screen, change their colours, and their size increases every time you add some text to them. It’s a neat way to get confirmation and feedback on your coding. In other software packages there will just be a number next to the list of themes that shows how many coding events belong to each topic.

 


In Quirkos, there are actually three different ways to create a bubble theme. The most common is the large (+) button at the top left of a canvas area. This creates a new topic bubble in a random place with a random colour, and automatically opens the Properties dialogue for you to edit it. Here you can change the name, for example to ‘Fish’ and put in a longer description: ‘Things that live in water and lay eggs’ so that the definition is clear to yourself and others. You can also choose the colour, from some 16 million options available. There is also the option to set a ‘level’ for this Quirk bubble, which is a way to group intersecting themes so that one topic can belong to multiple groups. For example, you could create a level called ‘Things in the sea’ that includes Fish, Dolphins and Ships, and another category called ‘Living things’ that has Fish, Dolphins and Lions. In Quirkos, you can change any of these properties at any time by right clicking on the appropriate bubble.
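The Levels idea described above can be thought of as a many-to-many grouping, in contrast to the strict parent/child tree of sub-categories. A minimal Python sketch, using the example names from the paragraph above (not any real software schema):

```python
# Each level is a named group of themes; one theme can belong to
# several levels at once (names taken from the example above).
levels = {
    "Things in the sea": {"Fish", "Dolphins", "Ships"},
    "Living things": {"Fish", "Dolphins", "Lions"},
}

# Set intersection finds the themes shared by both lenses.
shared = sorted(levels["Things in the sea"] & levels["Living things"])
print(shared)  # ['Dolphins', 'Fish']
```

Modelling levels as sets makes it clear why this is different from a tree: membership overlaps freely, so a theme never has to pick a single parent group.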

 

quirkos qualitative properties editor

 

Secondly, you can right click anywhere on the ‘canvas’ area that stores your topics to create a new theme bubble at that location. This is useful if you have a little cluster of topics on a similar theme, and you want to create a new related bubble near the other ones. Of course, you can move the bubbles around later, but this makes things a bit easier.

 

If you are creating topics on the fly, you can also create a new category by dragging and dropping text directly onto the same add Quirk button. This creates a new bubble that already contains the text you dragged onto the button. This time, the properties dialogue doesn’t immediately pop up, so you can keep adding more sections of data to the theme. Don’t forget to name it eventually though!

 

drag and drop qualitative topic creation

 

All software packages allow you to group your themes in some way, usually this is in a list or tree view, where sub-categories are indented below their ‘parent’ node. For example, you might have the parent category ‘Fish’ and the sub-categories ‘Pike’, ‘Salmon’ and ‘Trout’. Further, there might be sub-sub categories, so for example ‘Trout’ might have themes for ‘Brown Trout’, ‘Brook Trout’ and ‘Rainbow Trout’. This is a useful way to group and sort your themes, especially as many qualitative projects end up with dozens or even hundreds of themes.

 

In Quirkos, categories work a little differently. To make a theme a sub-category, just drag and drop that bubble onto the bubble that will be its parent, like stacking them. You will see that the sub-category goes behind the parent bubble, and when you move your mouse over the top category, the others will pop out, automatically arranging like petals on a flower. You can also remove a sub-category just by dragging it out from the parent, like picking petals from a flower! You can create sub-sub categories too (i.e. up to three levels deep), but no more than this. When a Quirk has sub-categories clustered below it, this is indicated by a white ring inside the bubble. This approach makes creating clusters (and changing your mind) very easy and visual.

 

Now, to add something to the topic, you just have to select some text, and drag and drop it onto the bubble or theme. This will work in most software packages, although in some you can also right click within the selected text where you will find a list of codes to assign that section to.


Quirkos, like other software, shows coloured highlight stripes over the text or in the margin, indicating which sections of the document have been added to which codes. In Quirkos, you can always see which topic a stripe represents by hovering the mouse cursor over the coloured section, and the topic name will appear in the bottom left of the screen. You can also right-click on the stripe and remove that section of text from the code at any time. Once you have done some coding, in most software packages you can double click on a topic and see everything you’ve coded to it so far.

 

Hopefully this should give you confidence to let the software do what it does best: keep track of lots of different topics and what goes in them. How you actually choose which topics and methodology to use in your project is still up to you, but using software helps you keep everything together and gives you a lot of tools for exploring the data later. Don’t forget to read more about the specific features of Quirkos here and download the free trial from here.

 

Transcribing your own qualitative data

diy qualitative transcription

In a previous blog article I talked about some of the practicalities and costs involved in using a professional transcribing service to turn your beautifully recorded qualitative interviews and focus groups into text data ready for analysis. However, hiring a transcriber is expensive, and is often beyond the means of most post-graduate researchers.

 

There are also real advantages to doing the transcription yourself: it makes for a better end result and gets you much closer to your data. In this article I’m going to go through some practical tips that should make transcription a little less painful.

 

But first, a little more on the benefits of transcribing your own data. If you were there in the room with the respondent, you asked the questions, and were watching and listening to the participant. Do the transcription soon after the interview and you are likely to remember words that might be muffled in the recording, points that the respondent emphasised by shaking their head – lots of little details to capture.

 

It’s important to remember that transcription is an interpretive act (Bailey 2008): you can’t just convert an interview into a perfect text version of that data. While this might be obvious when working between different languages where translation is required, I would argue that a transcriber always makes subjective decisions about misheard words, how to record pauses and inflections, or unconsciously changes words or their order.

 

As I’ve mentioned before, you lose a lot of the nuance of an interview when moving to text, and the transcriber has to make choices about how to mitigate this: Was this hesitation or just pausing for breath? How should I indicate that the participant banged on the table for emphasis? Capturing this non-verbal communication in a transcript can really change the interpretation of qualitative data, so I like it when this process is in the control of the researcher. For a lot more on these and other issues there is a review of the qualitative transcription literature by Davidson (2009).


 

What do I actually type?

In a word, everything: the questions, the answers, the hesitations and mumbles, and things that were communicated, but not said verbally.

 

First, some guidelines for what the transcription should look like, bearing in mind that there is no single standard. You can use a word processor, or a spreadsheet like Excel. It can be a little more difficult to get formatting right in a spreadsheet: for example, you will need to use Shift+Return to make a new paragraph within a cell, and getting it to look right on a printed page is more of a challenge. Yet since interviews, and especially focus groups, will usually have more than one voice to assign text to, you need some way to structure the data.

 

In a spreadsheet you can use three columns: the first for an occasional time index (so you can see where in the audio a section of text occurs), the second for the speaker’s name, and the third, widest one for the text. While you can use a table to do the same thing in Word, spreadsheets will auto-complete the names, making things a bit faster. However, for a one-on-one interview it’s easy to just use a Q: / A: label for each speaker, and put periodic time stamps in brackets at the top of each page.
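As a quick sketch of that three-column structure, the rows below show one possible layout (the speaker names, timings and content are invented purely for illustration), written out as a CSV file so it opens directly in Excel or a similar spreadsheet:

```python
import csv

# Each row: occasional time index, speaker, transcribed text.
# Everything here is a made-up example of the layout, not real data.
rows = [
    ["00:00:05", "Interviewer", "Can you tell me about your first day?"],
    ["", "Participant", "It was... [laughs] honestly a bit overwhelming."],
    ["00:01:30", "Interviewer", "What made it overwhelming?"],
    ["", "Participant", "The noise, mostly. [5 second pause] And the people."],
]

with open("interview_01.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Time", "Speaker", "Text"])  # header row
    writer.writerows(rows)
```

The time column is left blank except for occasional markers, which keeps the transcript readable while still letting you jump back to the right point in the audio.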

 

Second, record non-verbal data in a consistent way, usually in square brackets. For example [hesitates], [laughter], [bangs fist on table], or even when [coffee is delivered]. You may choose to use italics or bold type to show when someone puts emphasis on a word, but choose one or the other and be consistent.

 

Next, consider your system for indicating pauses. Usually a short pause is represented by three dots ‘…’. Anything longer is recorded in square brackets and roughly timed: [5 second pause]. These pauses can show hesitation in the participant to answer a difficult question, and long pauses may have special meaning. There is actually a whole article on the importance of silences by Poland and Pederson (1998).

 

When you are transcribing, you also need to decide on the level of detail. Will you record every Um, Er, and stutter? In verbal speech these are surprisingly common. Most qualitative research does want this level of detail, but it is obviously more time consuming to type. You’ll often have self-corrections in the speech as well, commonly “I’ve… I’ll never say that ag... any more”. Do you include the first attempt? It’s clear in the audio that the participant was going to say ‘again’ but corrected themselves to ‘any more’ – should this be recorded? Decide on the level of detail early on, and be consistent.

 

Sometimes people can go completely off topic, or there will be a section in the audio where you were complaining about the traffic, ordering coffee, or a phone call interrupted things. If you decide it’s not relevant to capture, just indicate with time markings what happened in square brackets: [cup smashed on the floor, 5min to clear up].

 

Once you are done with an interview, it’s a good idea to listen back to it while reading through the transcript and correcting any mistakes. The first few times you will be surprised at how often you have swapped a few words, or made strange typos.

 

 

So how long will it all take?

Starting out with all this can be daunting, especially if you have a large number of interviews to transcribe. A good rule of thumb is that transcribing an interview verbatim will take between 3 and 6 times the length of the audio. So for an hour of recording, it could take as little as three hours, or as much as six, to type up.

 

This sounds horrifying, and it is. I’m quite a fast typist, and have done quite a bit of transcription before, but I average between 3x and 4x the audio time. If you are slow at typing, need to pause the audio a lot, or have to put in a lot of extra descriptive detail, it can take much longer. The tips below should help you get towards the 3x benchmark, but it’s worth planning out your time a little before you begin.

 

If you have twenty interviews each lasting on average 1 hour, you should probably plan for at least 60 hours of transcription time. That is nearly nine days, or almost two working weeks, at a standard 9-5 work day. I don’t say this to frighten you, just to mentally acclimatise you to the task ahead!
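The arithmetic above is simple enough to sketch as a small planning helper. This is only a rough estimator (the multiplier and the length of a working day are assumptions, not fixed rules):

```python
def transcription_budget(n_interviews, hours_each, multiplier=3.0, workday_hours=8.0):
    """Estimate total transcription hours and working days.

    multiplier: hours of typing per hour of audio; a common rule of
    thumb is 3x to 6x for verbatim transcription.
    """
    total_hours = n_interviews * hours_each * multiplier
    return total_hours, total_hours / workday_hours

# Twenty 1-hour interviews at the optimistic 3x rate:
hours, days = transcription_budget(20, 1.0)
print(f"{hours:.0f} hours of typing, about {days:.1f} working days")

# At the slower 6x rate the same interviews take twice as long:
hours_slow, days_slow = transcription_budget(20, 1.0, multiplier=6.0)
```

Plugging in your own interview count and typing speed gives a realistic range before you commit to a schedule.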

 

It’s also worth noting that transcription is very intensive work. You will be frantically typing as fast as you can, and it requires intense mental concentration to listen and type simultaneously, while also watching for errors and fixing typos. I don’t think most people could do more than two or three hour sessions at a time without going a little crazy! So you need to plan in some breaks, or at least some different non-typing work.

 

If this sounds insurmountable, don’t panic. Just spread out the work, especially if you can do the transcripts after each interview, instead of in one huge batch. This is generally better since you can review one interview before you do the next one, giving you a chance to change how you ask questions and cover any gaps. Transcription can also be quite engrossing (since you can’t possibly do anything else at the same time), and it’s nice to see the hours ticking off.

 

 

 

So how can you make this faster?

You need to set up your computer (or laptop) to be a professional transcribing station, where you can hear the audio, start and stop it easily, and type comfortably for a long period of time.

 

Even if you type really fast, you won’t be able to keep up with the speed that people speak, meaning you will have to frequently start and stop the audio to catch up. Most professionals will use a ‘foot-pedal’ to do this, so that they don’t have to stop typing, come out of the word processing software and pause an audio player. Even if you are playing audio from a dictaphone next to you, going away from the keyboard, stopping and starting the buttons on the dictaphone and coming back to type again quickly becomes tedious.

 

A foot-pedal lets you start and stop the audio by tapping with your foot (or toe) and often has additional buttons to rewind a little (very useful) or fast-forward through the audio. Now, these cost around £30/$40 or more, but can be a worthwhile investment. However, it’s also worth checking to see if you can borrow one from a colleague, or even if your department or library has one for hire.

 

But if you are a cheapskate like me, there are other ways to do this. Did you know that you can have two or more keyboards attached to a computer, and they will all work? An extra keyboard (with a USB connector) can cost as little as £10/$15 if you don’t already have a spare lying around, and can be plugged into a laptop as well. Put it on the floor, and you can set up one of its keys as a ‘global shortcut’ in an audio player like VLC. Here’s a forum detailing how to set up a certain key so that it will start and stop the audio even if you are typing in another programme. Put your second keyboard on the floor, and tap your chosen key with your toe to start and stop! Even if you only use one keyboard, you can set a shortcut in VLC (for example Alt+1), and every time you press that combination it will play or pause the audio, even if VLC is hidden.

 

There’s another advantage to using VLC: it can slow down your recordings as they are played back! Once your audio is playing, click on the Playback menu item, then Speed. Change to Slower, and listen as your participants magically start talking like sleepy drunks! This helps me more than anything, because I can slow down the speech to a level that means I can type constantly without getting behind. This method does warp the speech, and having the setting too high can make it difficult to understand. However, the less you have to pause and stop the audio to catch up with your typing, the faster your transcription will go.

 

You can also do this with audio software like Audacity. Here, import your audio file, and click on Effect, and Change Tempo. Drag the slider to the left to slow down the speech (try 20% – 50%) without changing the ‘pitch’ so everyone doesn’t end up sounding like Barry White. You can then save the file with your desired speed, and the quality can be a little better than the live speed changes in VLC.

 

General tips for good typing can help too. Watch the screen as you type, not your fingers, so that you can quickly pick up on mistakes. Learn to use all your fingers to type, don’t just ‘hunt and peck’ - a quick typing tutorial might save you hours in the long run if you don’t do this already.

 

Last of all, consider your posture. I’m serious! If you are going to be hunched up and typing for days and days, bad posture is going to make you ache and get stressed. Make sure your desk and chair are the right height for you, try using a proper keyboard if working from a laptop (or at least prop up the laptop to a good angle). Make sure the lighting is good, there is no screen glare, and use a foot rest if this helps the position of your back. Scrunched up on a sofa with a laptop in your lap for 60 hours is a great way to get cramp, back-ache and RSI. Try and take a break at least every half an hour: get up and stretch, especially your hands and arms.

 

So, you have your beautiful and detailed transcripts? Now you can bring them into Quirkos to analyse them! Quirkos is ideal for students doing their first qualitative analysis project, as it makes coding and analysis of text visual, colourful and easy to learn. There’s a free trial on our website, and you can bring in data from lots of different sources to work with.

 

Sampling considerations in qualitative research

sampling crowd image by https://www.flickr.com/photos/jamescridland/613445810/in/photostream/

 

Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research, but it is a huge and complicated issue. There’s a great chapter ‘Designing and Selecting Samples’ in the book Qualitative Research Practice (Ritchie et al 2013) which goes over many of these methods in detail.

 

Your research questions and methodological approach (e.g. grounded theory) will guide you to the right sampling methods for your study – there is never a one-size-fits-all approach in qualitative research! For more detail on this, especially on the importance of culturally embedded sampling, there is a well-cited article by Luborsky and Rubinstein (1995). But it’s also worth talking to colleagues, supervisors and peers to get advice and feedback on your proposals.

 

Marshall (1996) briefly describes three different approaches to qualitative sampling: judgement/purposeful sampling, theoretical sampling and convenience sampling.

 

But before you choose any approach, you need to decide what you are trying to achieve with your sampling. Do you have a specific group of people that you need to have in your study, or should it be representative of the general population? Are you trying to discover something about a niche, or something that is generalizable to everyone? A lot of qualitative research is about a specific group of people, and Marshall notes:
“This is a more intellectual strategy than the simple demographic stratification of epidemiological studies, though age, gender and social class might be important variables. If the subjects are known to the researcher, they may be stratified according to known public attitudes or beliefs.”

 

Broadly speaking, convenience, judgement and theoretical sampling can all be seen as purposeful – deliberately selecting people who are of interest in some way. However, randomly selecting people from a large population is still a desirable approach in some qualitative research. Because qualitative studies tend to have a small sample size, due to the in-depth nature of engagement with each participant, this can be a problem if you want a representative sample. If you randomly select 15 people, you might by chance end up with more women than men, or a younger than desired sample. That is why qualitative studies may use a little purposeful sampling, finding people to make sure the final profile matches the desired sampling frame. For much more on this, check out the last blog post on recruitment.
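The chance of a small random sample drifting from the population profile is easy to quantify. As a sketch, assuming a population split 50/50 on some characteristic, the binomial distribution gives the probability that a random sample of 15 comes out noticeably unbalanced:

```python
from math import comb

def prob_at_least(k, n=15, p=0.5):
    """Probability that at least k of n randomly sampled people fall in
    one category, when the population is split p / (1 - p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that 10 or more of 15 participants are (say) women,
# even though the population is 50% women:
lopsided = prob_at_least(10)

# Counting either direction (10+ women OR 10+ men) doubles it,
# since the two events are symmetric and mutually exclusive:
print(f"{2 * lopsided:.0%} chance of a 10-vs-5 (or worse) split")
```

So even with genuinely random selection, roughly three in ten samples of 15 will have a two-to-one (or worse) imbalance, which is why a purposeful top-up is often used to match the sampling frame.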

 

Sample size will often also depend on conceptual approach: if you are testing a prior hypothesis, you may be able to get away with a smaller sample size, while a grounded theory approach to develop new insights might need a larger group of respondents to test that the findings are applicable. Here, you are likely to take a ‘theoretical sampling’ approach (Glaser and Strauss 1967) where you specifically choose people who have experiences that would contribute to a theoretical construct. This is often iterative, in that after reviewing the data (for theoretical insights) the researcher goes out again to find other participants the model suggests might be of interest.

 

The convenience sampling approach, which Marshall mentions as being the ‘least rigorous technique’, is where researchers target the most ‘easily accessible’ respondents. This could even be friends, family or faculty. This approach can rarely be methodologically justified, and is unlikely to provide a representative sample. However, it is endemic in many fields, especially psychology, where researchers tend to turn to easily accessible psychology students for experiments: skewing the results towards white, affluent, well-educated Western students.

 

Now we turn to snowball sampling (Goodman 1961). This differs from purposeful sampling in that new respondents are suggested by existing ones. In general, it is most suited to work with ‘marginalised or hard-to-reach’ populations, where respondents are not often forthcoming (Sadler et al 2010). For example, people may not be open about their drug use, political views or living with stigmatising conditions, yet often form closely connected networks. Thus, by gaining trust with one person in the group, others can be recommended to the researcher. However, it is important to note the limitations of this approach: there is a risk of systemic bias, because if the first person you recruit is not representative in some way, their referrals may not be either. So you may be looking at people living with HIV/AIDS, and recruit through a support group that is formed entirely of men: they are unlikely to suggest women for the study.

 

For these reasons there are limits to the generalisability and appropriateness of snowball sampling for most subjects of inquiry, and it should not be taken as an easy fix. Yet while many practitioners note the limitations of snowball sampling, it can be very well suited to certain kinds of social and action research; this article by Noy (2008) outlines some of its potential benefits for power relations and studying social networks.

 

Finally, there is the issue of sample size and ‘saturation’. This is the point at which enough data has been collected to confidently answer the research questions. For a lot of qualitative research this means data that has been coded as well as collected, especially if using some variant of grounded theory. However, saturation is often a source of anxiety for researchers: see for example the amusingly titled article “Are We There Yet?” by Fusch and Ness (2015). Unlike quantitative studies, where a sample size can be determined by the desired effect size and confidence interval of a chosen statistical test, it is more difficult to put an exact number on the right number of participant responses. This is especially because responses are themselves qualitative, not just numbers in a list: one response may be more data-rich than another.

 

While a general rule of thumb would indicate there is no harm in collecting more data than is strictly necessary, there is always a practical limitation, especially in resource and time constrained post-graduate studies. It can also be more difficult to recruit than anticipated, and many projects working with very specific or hard-to-reach groups can struggle to find a large enough sample size. This is not always a disaster, but may require a re-examination of the research questions, to see what insights and conclusions are still obtainable.

 

Generally, researchers should have a target sample size and definition of what data saturation will look like for their project before they begin sampling and recruitment. Don’t forget that qualitative case studies may only include one respondent or data point, and in some situations that can be appropriate. However, getting the sampling approach and sample size right is something that comes with experience, advice and practice.

 

As I always seem to be saying in this blog, it’s also worth considering the intended audience for your research outputs. If you want to publish in a certain journal or academic discipline, it may not be receptive to research based on qualitative methods with small or ‘non-representative’ samples. Silverman (2013 p424) mentions this explicitly with examples of students who had publications rejected for these reasons.

 

So as ever, plan ahead for what you want to achieve with your research project and the questions you want to answer, and work backwards to choose the appropriate methodology, methods and sample for your work. Also check the companion article about recruitment, as most of these issues need to be considered in tandem.

 

Once you have your data, Quirkos can be a great way to analyse it, whether your sample size has one or dozens of respondents! There is a free trial and example data sets to see for yourself if it suits your way of working, and much more information in these pages. We also have a newly relaunched forum, with specific sections on qualitative methodology if you wanted to ask questions, or comment on anything raised in this blog series.

 

 

Qualitative evidence for SANDS Lothians

qualitative charity research - image by cchana

Charities and third sector organisations are often sitting on lots of very useful qualitative evidence, and I have already written a short blog post on some common sources of data that can support funding applications, evaluations and impact assessments. We wanted to do a ‘qualitative case study’: to work with one local charity to explore what qualitative evidence they already had and what they could collect, and to use Quirkos to help create some reports and impact assessments.

 

SANDS Lothians is an Edinburgh based charity that provides long-term counselling and support for families who have experienced bereavement through the loss of a baby around the time of birth. They approached us after seeing advertisements for one of our local qualitative training workshops.


Director Nicola Welsh takes up the story. “During my first six months in post, I could see there was much evidence to highlight the value of our work but was struggling to pull this together in some order which was presentable to others. Through working with Daniel and Kristin we were able to start to structure what we were looking to highlight and with their help begin to organise our information so it was available to share with others. Quirkos allowed us to pull information from service users, stats and studies to present this in a professional document. They gave us the confidence to ask our users about their experiences and encouraged us to record all the services we offered to allow others at a glance to get a feel for what we provide.”

 

First of all, we discussed what would be most useful to the organisation. Since they were in discussion with major partners about possible funding, an impact assessment would be valuable in this process.

 

They also identified concerns from their users about a specific issue, prescriptions for anti-depressants, and wanted to investigate this further. It was important to identify the audience that SANDS Lothians wanted to reach with this information, in this case, GPs and other health professionals. This set the format of a possible output: a short briefing paper on different types of support that parents experiencing bereavement could be referred to.

 

We started by doing an ‘evidence assessment’ (or evidence audit, as a previous blog post notes), looking for evidence of impact that SANDS Lothians already had. Some of this was quantitative, such as the number of phone calls received each month. As they had recently started counting these calls, it was valuable evidence of people using their support and guidance services. In the future they will be able to see trends in the data, such as an increase in demand or seasonal variation, that will help them plan better.

 

They already had national reports from NHS Scotland on Infant Mortality, and some data from the local health board. But we quickly identified a need for supportive scientific literature that would help them make a better case for extending their counselling services. One partner had expressed concerns that counselling was ineffective, but we found a number of studies that showed counselling to be beneficial for this kind of bereavement. Finding these journal articles for them helped provide legitimacy to the approach detailed in the impact assessment.

 

In fact, a simple step was to create a list of all the different services that SANDS Lothians provides. This had not been done before, but quickly showed how many different kinds of support were offered, and the diversity of their work. This is also powerful information for potential funders or partners, and useful to be able to present quickly.

 

Finally, we did a mini qualitative research project!

 

A post on their Facebook page asking people to share experiences of being prescribed antidepressants after bereavement got more than 20 responses. While most of these were very short, they did give us valuable and interesting information: for example, not everyone whose GP had suggested anti-depressants saw this as negative, and some talked about how these had helped them at a difficult time.

 

SANDS Lothians already had amazing and detailed written testimonials and stories from service users, so I was able to combine the responses from testimonials and comments from the Facebook feed into one Quirkos project, and draw across them all as needed.

 

Using Quirkos to pull out the different responses to anti-depressants showed that there were similar numbers of positive and negative responses, and also highlighted parents’ worries we had not considered, such as the effect of medication when trying to conceive again. This is the power of a qualitative approach: by asking open questions, we got responses about issues we wouldn’t have thought to ask about in a direct survey.

 

quirkos bubble cluster view

 

When writing up the report, Quirkos made it quick and easy to pull out supportive quotes. As I had previously gone through and coded the text, I could click on the counselling bubble, immediately see relevant comments, and copy and paste them into the report. Now SANDS Lothians also has an organised database of comments on how their counselling services helped clients, which they can draw on at any time.

 

Nicola explains how they have used the research outputs. “The impact assessment and white paper has been extremely valuable to our work. This has been shared with senior NHS Lothian staff regarding possible future partnership working.  I have also shared this information with the Scottish Government following the Bonomy recommendations. The recommendations highlight the need for clear pathways with outside charities who are able to assist bereaved parents. I was able to forward our papers to show our current support and illustrate the position Lothians are in regarding the opportunity to have excellent bereavement care following the loss of a baby. It strengthened the work we do and the testimonials give real evidence of the need for this care. 

 

I have also given our papers out at recent talks with community midwives and charge midwives in West Lothian and Royal Infirmary Edinburgh. Cecilia has attached the papers to grant applications which again strengthens our applications and validates our work.”

 

Most importantly, SANDS Lothians now have a framework to keep collecting data, “We will continue to record all data and update our papers for 2016.  Following our work with Quirkos, we will start to collate case studies which gives real evidence for our work and the experiences of parents.  Our next step would be to look specifically at our counselling service and its value.” 

 

“The work with Quirkos was extremely helpful. In very small charities, it is difficult to always have the skills to be an expert in all areas and find the time to train. We are extremely grateful to Daniel and Kristin who generously volunteered their time to assist us to produce this work. I would highly recommend them to any business or third sector organisation who need assistance in producing qualitative research.  We have gained confidence as a charity from our journey with Quirkos and would most definitely consider working with them again in the future.”

 

It was an incredible and emotional experience to work with Nicola and Cecilia at SANDS Lothians on this small project, and I am so grateful to them for inviting us in to help, and for sharing so much. If you want any more information about the services they offer, or need to speak to someone about losing a baby through stillbirth, miscarriage or soon after birth, all their contact details are available on their website: http://www.sands-lothians.org.uk .

 

If you want any more information about Quirkos and a qualitative approach, feel free to contact us directly, or there is much more information on our website. Download a free trial, or read more about adopting a qualitative approach.

 

 

Designing a semi-structured interview guide for qualitative interviews

clipboard by wikidave https://www.flickr.com/photos/wikidave/7386337594

 

Interviews are a frequently used research method in qualitative studies. You will see dozens of papers that state something like “We conducted n in-depth semi-structured interviews with key informants”. But what exactly does this mean? What exactly counts as in-depth? How structured are semi-structured interviews?

 

The term “in-depth” is defined fairly vaguely in the literature: it generally means a one-to-one interview on one general topic, which is covered in detail. Usually these qualitative interviews last about an hour, although sometimes much longer. It sounds like two people having a discussion, but there are differences in the power dynamics, and end goal: for the classic sociologist Burgess (2002) these are “conversations with a purpose”.

 

Qualitative interviews generally differ from quantitative survey-based questions in that they are looking for a more detailed and nuanced response. They also acknowledge there is no ‘one-size fits all’, especially when asking someone to recall a personal narrative about their experiences. Instead of a fixed “research protocol” that asks the same question to each respondent, most interviewers adopt a more flexible approach. However there is still a need “...to ensure that the same general areas of information are collected from each interviewee; this provides more focus than the conversational approach, but still allows a degree of freedom and adaptability in getting information from the interviewee” –MacNamara (2009).

 

Turner (2010) (who coincidentally shares the same name as me) describes three different types of qualitative interview: Informal Conversation, General Interview Guide, and Standardised Open-Ended. These can be seen as a scale from least to most structured, and we are going to focus on the ‘interview guide’ approach, which takes a middle ground.

 

An interview guide is like a cheat-sheet for the interviewer – it contains a list of questions and topic areas that should be covered in the interview. However, these are not to be read verbatim and in order; in fact, they are more like an aide-mémoire. “Usually the interviewer will have a prepared set of questions but these are only used as a guide, and departures from the guidelines are not seen as a problem but are often encouraged” – Silverman (2013). That way, the interviewer can add extra questions about an unexpected but relevant area that emerges, and sections that don’t apply to the participant can be skipped.

 

So what do these look like, and how does one go about writing a suitable semi-structured interview guide? Unfortunately, it is rare in journal articles for researchers to share the interview guide, and it’s difficult to find good examples on the internet. Basically they look like a list of short questions and follow-on prompts, grouped by topic. There will generally be about a dozen. I’ve written my fair share of interview guides for qualitative research projects over the years, either on my own or with the collaboration of colleagues, so I’m happy to share some tips.

 


Questions should answer your research questions
Your research project should have one or several main research questions, and these should guide the topics covered in the interviews, so that the responses can hopefully answer them. However, you can’t just ask your respondents “Can the experience of male My Little Pony fans be described through the lens of Derridean deconstruction?”. You will need to break down your research into questions that have meaning for the participant and that they can engage with. The questions should be fairly informal and jargon free (unless that person is an expert in that field of jargon), open ended – so they can’t be easily answered with a yes or no – and non-leading, so that respondents aren’t pushed towards a certain interpretation.

 

 

Link to your proposed analytical approach
The questions on your guide should also be constructed in such a way that they will work well for your proposed method of analysis – which again you should already have decided. If you are doing narrative analysis, questions should be encouraging respondents to tell their story and history. In Interpretative Phenomenological Analysis you may want to ask more detail about people’s interpretations of their experiences. Think how you will want to analyse, compare and write up your research, and make sure that the questioning style fits your own approach.

 

 

Specific ‘Why’ and prompt questions
It is very rare in semi-structured interviews that you will ask one question, get a response, and then move on to the next topic. Firstly, you will need to provide some structure for the participant, so they are not expected (or encouraged) to recite their whole life story. But at another level, you will usually want to probe more about specific issues or conditions. That is where the flexible approach comes in. Someone might reveal something that interests you and is relevant to the research project. So ask more! It’s often useful in the guide to list a series of prompt words that remind you of more areas of detail that might be covered. For example, the question “When did you first visit the doctor?” might be annotated with optional prompts such as “Why did you go then?”, “Were you afraid?” or “Did anyone go with you?”. Prompt words might reduce this to ‘Why THEN / afraid / with someone’.

 

 

Be flexible with order
Generally, an interview guide will be grouped into several topics, each with a few questions. One of the most difficult skills is how to segue from one topic or question to the next, while still seeming like a normal conversation. The best way to manage this is to make sure that you are always listening to the interviewee, and thinking at the same time about how what they are saying links to other discussion topics. If someone starts talking about how they felt isolated visiting the doctor, and one of your topics is about their experience with their doctor, you can ask ‘Did your doctor make you feel less isolated?’. You might then be asking about topic 4, when you are only on topic 1, but you now have a logical link to ask the more general written question ‘Did you feel the doctor supported you?’. The ability to flow from topic to topic as the conversation evolves (while still covering everything on the interview guide) is tricky, and requires you to:

 

 

Know your guide backwards - literally
I almost never went into an interview without a printed copy of the interview guide in front of me, but it was kind of like Dumbo’s magic feather: it made me feel safe, but I didn’t really need it. You should know everything on your interview guide off by heart, and in any sequence. Since things will crop up in unpredictable ways, you should be comfortable asking questions in different orders to help the conversational flow. Still, it’s always good to have the interview guide in front of you; it lets you tick off questions as they are asked (so you can see what hasn’t been covered), gives you space to write notes, and can also be less intimidating for the interviewee, as you can look at your notes occasionally rather than staring them in the eye all the time.

 


Try for natural conversation
Legard, Keegan and Ward (2003) note that “Although a good in-depth interview will appear naturalistic, it will bear little resemblance to an everyday conversation”. You will usually find that the most honest and rich responses come from relaxed, non-combative discussions. Make the first question easy, to ease the participant into the interview, and get them used to the question-answer format. But don’t let it feel like a tennis match, where you are always asking the questions. If they ask something of you, reply! Don’t sit in silence: nod, say ‘Yes’, or ‘Of course’ every now and then, to show you are listening and empathising like a normal human being. Yet do be careful about sharing your own potentially leading opinions, and making the discussion about yourself.

 

 

Discuss with your research team / supervisors
You should take the time to get feedback and suggestions from peers, be they other people on your research project, or your PhD supervisors. This means preparing the interview guide well in advance of your first interview, leaving time for discussion and revisions. Seasoned interviewers will have tips about wording and structuring questions, and even the most experienced researcher can benefit from a second opinion. Getting it right at this stage is very important: it’s no good discovering after you’ve done all your interviews that you didn’t ask about something important.

 

 

Adapting the guide
While these are semi-structured interviews, you will usually want to cover the same general areas every time you do an interview, not least so that there is some point of comparison. It’s also common to do a first few interviews and realise that you are not asking about a critical area, or that some new potential insight is emerging (especially if you are taking a grounded theory approach). In qualitative research, this need not be a disaster (if this flexibility is methodologically appropriate), and it is possible to revise your interview guide. However, if you do end up making significant revisions, make sure you keep both versions, and a note of which respondents were interviewed with each version of the guide.

 

 

Test the timing
Inevitably, you will not have exactly the same amount of time for each interview, and respondents will differ in how fast they talk and how often they go off-topic! Make sure you have enough questions to get the detail you need, but also have ‘lower priority’ questions you can drop if things are taking too long. Test the timing of your interview guide with a few participants, or even friends before you settle on it, and revise as necessary. Try and get your interview guide down to one side of paper at the most: it is a prompt, not an encyclopaedia!

 


Hopefully these points will help demystify qualitative interview guides, and help you craft a useful tool to shape your semi-structured interviews. I’d also caution that semi-structured interviewing is a very difficult process, and benefits greatly from practice. I have been with many new researchers who tend to fall back on the interview guide too much, and read it verbatim. This generally leads to closed-off responses, and missed opportunities to further explore interesting revelations. Treat your interview guide as a guide, not a gospel, and be flexible. It’s extra hard, because you have to juggle asking questions, listening, choosing the next question, keeping the research topic in your head and making sure everything is covered – but when you do it right, you’ll get rich research data that you will actually be excited to go home and analyse.

 

 

Don’t forget to check out some of the references above, as well as the myriad of excellent articles and textbooks on qualitative interviews. There’s also Quirkos itself, software to help you make the research process engaging and visual, with a free trial to download of this innovative tool. We also have a rapidly growing series of blog post articles on qualitative interviews. These now include 10 tips for qualitative interviewing, transcribing qualitative interviews and focus groups, and how to make sure you get good recordings. Our blog is updated with articles like this every week, and you can hear about it first by following our Twitter feed @quirkossoftware.

 

 

An early spring update on Quirkos for 2016

spring snowdrops

 

About this time last year, I posted an update on Quirkos development for the next year. Even though February continues to be cold and largely snow-drop free in Scotland, why not make it a tradition?!

 

It’s really amazing how much Quirkos has grown over the last 18 months since our first release. We now have hundreds of users in more than 50 universities across the world. The best part of this is that we now get much more feedback and suggestions from qualitative researchers who are using Quirkos for different projects. Although we have always had a ‘road-map’ for developing new features for Quirkos, it’s been an aim to keep that flexible so we adapt to people’s needs.

 

We are planning a new update for Quirkos (free of course) for the end of March 2016. This version (1.4) will be a fairly major upgrade, but as ever will be released at the same time for Windows, Mac and Linux, with identical features and compatibility across all three.

 

The most significant improvement will be speed. Although v1.3 did improve this a little, it was not enough. The underlying ‘engine’ for coding and highlights was laggy and slow with large projects, and required complete rewriting from scratch. It has justifiably been the biggest source of criticism so far about Quirkos, but we hope this will now remove the last thing holding many users back. This has taken months, which is why this release is a little later than our typical quarterly updates. However, the difference so far is amazing: a nearly 10-fold increase in speed when loading, coding and editing sources. Although the interface will still look the same, everyone will notice the under-the-hood difference in small and large projects alike.

 

There will also be a few minor bug fixes in this release. We had reports that when moving encrypted projects between Windows and Mac, passwords were not accepted. We’ve fixed this issue, and a few others that people have reported. There are also several small improvements suggested by users that should make exploring the data easier. So please always e-mail us with bugs or suggestions, everything reported gets investigated, and we try and fix issues as soon as we can!

 

We will be sending the new version out to an international group of beta-testers at the end of February, so we are confident that everything works as intended before we make it publicly available. The best way to keep abreast of updates is to follow our Twitter feed: twitter.com/quirkossoftware which is usually updated every day.

 

Looking forward, the next release (v1.5) is due for the summer, and will add some exciting new features, probably including the second most frequently requested addition: memos! Proper note-taking functionality is top of many people’s request lists, and will make it much easier to record researchers’ ponderings during the analysis process. For the meantime, check out our blog post article on how to record and code your notes in Quirkos. We also hope to add a lot more tools to help look at word frequency in your qualitative data sets, including the ever popular word clouds!

 

In addition to all this, we will have a major new collaboration to announce in the next few months. This is going to represent a major leap forward in functionality for Quirkos, bringing some top minds into the fray to work on the next generation of qualitative analysis software.

 

So far, we have reinvested all our sales income into development, to make sure that we keep making the software better, and keep current and future users happy. Since all our updates are free, the best way to support further development is to buy a licence, and you will always benefit from work we do in the future to add new capabilities, and be able to suggest the features that will make your qualitative research easier and more fun.

 

 

Delivering qualitative market insights with Quirkos

delivering fashion

 

To build a well-designed, well-thought-out, and ultimately useful product, today’s technology companies must gain a deep understanding of the working mentality of people who will use that product. For Melody Truckload, a Los Angeles tech company focused on app-based freight logistics, this means intense market research and a focus on training sales agents as researchers.

 

Kody Kinzie, director of Melody’s special research and operations team, Cythlin Intelligence, was faced with introducing qualitative social research and analysis to people who had never considered themselves researchers before.

 

“Quirkos was the first truly accessible qualitative program I found,” Kinzie said.

 

Quirkos was designed with the philosophy that anyone can become a qualitative researcher. The goal is to allow companies and agencies to adopt unique ways to understand their staff and the wider marketplace. By making qualitative data visual and easy to code, users can see their results emerge and gain quick overviews of complex issues.

 

Companies like Melody are at the forefront of developing the next generation of qualitative insight, and Quirkos is helping to open the door to innovative new methods of business intelligence.

 

Kinzie started training his team members to use Quirkos but said he soon discovered that the simple coding allowed even a novice to develop complex data structures with notable uniqueness. Often, he found that these code structures were well suited to analyzing particular elements the researchers were interested in, and he began documenting the experiment to evaluate the resulting structures.

 

Here’s how Kinzie and his team use Quirkos:
One team member will send a Quirkos database to another team member — a researcher who examines the code structure and walks the requesting team member through an explanation of the thought process that went into creating the code. The data structure’s strengths and weaknesses are then assessed and distilled into a report. The researcher examines Melody’s code construction to discover what kind of information it is most effective at analyzing or categorizing, as well as whether the code tags and organizes information or clusters information into meaningful relationships.

 

This helps researchers understand what kind of questions these information structures should be applied to, and where a particular researcher’s methods might excel. The ability to use Quirkos to build and analyze unique and flexible databases from these structures has given Melody an edge in developing and sharing insights throughout the team.

 

While Melody Truckload’s app is currently wrapping up beta testing with commercial partners, the Quirkos approach has been put to the test most recently on the Melody team’s latest project, Melody Fashion.

 

“In the complex world of L.A.’s Fashion District, which is the part of town that houses the city’s fashion industry wholesale market, freight consolidation desperately needs to be modernized,” Kinzie said. “The objective of Melody Fashion is to provide a platform for fulfilment and consolidation that takes into account a detailed understanding of a market with many players.”

 

To that end, sales agents were trained to analyze interactions using grounded theory on Quirkos and to aggregate data garnered in their interactions with customers. It led to valuable insights, including a partnership with local shipping experts to bring Melody Fashion’s technology to the district.

 

Melody operations manager Marcus Galamay, who introduced new agents to Quirkos software and guided them through their first qualitative exercises, said, “Quirkos provides an intuitive introduction to qualitative analysis for our sales agents, augmenting their role in a way that’s expanded our insights into our client base. It’s a niche that many might not think to pursue, but it’s already delivered results in terms of better understanding of the data we generate and refining our market strategy based on that.”

 

Thanks to its ease of use and its powerful ability to assist in important social research, Quirkos was instrumental in providing Melody with the insight necessary to build smart and useful technology for a distinct and totally new customer base.

 

 

Using properties to describe your qualitative data sources

Properties and values editor in Quirkos

In Quirkos, the qualitative data you bring into the project is grouped as 'sources'. Each source might be something like an interview transcript, a news article, your own notes and memos, or even journal articles. Since it can be any source of text data, you can have a project that includes a large number of different types of source, which can be useful when putting your research together. This means that you can code things like your research questions, articles on theory, or even grey literature, and keep them in the same place as your research data.


The benefit of this approach is that you can quickly cross-reference your own research together with written articles, coding them on the same themes so you can compare them. However, there will be times that you only want to look at data from some of your sources. Perhaps you only want to look at journal articles written within a certain period, or look at respondents' data from just one city. By using the Source Properties in Quirkos, you can do all this and more: it allows you an essentially unlimited number of ways to describe the data. You can then use the query view to see results that match one or more properties, and even do comparisons. This Properties-Query combo is the best way to examine your coded qualitative data for trends and differences.

 

This article will outline a few different ways that you can use the source properties, and how to get the most use out of your research data and other sources.


When you bring a data source into Quirkos, the computer doesn't know anything about it. It's good practice to describe it, using what is sometimes called 'metadata' or 'data about data'. So for example, respondent data might have values for Age, Gender, Location, Occupation, Purchasing Habits... the list is endless. Research papers and textbooks will have properties like Journal Name, Publication Year, Volume, Author, Page Number etc.

 

Each of these categories in Quirkos is called a 'Property', and the possible data belonging to each property are called 'Values'. So for example, the Age of a respondent is a Property, and the value could be 42. Quirkos lets you have a practically unlimited number of Properties that describe all the sources in a project, and an unlimited number of Values.


The values can also be numerical (like age in years), discrete (like categories for Old, Young or 20-29) or even comments (like 'This person was uncomfortable revealing their age'). Properties can even have a mix of different data types as values.
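If it helps to picture this, the properties-and-values idea can be sketched as a simple dictionary per source. This is a purely illustrative model in Python: the source names, properties and values below are invented examples, and this is not how Quirkos actually stores its data internally.

```python
# Hypothetical sketch: each source carries a dictionary of
# Property -> Value pairs, and values can mix data types freely.
sources = {
    "Interview_01": {"Age": 42, "Gender": "Female", "Location": "Edinburgh"},
    "Interview_02": {"Age": "Uncomfortable revealing age",   # a comment as a value
                     "Gender": "Male", "Location": "Glasgow"},
    "Article_2015": {"Journal Name": "BMJ", "Publication Year": 2015},
}

# A Property is the key ("Age"); the Value is whatever is attached to it.
print(sources["Interview_01"]["Age"])        # 42
print(sources["Article_2015"]["Journal Name"])  # BMJ
```

The point of the sketch is just that numerical, categorical and free-text values can all live side by side under the same properties.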


To create properties and values in your project, click on the small 'grid' button on the top right corner of the screen. This toggles the properties view, and will show you the properties and values for the data source you are currently viewing. To look at a different source, just select it from the tabs at the bottom, or the complete list of sources in the source browser button (bottom left of the source column).


Once here, you can create a new property and value with the (+) button at the bottom of the column, or use the 'Properties and Values Editor' to add lots of data at once, or to remove or edit existing values. The Editor also gives you the option of rearranging Properties and Values, and changing a Property to be 'multiple-choice' will let you assign more than one Value to each Property (for example to show that a person has multiple hobbies).


There are also a couple of features to help speed up data entry: for example, the Properties Editor allows you to create Properties with pre-existing common values, such as 'Yes/No' properties or common Likert Agree-Disagree scales. To define values for a property, use the orange drop-down arrow next to each Property. When you click on this, you can see all the values that have already been defined, as well as the option to add a new value directly.


I always try and encourage people to also use the properties creatively. You can use them to quickly create groups of your sources, and explore them together. So you may create a property for 'Unusual case', select Yes for those sources, and see what makes them special. There might even be something you didn't collect survey data for, but which is a clear category in the text, such as how someone voted. You can make this a Property too, and easily see who these people are and what they said. They can also be process-based properties: 'Ones I haven't coded yet' or 'Ones I need to go over again'. Use the properties as a flexible way to manage and sort your data, in any way you see fit! You can of course create and remove properties and values at any stage of your project, and don't forget to describe the 'type' of source: article, transcript, notes etc.


When you want to explore the data by property, use the Query view. This lets you set up very simple filters that will show you results of coded data that comes from particular sources. You can even run two queries at once, and see the results side-by-side to compare them. While by default the [ = ] option will return sources that match the value, you can also use 'Not equal' [!=] and ranges for numerical or alphabetic values ( < > etc). It's also possible to add many queries together with a simple interface, to create complex filters. So for example you can return results from just people between the ages of 30-35, who are Male, and live in France OR Germany.
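The logic of combining several filters like this can be sketched in a few lines of Python. Again, this is only an illustrative model of what a query does conceptually (all respondent data and field names below are invented), not a representation of Quirkos' query engine.

```python
# Hypothetical respondent data, described by properties as above.
sources = [
    {"name": "R1", "Age": 32, "Gender": "Male",   "Country": "France"},
    {"name": "R2", "Age": 34, "Gender": "Male",   "Country": "Germany"},
    {"name": "R3", "Age": 31, "Gender": "Female", "Country": "France"},
    {"name": "R4", "Age": 33, "Gender": "Male",   "Country": "Spain"},
]

# "People aged 30-35, who are Male, and live in France OR Germany":
# a numeric range, an equality [=], and an OR over two values, ANDed together.
matches = [s["name"] for s in sources
           if 30 <= s["Age"] <= 35
           and s["Gender"] == "Male"
           and s["Country"] in ("France", "Germany")]

print(matches)  # ['R1', 'R2']
```

Each extra query line simply narrows the set of sources whose coded text is returned.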

 


This was a quick summary of how to describe your qualitative data in Quirkos: as always you can find more information in the video guides, and ask us a question in the forum.

 

 

Starting out in Qualitative Analysis

Qualitative analysis 101

 

When people are doing their first qualitative analysis project using software, it’s difficult to know where to begin. I get a lot of e-mails from people who want some advice in planning out what they will actually DO in the software, and how that will help them. I am happy to help out individually, because everyone’s project is different. However, here are a few pointers which cover the basics and can help demystify the process. These should actually apply to any software, not just Quirkos!

 

First off: what are you going to be able to do? In a nutshell, you will read through the sources, and for each section that is interesting to you and about a certain topic, you will ‘code’ or ‘tag’ that section of text to that topic. By doing this, the software lets you quickly see all the sections of text, the ‘quotes’ about that topic, across all of your sources. So you can see everything everyone said about ‘Politics’ or ‘Negative’ – or both.
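For the programmatically minded, what the software is doing behind the scenes can be imagined as keeping a list of tagged quotes, and pulling out everything that matches one or more topics. This is a hypothetical toy model in Python (all quotes, sources and theme names are invented), not the actual data format of Quirkos or any other package.

```python
# Toy model: each coded segment links a quote to the themes it was tagged with.
codings = [
    {"source": "Interview_01", "text": "I never trusted politicians.",
     "themes": {"Politics", "Negative"}},
    {"source": "Interview_02", "text": "Voting day felt exciting.",
     "themes": {"Politics", "Positive"}},
    {"source": "Interview_01", "text": "The waiting room was grim.",
     "themes": {"Negative"}},
]

def quotes_for(*themes):
    """Return every coded quote tagged with ALL of the given themes."""
    wanted = set(themes)
    return [c["text"] for c in codings if wanted <= c["themes"]]

print(quotes_for("Politics", "Negative"))  # ['I never trusted politicians.']
print(quotes_for("Negative"))  # both 'Negative' quotes, across sources
```

The value of coding is exactly this retrieval: one tagging pass, then instant access to everything said about any topic, or any overlap of topics.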

 

You can then look for trends or outliers in the project, by looking at just responses with a particular characteristic like gender. You’ll also be able to search for a keyword, and generate a report with all your coded sections brought together. When you come to write up your qualitative project, the software can help you find quotes on a particular topic, visualise the data or show sub-section analysis.  

 

So here are the basic steps:

 

1.       Bring in your sources.
I’m assuming at this stage that you have the qualitative data you want to work with already. This could be any source of text on your computer. If you can copy and paste it, you can bring it into Quirkos. For this example let’s assume that you have transcripts from interviews: this means that you have already done a series of interviews, transcribed them, and have them in a file (say a Word document or raw text file). I’d suggest that before you bring them in, just have a quick look through and correct them in a word processor for typos and misheard words. While you can edit the text in Quirkos later, in Word or an equivalent you have the advantage of spelling and grammar checkers.

 

Now, create a new, unstructured project in Quirkos, and save it somewhere locally on your computer. We don’t recommend you save directly to a network location, or USB stick, as if either of these go down, you will have a problem! Next, bring in the sources using the (+) Add Source button on the bottom right. You can bring in each file one at a time, or a whole folder of files in one go, in which case the file name will become the default source name. Don’t forget, you can always add more sources later, there is no need to bring in everything before you start coding. Now your project file (a little .qrk file you named) will contain all the text sources in one place. With Quirkos files, just backing up and copying this file saves the whole project.

 


2.       Describe your sources
It’s usually a good idea to describe some characteristics of your qualitative sources that you might use later to look for differences or similarities in the data. Often these are basic demographic characteristics like age or gender, but can also be things about the interview, such as the location, or your own notes.

 

To do this in Quirkos, click on the little grid button on the top right of the screen, and use the source properties. The first thing you can do here is change the name of the sources from the default (either a sequential number like ‘Source 7’, or the file name). You can create a property with the square [+] ‘Quickly add a new property’ button. The property (eg Gender) and a single value (eg Male) can be added here. The drop-down arrow next to that property can be used later to add extra values.

 

The reason for doing this is that you can later run ‘queries’ which show results from just certain sources that have properties you defined. So you can do a side-by-side comparison of coded responses from men next to women. Don’t forget, you can add properties at any time, so you can even create a variable for ‘these people don’t fit the theory’ after you’ve coded, and try and see what they are saying that makes them different.

 

 

3.       Create your themes
Whatever you call them: themes, nodes, bubbles, topics or Quirks, these are the categories of interest you want to collect quotes about from the text. There are two approaches here, you can try and create all the categories you will use before you start reading and coding the text (this is sometimes called a framework approach), or you can add themes as you go (grounded theory). (For much much more on these approaches, look here and here.)

 

In Quirkos, you create themes as coloured bubbles, which grow in size the more text is added. Just click on the grey (+) button on the top right of the canvas view to add a new theme. You can also change the name, colour, and level in this dialogue, or right click on the bubble and select ‘Quirk Properties’ at any time. To group, just drag and drop bubbles on top of each other.

 

 

4.       Do your coding
Essentially, the coding process involves finding every time someone said something about ‘Dieting’ and adding that sentence or paragraph to the ‘Dieting’ bubble or node. This is what is going to take the most time in your analysis (days or weeks) and is still a manual process. It’s best to read through each source in turn, and code it as you go.

 

However, you can also use the keyword search to look for words like ‘Diet’ or ‘eating’ and code from the results. This makes it quicker, but there is the risk of missing segments that use a keyword you didn’t think to search for like ‘cut-down’. The keywords search can help when you (inevitably) decide to add a new topic halfway through, and the first few interviews haven’t been coded for the new themes. You can use the search to look for related terms and find those new segments without having to go over the whole text again.
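A keyword search of this kind is conceptually just matching a list of related terms against the text. The sketch below is a hypothetical illustration in Python (the passage and the search terms are invented, and Quirkos' own search works through its interface, not code), showing both the speed-up and the risk: you only find the terms you thought to search for.

```python
import re

# An invented passage of transcript text.
text = ("I tried to cut down on snacks. My diet was strict, "
        "and eating out became rare.")

# Searching for several related terms at once catches more segments,
# but 'cut down' is only found because we remembered to include it.
terms = ["diet", "eating", "cut-down", "cut down"]
pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)

hits = [m.group(0) for m in pattern.finditer(text)]
print(hits)  # ['cut down', 'diet', 'eating']
```

This is why a keyword search is best used as a supplement to reading and coding each source, not a replacement for it.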

 

 

5.       Be iterative
Even if you are not using a grounded theory approach, going back over the data a second time, and rethinking codes and how you have categorised things can be really useful. Trust me: even if you know the data pretty well, after reading it all again, you will see some topics in a slightly different light, or will find interesting things you never thought would be there.

 

You may also want to rearrange your codes, especially if you have grouped them. Maybe the name you gave a theme isn’t quite right now: it’s grown or become more specific. Some vague codes like ‘Angry’ might need to be split out into ‘Irate’ and ‘Annoyed’. Depending on your approach, you will probably constantly tweak and adjust the themes and coding so they best represent the intersection of your research questions and data.

 

 

6.       Explore the data
Once your qualitative data is all coded, the big advantages of using CAQDAS software come into play. Using the database of your tagged text, you can slice it any way you like: by any of the source properties, by who did the coding or when, or by whether a result comes from a particular group of codes. This is done using the 'Query' views in Quirkos.
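Conceptually, a query over tagged text is a filter over a table of coded segments joined to the properties of each source. A rough sketch in plain Python — the segment fields, property names and helper function are all invented for illustration, not Quirkos' actual data format:

```python
# Hypothetical coded segments: one row per highlight, as a CAQDAS tool might store them.
segments = [
    {"source": "interview_01", "code": "Dieting",  "text": "I cut down on snacks."},
    {"source": "interview_01", "code": "Exercise", "text": "I walk to work now."},
    {"source": "interview_02", "code": "Dieting",  "text": "Smaller portions helped."},
]

# Source properties (e.g. demographic data attached to each interview).
properties = {
    "interview_01": {"gender": "female", "age_group": "30-45"},
    "interview_02": {"gender": "male",   "age_group": "45-60"},
}

def query(segments, properties, code=None, **props):
    """Return segments matching an optional code and any source-property filters."""
    return [s for s in segments
            if (code is None or s["code"] == code)
            and all(properties[s["source"]].get(k) == v for k, v in props.items())]

# e.g. everything coded 'Dieting' that was said by female respondents:
results = query(segments, properties, code="Dieting", gender="female")
```

The point of a query view is exactly this join: the coded quote on one side, and who said it (or who coded it, and when) on the other.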

 

In Quirkos there are also a lot of visualisation options that can show you the overall shape and structure of the project, the amount of coding, and connections that are emerging between the sources. You can then use these to help write your outputs, be they journal articles, evaluations or a thesis. Software will generate reports that let you share summaries of the coded data, and include key statistics and overviews of the project.


While it does seem like a lot of work to get to this stage, it can save so much time at the final stages of writing up your project, when you can call up a useful quote quickly. It also can help in the future to have this structured repository of qualitative data, so that secondary analysis or adding to the dataset does not involve re-inventing the wheel!

 

Finally, there is no one-size-fits-all approach, and it's important to find a strategy that fits with your way of working. Before you set out, talk to peers and supervisors, read guides and textbooks, and even go on training courses. While the software can help, it's not a replacement for considered thinking, and you should always have a good idea about what you want to do with the data in the end.

 

 

Qualitative evidence for evaluations and impact assessments

qualitative evidence for charities

For the last few months we have been working with SANDS Lothians, a local charity offering help and support for families who have lost a baby in miscarriage, stillbirth or soon after birth. They offer amazing services, including counselling, peer discussion groups and advice to health professionals, which can help ease the pain and isolation of a difficult journey.

 

We helped them put together a compilation of qualitative evidence in Quirkos. This has come from many sources they already have, but putting it together and pulling out some of the key themes means they have a qualitative database they can use for quickly putting together evaluations, reports and impact assessments. Many organisations already hold a lot of qualitative data, and this can easily become really valuable evidence.

 

First, try doing an ‘audit’ for qualitative data you already have. Look through the potential sources listed below (and any other sources you can think of), and find historical evidence you can bring in. Secondly, keep these sources in mind in day-to-day work, and remember to flag them when you see them. If you get a nice e-mail from someone saying they liked an event you ran, or a service they use, save it! It’s all evidence, and can help make a convincing case for funders and other supporters in the future.

 

Here are a few potential sources of qualitative feedback (and even quantitative data) you can bring together as evidence for evaluations and future work:

 

 

1.  Feedback from service users:

Feedback from e-mails is probably the easiest to pull together, as it is already typed up. Whenever someone compliments your services, thank them and store the comments as feedback for another day. It is easy to build up a virtual ‘guest-book’ in this way, and soon you will have dozens of supportive comments that you can use to show the difference your organisation makes. Even when you get phone calls, try and make notes of important things that people say. And don’t just keep positive comments: note suggestions, and if people say there is something missing – this can be evidence to funders that you need extra resources.

You can also specifically ask for stories from users you know well; these can form case studies to base a report around. If you have a specific project in mind, you can do a quick survey. Ask former users to share their experience on an issue, either by contacting people directly, or asking for comments through social media. By collating these responses, you can get quick support for the direction of a project or new service.

 


2. Social media

Comments and messages of support from your Facebook friends, Twitter followers, and pictures of people running marathons for you on Instagram are all evidence of support for the work you do. Pull out the nice messages, and don’t forget that the number of followers and likes you have is evidence of your impact and reach.

 


3. Local (and international) news

A lot of charities are good at running activities that end up in the local news, so keep clippings as evidence of the impact of your events, and the exposure you get. Funders like to work with organisations that are visible, so collect and collate these. There may also be news stories talking about problems in the community related to issues you work on; these can show the importance of the work you do.

 


4. Reports from local authority and national organisations

Keep an eye out for reports from local council meetings and public sector organisations that might be relevant to your charity. If there are discussions on an area you work on, it is another source of evidence about the need for your interventions.


There may also be national organisations or local partners that work in similar areas – again, they are likely to write reports highlighting the significance of your area, often with great statistics and links to other evidence. Share evidence and collaborate, and together the impact will be stronger!

 

5. Academic evidence

One of the most powerful ways you can add legitimacy to your impact assessment or funding applications is by linking to research on the importance of the problems you are tackling, or the potential benefits of your style of intervention. A quick search in Google Scholar (scholar.google.com) for keywords like ‘obesity’ and ‘intervention’ can find dozens of articles that might be relevant. The journal articles themselves will often be behind ‘paywalls’ that mean you can’t read or download the whole paper. However, the summary is free to read, and probably gives you enough information to support your argument one way or another. Just link to the paper, and refer to it as (Author’s surname, Year of publication), for example (Turner 2013).

 

It might also be worth seeking out a relationship with a friendly academic at a local university. Look through Google (or ask through your networks) for someone that works in your area, and contact them to ask for help. Researchers have their own impact obligations, so are sometimes interested in partnering with local charities to ensure their research is used more widely. It can be a mutually beneficial relationship…

 

 

 

Hopefully these examples will help you think through all the different things you already have around you that can be turned into qualitative evidence, and some things you can seek out. We will have more blog posts on our work with local charities soon, and how you can use Quirkos to collate and analyse this qualitative evidence.

 

 

Quirkos 1.3 is released!

Quirkos version 1.3 on Linux

We are proud to announce a major update for Quirkos that adds significant new features, improves performance, and provides a fresh new look. Major changes include:

  • PDF import
  • Greater ability to work with Levels to group and explore themes
  • Improved performance when working with large projects
  • New report generation and styling
  • Ability to copy and paste quotes directly from search and hierarchy views
  • Improved CSV export
  • New tree-hierarchy view for Quirks
  • Numerous bug fixes
  • Cleaner visual look

 

We’ve made a few tweaks to the way Quirkos looks, tidying up dialogue boxes and improving the general style and visibility, but maintaining the same layout, so there is nothing out of place for experienced users.

 


There is once again no change to Quirkos project files, so all versions of Quirkos can talk to each other with no issues, and there is no need to do anything to your files – just keep working with your qualitative data. The update is free for all paid users, and a simple process to install. Just download the latest version, install to the same directory as the last release, and the new version will replace the old. There is no need to update the licence code, and we recommend all users move to the new version as soon as they can to take advantage of the improvements!

 


Lots of people have requested PDF support, so that users can add journal articles and PDF reports into Quirkos, and we are happy to say this is now enabled. Please note that at the moment PDF support is limited to text only – some PDF files, especially from older journals, are not actually stored as text, but as a scanned image of text. Quirkos can’t read the text from these PDFs, and you will usually need to use OCR (optical character recognition) software to convert them (included in professional editions of Adobe Acrobat, for example).

 


We have always supported ‘Levels’ in Quirkos, a way to group Quirks that works across hierarchical groupings and parent-child relationships. Many people wanted to work with categories in this way, so we have improved the ways you can work with levels. They are now a refinable category in search results and queries, allowing you to generate a report containing data refined by level, and giving you a whole extra dimension for grouping your qualitative themes.

 


Reports have been completely revamped to improve how you share qualitative data, with better images and a simpler layout. There are now many more options for showing the properties belonging to each quote, streamlined and grouped section headings, better display of hierarchical groupings, and a much more polished, professional look. As always, our reports can be shared as PDF, interactive HTML, or customised using basic CSS and Javascript.

 


Although the canvas view with distinctive topic bubbles is a distinguishing feature in Quirkos, we know some people prefer to work with a more traditional tree hierarchy view. We’ve taken on board a lot of feedback, and reworked the ‘luggage label’ view to a tree structure, so that it works better with large numbers of nodes. The hierarchy of grouped codes in this view has also been made clearer.

 


There are also numerous bug fixes and performance improvements, fixing some issues with activation, improving the speed when working with large sources, and some dialogue improvements to the properties editor on OS X.

 

We are also excited to launch our first release for Linux! Just like all the other platforms, the functionality, interface and project files are identical, so you can work across platforms with ease. There will be a separate blog post about Quirkos on Linux tomorrow.

 


We are really excited about the improvements in the new version, so download it today, and let us know if you have any other suggestions or feedback. Nearly all of the features we have added have come from suggestions made by users, so keep giving us your feedback, and we will try and add your dream features to the next version...

 

 

The CAQDAS jigsaw: integrating with workflows

 

I’m increasingly seeing qualitative research software as the middle piece of a jigsaw puzzle that has three stages: collection, coding/exploring, and communication. These steps are not always clear cut, and generally there should be a fluid link between them. But the process of enacting each step is often quite distinct, and the more I think about the ‘typical’ workflow for qualitative analysis, the more I see these stages and, most critically, a need to be flexible and allow people different ways of working.

 

At any stage it’s important to choose the best tools (and approach) for the job. For qualitative analysis, people have so many different approaches and needs, that it’s impossible to impose a ‘one-size-fits-all’ approach. Some people might be working with multimedia sources, have anything from 3 to 300 sources, and be using various different methodological and conceptual approaches. On top of all this are the more mundane, but important practical limitations, such as time, familiarity with computers, and what software packages their institution makes available to them.

 

But my contention is that the best way to facilitate a workflow is not to be a jack-of-all-trades, but a master of one. CAQDAS (Computer Assisted Qualitative Data AnalysiS) software should focus on what it does best, aiding the analysis process, and recognise that it has to co-exist with many other software packages.

 

For the first stage, collection and transcription of data, I would generally not recommend people use any CAQDAS package. If you are recording interviews, these are best done on a Dictaphone, and transcribing them is best done in proper word-processing software. While it’s technically possible to type directly into nearly all CAQDAS software tools (including Quirkos), why would you? Nearly everyone has access to Word or LibreOffice, which give excellent spell-checking tools for typos, and much more control over saving and formatting each source. Even if you are working with multimedia data, you are probably going to trim audio recordings in Audacity (or Pro-Tools), and resize and colour correct pictures in Photoshop.

 

So I think that qualitative analysis software needs to recognise this, and take in data from as many different sources as possible, and not try and tie people to one format or platform. It’s great to have tight integration with something like Evernote or SurveyMonkey, but both of those cost extra, and aren’t always the right approach for people, so it’s better to be input-agnostic.

 

But once you’ve got data in, it’s stage 2 where qualitative software shines. CAQDAS software is dedicated to the coding and sorting of qualitative data, and has tools and interfaces specifically designed to make this part of the process easier and quicker. However, that’s not how everyone wants to work. Some people are working in teams where not everyone has access to CAQDAS, and others prefer to use Word and Excel to sort and code data. That should be OK too, because for most people the comfortable and familiar way is the easiest path, and what is easy to forget as a software developer is that people want to focus on the data and findings, not the tools and process.

 

So CAQDAS should ideally be able to bring in data coded in other ways, for people that prefer to just do the visualisation and exploration in qualitative software. But CAQDAS should also be able to export coded data at this stage, so that people can play with the data in other ways. Some people want to do statistical analysis, so it should connect with SPSS or R. And it should also be able to work with spreadsheet software, because so many people are familiar with it, and it can be used to make very specific graphs.
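Getting coded data into SPSS, R or a spreadsheet usually just means flattening the coded segments into CSV, which all of those tools can read. A small illustrative sketch — the column names and segment data are my own, not any package's standard export format:

```python
import csv
import io

# Hypothetical coded segments to hand over to Excel, SPSS or R.
segments = [
    {"source": "interview_01", "code": "Dieting",  "text": "I cut down on snacks."},
    {"source": "interview_02", "code": "Exercise", "text": "I walk to work now."},
]

# Write one CSV row per coded segment; in practice this would go to a .csv file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["source", "code", "text"])
writer.writeheader()
writer.writerows(segments)

csv_text = buf.getvalue()
```

A flat file like this is deliberately the lowest common denominator: R's `read.csv`, SPSS and every spreadsheet program can take it from here.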

 

Again, it’s possible to do all of this in most CAQDAS software, but I’ve yet to see any package that gives the statistical possibilities and rigour that R does, and while graphs seem to get prettier with every new version, I still prefer the greater customisation and export options you get in Excel.

 

The final stage is sharing and communicating, and once again this should be flexible too. Some people will have to get across their findings in a presentation, so generate images for this. Many will be writing up longer reports, so export options for getting quotes into word-processing software are essential again. At this stage you will (hopefully) be engaging with an ever widening audience, so outputs need to be completely software agnostic so everyone can read them.

 

When you start seeing all the different tools that people use in the course of their research project, this concept of CAQDAS being a middle piece of the puzzle becomes really clear, and allowing people flexibility is really important. Developing CAQDAS software is a challenge, because everyone has slightly different needs. But the solution usually seems to be more ways in, and more ways out. That way people can rely on the software as little or as much as they like, and always find an easy way to integrate with all the tools in their workflow.

 

I was inspired to write this by reading a recent article on the Five-Level QDA approach, written by Christina Silver and Nick Woolf. They outline a really strong ‘Analytic Planning Worksheet’ that is designed to get people to stop and break down their analytical tasks before they start coding, so that they can identify the best tools and process for each stage. This helps researchers create a customisable workflow for their projects, which they can use with trainers to identify which software is best for each step.

 

Next week, I’m going to write a blog post more specifically about the Five-level QDA, and pedagogical issues that the article raises about learning qualitative research software. Watch this space!

 

 

Participatory Qualitative Analysis

laptops for qualitative analysis

 

Engaging participants in the research process can be a valuable and insightful endeavour, leading to researchers addressing the right issues, and asking the right questions. Many funding boards in the UK (especially in health) make engaging with members of the public, or targets of the research a requirement in publicly funded research.

 

While there are similar obligations to provide dissemination and research outputs that are targeted at ‘lay’ members of the public, the engagement process usually ends in the planning stage. It is rare for researchers to have participants, or even major organisational stakeholders, become part of the analysis process, and use their interpretations to translate the data into meaningful findings.

 

With surprisingly little training, I believe that anyone can do qualitative analysis, and get engaged in actions like coding and topic discovery in qualitative data sets.

 

I’ve written about this before, but earlier this year we actually had a chance to try this out with Quirkos. It was one of the main reasons we wanted to design new qualitative analysis software: existing solutions were too difficult to learn for non-expert researchers (and for quite a lot of experienced researchers too).

 

So when we did our research project on the Scottish Referendum, we invited all of the participants to come along to a series of workshops and try analysing the data themselves. Out of 12, only 3 actually came along, and none of them had any experience of doing qualitative research before.

 

And they were great at it!

 

In a two hour session, respondents were given a quick overview of how to do coding in Quirkos (in just 15 minutes), and a basic framework of codes they could use to analyse the text. They were free to use these topics, or create their own as they wished – all 3 participants chose to add codes to the existing framework.

 

They were each given transcripts from someone else’s anonymised interview: as these were group sessions, we didn’t want people to be identified while coding their own transcript. Each was a 30-minute interview, around 5000 words in length. In the two hour session, all participants had coded one interview completely, and done most (or all) of the second. One participant was so engrossed in the process, he had to be sent home before he missed his dinner, but took a copy of Quirkos and the data home to keep working on his own computer.

 

The graph below shows how quickly the participants learnt how to code. The y axis shows the number of seconds between each ‘coding event’: every time someone coded a new piece of text (numbered sequentially along the x axis). The time taken to code starts off high, with questions and missteps meaning each event takes a minute or more. However, the time between events quickly decreases, and on average respondents ended up adding a code every 20 seconds. This is after any gaps longer than 3 minutes have been removed – these are assumed to be breaks for tea or debate! Each user made at least 140 tags, assigning text to one or more categories.
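The timing figures described above can be computed from raw event timestamps: take the gap between successive coding events, drop any gap over three minutes as a break, and average the rest. A sketch with made-up timestamps — the real session data will of course differ:

```python
# Hypothetical timestamps (seconds from session start) of each coding event.
events = [0, 70, 130, 150, 165, 450, 465, 480, 500]

# Gaps between successive coding events.
gaps = [b - a for a, b in zip(events, events[1:])]

# Drop gaps longer than 3 minutes (assumed to be breaks for tea or debate).
working_gaps = [g for g in gaps if g <= 180]

# Average seconds per coding event, breaks excluded.
mean_gap = sum(working_gaps) / len(working_gaps)
print(f"average seconds per coding event: {mean_gap:.1f}")  # 30.7 for these invented numbers
```

With these invented timestamps the 285-second gap is discarded as a break, which is exactly the adjustment described for the study data above.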

 

 

So participants can be used as cheap labour to speed up or triangulate the coding process? Well, it can be more than this. The topics they chose to add to the framework (‘love of Scotland’, ‘anti-English feelings’, ‘Scottish Difference’) highlighted their own interpretations of the data, showing their own opinions and variations. It also prompted discussion with other coders, about what they thought about the views of people in the dataset, how they had interpreted the data:


“Suspicion, oh yeah, that’s negative trust. Love of Scotland, oh! I put anti-English feelings which is the opposite! Ours are like inverse pictures of each other’s!”

 

Yes: obviously we recorded and transcribed the discussions and reflections, and analysed them in Quirkos! And these revealed that people expressed familiar issues with reflexivity, reliability and process that could have come from experienced qualitative researchers:


“My view on what the categories mean or what the person is saying might change before the end, so I could have actually read the whole thing through before doing the comments”


“I started adding in categories, and then thinking, ooh, if I’d added that in earlier I could actually have tied it up to such-and-such comment”


“I thought that bit revealed a lot about her political beliefs, and I could feel my emotions entering into my judgement”


“I also didn’t want to leave any comment unclassified, but we could do, couldn’t we? That to me is about the mechanics of using the computer, ticky box thing.”

 

This is probably the most useful part of the project to a researcher: the input of participants can be used as stimulus for additional discussion and data collection, or to challenge the way researchers do their own coding. I found myself being challenged about how I had assigned codes to controversial topics, and researchers could use a more formal triangulation process to compare coding between researchers and participants, thus verifying themes, or identifying and challenging significant differences.

 

Obviously, this is a tiny experimental project, and the experience of 3 well educated, middle-class Scots should not be taken to mean that anyone can (or would want to) do this kind of analysis. But I believe we should try this kind of approach whenever it is appropriate. For most social research, the experts are the people who are always in the field: the participants who are living these lives every day.

 

You can download the full report, as well as the transcripts and coded data as a Quirkos file from http://www.quirkos.com/workshops/referendum/

 

 

Engaging qualitative research with a quantitative audience.

graphs of quantitative data in media

 

The last two blog posts were based on a talk I was invited to give at ‘Mind the Gap’, a conference organised by MDH RSA at the University of Sheffield. You can find the slides here, but they are not very text heavy, so don’t read well without audio!

 

The two talks which preceded me, by Professors Glynis Cousin and John Sandars, echoed quite a few of the themes. Professor Cousin spoke persuasively about reductionism in qualitative research, in her talk on the ‘Science of the Singular’ and the significance that can be drawn from a single case study. She argued that by necessity all research is reductive, and even ‘fictive’, but that doesn’t restrict what we can interpret from it.

 

Professor Cousin described how both Goffman (1961) and Ken Kesey (1962) did extensive ethnographies of mental asylums at about the same time, but one wrote a classic academic text, and the other the ‘fictive’ novel, One Flew Over the Cuckoo’s Nest. One could argue that both were very influential, but the different approaches to ‘writing-up’ appeal to different audiences.

 

That notion of writing for your audience was evident in Professor Sandars’ talk, and his concern for communication methods that have the most impact. Drawing from a variety of mixed-method research projects in education, he talked about choosing a methodology that has to balance the approach the researcher desires in their heart with what the audience will accept. It is little use choosing an action-research approach if the target audience (or journal editors) find it inappropriate in some way.

 

This sparked some debate about how well qualitative methods are accepted in mainstream journals, and whether there is a preference towards publishing research based on quantitative methods. Some felt that authors are obliged to take a defensive stance when describing qualitative methods, eating further into the limited word counts that already cut so much detail from qualitative dissemination. The final speaker, Dr Kiera Barlett, also touched on this issue when discussing publication strategies for mixed-method projects. Should you have separate qualitative and quantitative papers for respective journals, or try and have publications that draw from all aspects of the study? Obviously this will depend on the field, findings and methods chosen, but it again raised a difficult issue.

 

Is it still the case that quantitative findings have more impact than qualitative ones? Do journal articles, funders and decision makers still have a preference for what are seen as more traditional statistical based methodologies? From my own anecdotal position I would have to agree with most of these, although to be fair I have seen little evidence of funding bodies (at least in the UK and in social sciences and health) having a strong preference against qualitative methods of inquiry.

 

However, during the discussion at the conference it was noted that the preference for ‘traditional’ methods is not just restricted to journal reviewers but extends to the culture of disciplines at large. This is often for good reason, and not restricted to a qualitative/quantitative divide: particular techniques and statistical tests tend to dominate, partly because they are well known. This has a great advantage: if you use a common indicator or test, people probably have a better understanding of the approach and limitations, so can interpret the results better, and compare with other studies. With a novel approach, one could argue that readers also need to go and read all the references in the methodology section (which they may or may not bother to do), and that comparisons and research synthesis are made more difficult.

 

As for journal articles, participants pointed out that many online and open-access journals have removed word limits (or effectively done so by allowing hyperlinked appendices), making publication of long, text-based selections of qualitative data easier. However, this doesn’t necessarily increase palatability, and that’s why I want to get back to this issue of considering the audience for research findings, and choosing an appropriate medium.

 

It may be easy to say that if research is predominantly a quantitative world, quantifying, summarising, and statistically analysing qualitative data is the way to go. But this is abhorrent, not just to the heart of a qualitative researcher; it is also deceptive, imposing a quantitative fiction on a qualitative story. Perhaps the challenge is to think of approaches outside the written journal article. If we can submit a graphic novel as a PhD, or explain our research as a dance, we can reach new audiences, and engage in new ways with existing ones.

 

Producing graphs, pie charts, and even the bubble views in Quirkos are all ways that essentially summarise, quantify and potentially trivialise qualitative data. But if this allows us to access a wider audience used to quantitative methods, it may have a valuable utility, at least in providing that first engagement that makes a reader want to look in more detail. In my opinion, the worst research is that which stays unread on the shelf.

 

 

Our hyper-connected qualitative world

qualitative neurons and connections

 

We live in a world of deep qualitative data.

 

It’s often proposed that we are very quantitatively literate. We are exposed to numbers and statistics frequently in news reports, at work, when driving, with fitness apps etc. So we are actually pretty good at understanding things like percentages, fractions, and making sense of them quickly. It’s a good reason why people like to see graphs and numerical summaries of data in reports and presentations: it’s a near universal language that people can quickly understand.

 

But I believe we are also really good at qualitative understanding.

 

Bohn and Short, in a 2009 study, estimated that “The average American consumes 100,500 words of information in a single day”, made up of conversations, TV shows, news, written articles, books… It sounds like a staggering amount of qualitative data to be exposed to: basically a whole PhD thesis every single day!

 

Obviously, we don’t digest and process all of this; people are extremely good at filtering this data: ignoring adverts, scanning websites to get to the articles we are interested in and skim reading those, and of course summarising the gist of conversations with a few words and feelings. That’s why I argue that we are nearly all qualitative experts, summarising and making connections with qualitative life all the time.


And those connections are the most important thing, and the skill that socially astute humans do so well. We can pick up on unspoken qualitative nuances when someone tells us something, and understand the context of a news article based on the author and what is being reported. Words we hear such as ‘economy’ and ‘cancer’ and ‘earthquake’ are imbued with meaning for us, connecting to other things such as ‘my job’ and ‘fear’ and ‘buildings’.

 

This neural network of meaning is a key part of our qualitative understanding of the world, and whether or not we want to challenge it with some type of Derridean deconstruction of our associations between language and meaning, these connections form a key part of our daily prejudices and understanding of the world in which we live.

 

For me, a key problem with qualitative analysis is that it struggles to preserve or record these connections and lived associations. I touched on this issue of reductionism in the last blog post on structuring unstructured qualitative data, but it can be considered a major weakness of qualitative analysis software. Essentially, one removes these connected meanings from the data, and reduces it to a binary category, or at best, represents it on a scale.

 

Incidentally, this debate about scaling and quantifying qualitative data has been going on for at least 70 years, since Guttman, who even in a 1944 article noted that there had been ‘considerable discussion concerning the utility of such orderings’. What frustrates me at the moment is that while some qualitative analysis software can help with scaling this data, or even presenting it on a 2- or 3-dimensional scale by applying attributes such as weighting, it is still a crude approximation of the complex neural connections of meaning that deep qualitative data possesses.

 

In my experiments getting people with no formal qualitative or research experience to try qualitative analysis with Quirkos, I am always impressed at how quickly people take to it, and can start to code and assign meaning to qualitative text from articles or interviews. It’s something we do all the time, and most people don’t seem to have a problem categorising qualitative themes. However, many people soon find the activity restrictive (just like trained researchers do) and worry about how well a basic category can represent some of the more complex meanings in the data.

 

Perhaps one day there will be practical computers and software that ape the neural networks that make us all such good qualitative beings, and can automatically understand qualitative connections. But until then, the best way of analysing data seems to be to tap into any one of these freely available neural networks (i.e. a person) and use their lived experience in a qualitative world in partnership with a simple software tool to summarise complex data for others to digest.

 

After all, whatever reports and articles we create will have to compete with the other 100,000 words our readers are consuming that day!

 

 

Structuring unstructured data

 

The terms ‘unstructured data’ and ‘qualitative data’ are often used interchangeably, but unstructured data is becoming more commonly associated with data mining and big data approaches to text analytics. Here the comparison is drawn between databases, where we have defined fields and known values, and the loosely structured (especially to a computer) world of language, discussion and comment. A qualitative researcher lives in a realm of unstructured data: the person they are interviewing doesn’t have a happy/sad sign above their head, so the researcher (or friend) must listen to and interpret their interactions and speech to make a categorisation based on the available evidence.


At their core, all qualitative analysis software systems are based around defining and coding: selecting a piece of text, and assigning it to a category (or categories). However, it is easy to see this process as ‘reductionist’: essentially removing a piece of data from its context and defining it as a one-dimensional attribute. This text is about freedom. This text is about liberty. Regardless of the analytical insight of the researcher in deciding what the relevant themes should be, and then filtering a sentence into a category, the final product appears to be a series of lists of sections of text.


This process leads to difficult questions such as, is this approach still qualitative? Without the nuanced connections between complicated topics and lived experiences, can we still call something that has been reduced to a binary yes/no association qualitative? Does this remove or abstract researchers from the data? Isn't this a way of quantifying qualitative data?


While such debates are similarly multifaceted, I would usually argue that this process of structuring qualitative data does begin to categorise and quantify it, and it does remove researchers from their data. But I also think that for most analytical tasks this is OK, if not essential! Lee and Fielding (1996) say that “coding, like linking in hypertext, is a form of data reduction, and for many qualitative researchers is an important strategy which they would use irrespective of the availability of software”. When a researcher distils a life into a one-year ethnography, or a one-hour interview, that is a form of data reduction. So is turning an audio recording into a transcript, and so is skim-reading and highlighting printed versions of that text.


It’s important to keep an eye on the end game for most researchers: producing a well-evidenced, accurate summary of a complex issue. Most research, whether a formula to predict the world or a journal article describing it, is a communication exercise that (purely by the laws of entropy, if not practicality) must be briefer than the sum of its parts. Yet we should also be much more aware that we are doing this and, together with our personal reflexivity, think about our methodological reflexivity, acknowledging what is being lost or given prominence in our chosen process.


Our brains are extremely good at comprehending the complex web of qualitative connections that make up everyday life, and even for experienced researchers our intuitive insight into these processes often seems to go beyond any attempt to rationalise it. A structuralist approach to qualitative data can not only help as an aide-mémoire, but also demonstrate our process to others, and challenge our own assumptions.


In general I would agree with Kelle (1997) that “the danger of methodological biases and distortion arising from the use of certain software packages is overemphasized in current discussions”. It’s not the tool, it’s how you use it!

How to set up a free online mixed methods survey

It’s quick and easy to set up an online survey to collect feedback or research data in a digital format, meaning you can get straight to analysing the data. Unfortunately, most packages like SurveyMonkey, SurveyGizmo and Kwiksurveys, while all compatible with Quirkos, require a paid subscription before you can actually export any of your data and analyse it.

 

However, there are two great free platforms we recommend that allow you to run a mixed-method survey and easily bring all your data into Quirkos to explore and analyse. In this article, we'll go through a step-by-step guide to setting up a survey in eSurv, and exporting the data to Quirkos.

 

eSurv.org

This is a completely free platform, funded by contributions from universities, but available for any use. There are no locked features or restrictions on responses, and it has an easy-to-use online survey designer. There are customisable templates, and you can have custom exit pages too.

Once you have signed up for an account, you will be presented with the screen above, and can get going with your first survey. Just make sure you click on the verification link in the e-mail sent to you, which will give you access to all the features. The first page allows you to name the survey and set up the title and page description, each of which has options for changing the text formatting.

 

The next screen shows a series of templates you can use to set the style of your survey. Choose one that you like the look of, and you have the option of customising it further with your logo or other colour schemes. Click next.

Now you are ready to start adding questions.

 

The options box on the right shows all the different types of questions available, and each one has many customisation options at the bottom of the screen. For example, the single text box option can be made to accept only numerical answers, and you can change the maximum length and display size of the box. Any question can be made mandatory, with a custom 'warning' shown if someone does not fill in that field.

 

The drag and drop ranking feature is a nice option, and pretty much all the multiple-choice and closed question formats you might want are represented.

 

When you have chosen the title and settings for each question, you can click on the 'Save & Add Next' button on the top right to quickly add a series of questions, or 'Save & Close' if you are done.

 

There are also Logic options to show certain questions only in response to particular answers (for example, 'Please tell us why you didn't like this product'). It is of course possible to edit the questions and rearrange them using the drag icon in the main questionnaire overview.

 

You can test the survey to see how it looks, and when happy click the launch button to make it available to respondents. This also gives you a QR code linking to the survey, allowing smartphone users to complete the survey from a link on posters or printed documents. While you can customise the link title, the web address is always in the format of "https://eSurv.org?u=survey_title".

 

You can have a large number of surveys on the go at once, and manage them all from the 'Home' screen, which also shows you how many responses you have had.

 

Once you are ready to analyse your data, open the survey and click on the export button. This gives the options above to select which questions and respondents you want to export, and a date range (useful if you only want to put in new responses). For best use in Quirkos, select the Compact and .csv File format options, and then click download.

 

exported csv file in excel

The only step you will probably want to take before bringing the data into Quirkos is to remove the first row (highlighted above). By default eSurv creates a row which numbers the questions, but it’s usually easier to have the questions themselves as the titles, not just the numbers. Just delete the first row starting with ‘Question’: this removes the question numbers, and Quirkos will then see the first row with the actual question names. Save any changes in Excel/LibreOffice, making sure you use the CSV (Comma delimited) format; ignore the warning that ‘some features may be lost’ and choose ‘Yes’ to keep using that format. You can also remove any columns here that you don’t want (for example the e-mail address, if it was not provided), but you can also do this in Quirkos.
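If you prefer to strip that numbering row with a script rather than a spreadsheet, a minimal Python sketch might look like the following. The filenames and sample rows here are placeholders standing in for a real eSurv export, and the check for a row starting with ‘Question’ is my reading of the format described above.

```python
import csv

# A stand-in for the eSurv export: the first row only numbers the
# questions, the second row holds the actual question titles.
sample = [
    ["Question 1", "Question 2", "Question 3"],
    ["Name", "Age", "How was the service?"],
    ["Alice", "34", "Very friendly staff"],
]
with open("esurv_export.csv", "w", newline="") as f:
    csv.writer(f).writerows(sample)

# Drop the numbering row so the question titles become the header
# row that Quirkos will see on import.
with open("esurv_export.csv", newline="") as f:
    rows = list(csv.reader(f))
if rows and rows[0][0].startswith("Question"):
    rows = rows[1:]  # remove the question-number row
with open("esurv_clean.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The cleaned file keeps the question titles as the first row, which is what the Quirkos CSV import expects.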

 

In Quirkos, start a new Structured Questions project, and select the Import from CSV option from the bottom right 'Add Source' (+) button. Select the file you saved in the previous step, and you will get a preview of the data looking like the screenshot above. Here you have the option to choose which question you want to use for the Source Title (say a name, or respondent ID) and any you might want to ignore, such as IP address. Then make sure that open ended questions are selected as Question, and Property is associated with any discrete or numerical categories. Click import, and voilà!

 

Should you get new responses, you can add them in the same way to an existing project with the same structure, just make sure when exporting from eSurv that you select the newest responses to export, and don't duplicate older ones.

 

Now you can use Quirkos to go through and code any of the qualitative text elements, while using the properties and quantitative data to compare respondents and generate summaries. So, for example, you can see the comments that people with negative ratings made side by side with comments from positive feedback, or compare people from different age ranges.
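Outside Quirkos, the same stratification idea can be sketched with a simple grouping. This is a toy illustration with invented data, not anything from the product:

```python
from collections import defaultdict

# Toy survey rows: (rating property, open-ended comment)
responses = [
    ("negative", "The queue was far too long"),
    ("positive", "Staff were friendly and helpful"),
    ("negative", "Couldn't find parking anywhere"),
    ("positive", "Loved the venue"),
]

# Group qualitative comments by the discrete rating property, so
# negative and positive feedback can be read side by side.
by_rating = defaultdict(list)
for rating, comment in responses:
    by_rating[rating].append(comment)

for rating in sorted(by_rating):
    print(rating, "->", by_rating[rating])
```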

 

If you need even more customisation of your survey, the open-source platform LimeSurvey, while not as easy to use as eSurv, gives you a vast array of customisability options. LimeService.com allows 25 responses a month for free, but we have our own unrestricted installation available free of charge to our customers – just ask if you need it!

 

P.S. I've also done a video tutorial covering setting up and using eSurv, and exporting the results into Quirkos.

Bringing survey data and mixed-method research into Quirkos

quirkos spreadsheet

 

Later today we are releasing a small update for Quirkos, which adds an important feature users have been requesting: the ability to quickly bring in quantitative and qualitative data from any spreadsheet, or online survey tool such as SurveyMonkey or LimeSurvey.

 

Now users can bring in mixed-method data in one click, with the ability to analyse and compare qualitative and quantitative data together. If you have a survey with discrete and quantitative data (such as age, location, or Likert scales), you can use it to stratify and compare open-ended qualitative answers (the 'Any other comments?' or 'How can we improve this service?' boxes).

 

Not only will this make bringing data into Quirkos a lot quicker, it will provide a neat workflow for people wanting to understand the qualitative aspects of their data. Now they can code and develop frameworks to understand comments and written data sources, which may hold the key to understanding something important that isn’t shown in the quantitative data.

 

import csv dialogue quirkos

In Quirkos, this functionality is provided as a new option in the ‘Add Source’ button on the bottom left of a project. Users should create a new ‘Structured Question’ project, which presents each question as a section in the qualitative text of the source. The discrete and quantitative data will be imported as source properties which describe each response in the survey.

 

To bring spreadsheet or tabulated data into Quirkos, you need to have it in CSV (comma-separated values) format, a standard file format that most platforms can use to export data. If your data collection workflow doesn't support CSV directly, you can usually import the data into Excel or another spreadsheet package such as Google Docs or LibreOffice Calc, all of which let you save a table of data in CSV format; select the default comma-delimited, not tab-separated, format. The first row should contain the titles you want the properties and questions to have.

 

Quirkos will try to guess automatically which columns represent discrete properties (such as name or age) and which ones contain sentences. It does this in a simple way: any column whose title is phrased as a question, such as “How did you feel about this event?”, or whose answers contain spaces like a sentence, will become a long-text qualitative question and answer. Otherwise, it will suggest importing the column as a source property, for a value like age or name. If this does not come through as you wish, there is a drop-down option to change how each column is imported.
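As I understand it, that guessing rule could be sketched roughly like this. This is a simplified illustration of the heuristic described above, not Quirkos's actual code, and the function name is my own:

```python
def guess_column_type(title, answers):
    """Guess whether a CSV column holds qualitative text
    ('Question') or discrete data ('Property'): question-style
    titles, or answers containing spaces like sentences,
    suggest long-form qualitative text."""
    if title.strip().endswith("?"):
        return "Question"
    if any(" " in str(a).strip() for a in answers):
        return "Question"
    return "Property"

guess_column_type("Age", ["34", "29"])                       # Property
guess_column_type("How did you feel about this event?", [])  # Question
guess_column_type("Comments", ["It was really fun"])         # Question
```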

 

This provides four options. Source Title is the name you wish to give each source in the project; this might be a name or an ID number, and you can only select one field to be the source title. Property is for source properties, the quantitative or discrete data that describes the source. Question is for the open-ended qualitative text sections. Finally, there is an ‘Ignore’ option for any field or value you don’t want to bring into the project.


It is possible to keep adding more and more sources in this way, for example if you had later additions to a survey. However, this will also create duplicates of data already in the project (in case something changed), so make sure that a new CSV file being imported doesn’t contain the old responses.

 

If you already have Quirkos, all you need to do is download the new installer for version 1.2 for Windows or Mac, and follow the install procedure. This will install the new version over the old one (v1.1) and there will be no changes to your shortcuts, projects or license. The update is free for everyone, even if you are using the free-trial, and once again, there are no compatibility problems with older project files.

 

Qualitative evaluations: methods, data and analysis

reports on a shelf

Evaluating programmes and projects is an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure that there are defined and measurable outcomes.

 

While most evaluations generally include numerical analysis, qualitative data is often used alongside the quantitative to show the richness of project impact, and to put a human voice into the process. Especially when a project doesn’t meet its targets or have the desired level of impact, comments from project managers and service users usually give the most insight into what went wrong (or right) and why.

 

For smaller pilot and feasibility projects, qualitative data is often the mainstay of the evaluation, when numerical data wouldn’t support statistical analysis, or when it is too early in a programme to measure the intended impact. For example, a programme looking at obesity reduction might not be able to demonstrate a lower number of diabetes referrals at first, but qualitative insight in the first year or few months of the project might show how well messages from the project are being received, or whether targeted groups are talking about changing their behaviour. When goals like this are long term (and in public health and community interventions they often are) it’s important to continuously assess the precursors to impact – namely engagement – and this is usually best done in a qualitative way.

 

So, what is best practice for qualitative evaluations? Fortunately, there are some really good guides and overviews that can help teams choose the right qualitative approach. Vaterlaus and Higginbotham give a great overview of qualitative evaluation methods, while Professor Frank Vanclay talks at a wider level about qualitative evaluations and innovative ways to capture stories. There is also a nice ‘tick-box’ style guide produced by the old Public Health Resource Unit, which can still be found at this link. Essentially, the tool suggests 10 questions that can be used to assess the quality of a qualitative evaluation – really useful when looking at evaluations that come from other fields or departments.

 

But my contention is that the appraisal tool above is best implemented as a guide for producing qualitative evaluations. If you start by considering the best approach, how you are going to demonstrate rigour, and how to choose appropriate methods and recruitment, you’ll get a better report at the end of it. I’d like to discuss and expand on some of the questions used to assess the rigour of the qualitative work, because this is something that often worries people about qualitative research, and these steps help demystify good practice.

 

  1. The process: Start by planning the whole evaluation from the outset: What do you plan to do? All the rest will then fall into place.
     
  2. The research questions: what are they and why were these chosen? Are the questions going to give the evaluation the data it needs, and will the methods capture that correctly?
     
  3. Recruitment: who did you choose, and why? Who didn’t take part, and how did you find people? What gaps are there likely to be in representing the target group, and how can you compensate for this? Were there any ethical considerations, how was consent gained, and what was the relationship between the participants and the researcher(s)? Did they have any reason to be biased or not truthful?
     
  4. The data: how did you know that enough had been collected? (Usually when you are starting to hear the same things over and over – saturation) How was it recorded, transcribed, and was it of good quality? Were people willing to give detailed answers?
     
  5. Analysis: make sure you describe how it was done, and what techniques were used (such as discourse or thematic analysis). How does the report choose which quotes to reproduce, and are there contradictions reported in the data? What was the role of the researcher – should they declare a bias, and were multiple views sought in the interpretation of the data?
     
  6. Findings: do they meet the aims and research questions? If not, what needs to be done next time? Are there clear findings and action points, appropriate to improving the project?

 

Then the final step for me is the most important of all: SHARE! Don't let it end up on a dusty shelf! Evaluations are usually seen as a tedious but necessary internal process, but they can be so useful as case studies and learning tools in organisations and groups you might never have thought of. This is especially true if there are things that went wrong – help someone in another local authority avoid making the same mistakes!

 

At the moment the best UK repositories of evaluations are based around health and economic benefits, but that doesn’t stop you putting the report on your organisation’s website – if someone is looking for a similar project, search engines will do the legwork for you. That evaluation might save someone a lot of time and money. And it goes without saying: look for any similar work before you start a project – you might get some good ideas, and stop yourself falling into the same pitfalls!

 

Qualitative research on the Scottish Referendum using Quirkos

quirkos overlap or cluster view of bias in the media

 

We've now put up the summary report for our qualitative research project on the Scottish Referendum, which we analysed using Quirkos. You can download the PDF of the 10 page report from the link above, I hope you find something interesting in there! The full title is "Overview of a qualitative study on the impact of the 2014 referendum for Scottish independence in Edinburgh, and views of the political process" and here's the summary findings:

 

"The interviews revealed a great depth of understanding of a wide range of political issues, and a nuanced understanding of many arguments for and against independence. Many people described some uncertainty about which way to vote, but it did not seem that anyone had changed their mind over the course of the campaigning.


There was a generally negative opinion of the wider political system, especially Westminster, from both Yes and No voters. Participants had varying opinions of political leaders and parties, even though some were active members of political parties. Yes and No supporters both felt that the No campaign was poorly run and used too many negative messages; this feeling was especially strong among No voters.
The most important concerns for respondents were public finances and the financial stability of an independent Scotland; the issue of currency was often mentioned, though frequently with distrust of politicians' comments on the subject. Westminster-induced austerity and the future of the NHS also featured as important policy considerations.


People expressed generally negative views of the media portrayal of the referendum, most feeling that newspapers and especially the BBC had been biased, although No supporters were more likely to find the media balanced.


In general, people felt that the process had been good for Scotland, even No supporters, and there was general support for greater devolution of powers. People had seen the process as being very positive for the SNP, and nearly all respondents felt the Yes campaign had been well run. People expressed a negative view of the Labour party during the campaign, although voters also mentioned strong criticism of Labour’s wider policy position in recent years. People had generally positive opinions of Nicola Sturgeon, mixed reactions to Alex Salmond, and generally negative comments on Ed Miliband’s public image, while also stating that this should not be an important factor for voters. People believed that the polls would be correct in predicting a swing from Labour to the SNP in Scotland.


Many expressed a belief that the level of debate in Edinburgh had been good, and that the Yes campaign was very visible. Respondents were positive about the inclusion of voters from the age of 16, were surprised at how much support the Yes campaign generated, and some felt that a future referendum would be successful in gaining independence for Scotland."

 

The report also contains some information about the coding process using Quirkos:

 

"The interviews together lasted 6.5 hours and, once transcribed, comprised just under 58,000 words, an average of 4,800 words per interview. 75 themes were used to code the project, with 3,160 coding events logged, although each section of text may be covered by multiple coding events. In total, 87% of the text was coded with at least one topic. The coding took an experienced coder approximately 7 hours (over a three-day period) once any breaks longer than 5 minutes were removed – an average of one code every 8 seconds."
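As a quick sanity check on the rate quoted above (the figures are taken straight from the report):

```python
# 3160 coding events over roughly 7 hours of active coding
events = 3160
seconds = 7 * 60 * 60      # 25,200 seconds of coding time
rate = seconds / events    # seconds per coding event
print(round(rate))         # prints 8
```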

 

Personally, I've been really happy doing this project with Quirkos, and especially with how quick the coding was. Obviously, with any qualitative analysis process there is a lot of reading, thinking and mind-changing that happens between setting the research questions and writing up a report. However, I really do think that Quirkos makes the coding and exploration process quicker, and I do love how much one can play with the data, just looking to see how often keywords come up, or whether there are connections between certain themes.


In this project, the cluster views (the one for media bias is shown above) were really revealing, and sometimes surprising. But the side-by-side queries were also really useful for looking at differences in opinion between Yes and No supporters, and for demonstrating that there was little difference in the quotes from men and women – they seemed to largely care about the same issues, and used similar language.


Feel free to see for yourself though, all the transcripts, as well as the coded project file can be downloaded from our workshop materials pages, so do let me know if Quirkos lets you have a different view on the data!

 

 

6 meta-categories for qualitative coding and analysis

rating for qualitative codes

When doing analysis and coding in a qualitative research project, it is easy to become completely focused on the thematic framework, and on deciding what a section of text is about. However, qualitative analysis software is a useful tool for organising more than just the topics in the text; it can also be used for deeper contextual and meta-level analysis of the coding and data.


Because you can record and categorise pretty much anything you can think of, and assign multiple codes to one section of text, it often helps to have codes about the analysis itself, which help with managing quotes later and assist with deeper conceptual issues. Some coders use a ranking system so they can find the best quotes quickly. Or you can have a category for quotes that challenge your research questions, or seem to contradict other sources or findings. Here are 6 suggestions for meta-level codes you could create in your qualitative project (be it Quirkos, Nvivo, Atlas-ti or anything!):

 

 

Rating
I always have a node I call ‘Key Quotes’ where I keep track of the best verbatim snippets from the text or interview. It’s for the excited feeling you get when someone you interviewed sums up a problem or your research question in exactly the right way, and you know that you are going to end up using that quote in an article. Or even for the title of the article!


However, another way you can manage quotes is to give them a ranking scheme. This was suggested to me by a PhD student at Edinburgh, who gives quotes a ranking from 1-5, with each ‘star-rating’ as a separate code. That way, it’s easy to cross reference, and find all the best quotes on a particular topic. If there aren’t any 5* quotes, you can work down to look at the 4 star, or 3 star quotes. It’s a quick way to find the ‘best’ content, or show who is saying the best stuff. Obviously, you can do this with as little or much detail as you like, ranking from 1-10 or just having ‘Good’ and ‘Bad’ quotes.
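The cross-referencing idea – find the top-rated quotes on a topic, working down a star at a time when there are no five-star matches – could be sketched like this. The data structure and function are hypothetical illustrations, not any package's actual API:

```python
# Each coded quote carries its topics and a 1-5 star rating.
quotes = [
    {"text": "Currency was the deciding issue for me",
     "topics": {"economy"}, "stars": 5},
    {"text": "I worried about pensions",
     "topics": {"economy"}, "stars": 3},
    {"text": "The debates felt very negative",
     "topics": {"campaign"}, "stars": 4},
]

def best_quotes(topic, minimum_stars=5):
    """Return quotes on a topic at the highest star level that
    has any matches, working down from minimum_stars to 1."""
    for stars in range(minimum_stars, 0, -1):
        matches = [q["text"] for q in quotes
                   if topic in q["topics"] and q["stars"] == stars]
        if matches:
            return stars, matches
    return 0, []

best_quotes("economy")  # (5, ["Currency was the deciding issue for me"])
```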


Now, this might sound like a laborious process, effectively adding another layer of coding. However, once you are in the habit, it really takes very little extra time and can make writing up a lot quicker (especially with large projects). By using the keyboard shortcuts in Quirkos, it will only take a second more. Just assign the keyboard numbers 1-5 to the appropriate ranking code, and because Quirkos keeps the highlighted section of text active after coding, you can quickly add to multiple categories. Drag and drop onto your themes, and hit a number on the keyboard to rank it. Done!

 

 

Contradictions
It is sometimes useful to record in one place the contradictions in the project – this might be within the source, where one person contradicts themselves, or if a statement contradicts something said by another respondent. You could even have a separate code for each type of contradiction. Keeping track of these can not only help you see difficult sections of data you might want to review again, but also show when people are being unsure or even deceptive in their answers on a difficult subject. The overlap view in Quirkos could quickly show you what topics people were contradicting themselves about – maybe a particular event, or difficult subject, and the query views can show you if particular people were contradicting themselves more than others.

 

 

Ambiguities
In qualitative interview data where people are talking in an informal way about their stories and lives, people often say things where the meaning isn’t clear – especially to an external party. By collating ambiguous statements, the researcher has the ability to go back at the end of the source and see if each meaning is any clearer, or just flag quotes that might be useful, but might be at risk of being misinterpreted by the coder.

 

 

Not-sures
Slightly different from ambiguities: these are occasions when the meaning is clear enough, but the coder is not 100% sure that it belongs in a particular category. This often happens during a grounded theory process where one category might be too vague and needs to be split into multiple codes, or when a code could be about two different things.


Having a not-sure category can really help the speed of the coding process. Rather than worrying about how to define a section of text, and then having sleepless nights about the accuracy of your coding, tag it as ‘Not sure’ and come back to it at the end. You might have a better idea where they all belong after you have coded some more sources, and you’ll have a record of which topics are unclear. If you are not sure about a large number of quotes assigned to the ‘feelings’ Quirk (again, shown by clustering in the overlap view in Quirkos), you might want to consider breaking them out into an ‘emotions’ and ‘opinions’ category later!

 

 

Challenges
I know how tempting it can be to go through qualitative analysis as if it were a tick-box exercise, trying to find quotes that back up the research hypothesis. We’ve talked about reflexivity before in this blog, but it is easy to go through large amounts of data and pick out the bits that fit what you believe or are looking for. I think that a good defence against this tendency is to specifically look for quotes that challenge you, your assumptions or the research questions. Having a Quirk or node that logs all of these challenges firstly lets you make sure you are catching them (and not glossing over them), and secondly provides a way to do a validity assessment at the end of coding: Do these quotes suggest your hypothesis is wrong? Can you find a reason that these quotes or individuals don’t fit your theory? Usually these are the most revealing parts of qualitative research.

 


Absences
Actually, I don’t know a neat way to capture the essence of something that isn’t in the data, but I think it’s an important consideration in the analysis process. With sensitive topics, it is sometimes clear to the researcher that an important issue is being actively avoided, especially if an answer seems to evade the question. These can at least be coded as absences at the point of the interviewer’s question. However, if people are not discussing something that was expected as part of the research question, or that was an issue for some people but not others, it is important to record and acknowledge this. The absence of relevant themes is usually best recorded in memos for that source, rather than by trying to code non-existent text!

 

 

These are just a few suggestions, if you have any other tips you’d like to share, do send them to daniel@quirkos.com or start a discussion in the forum. As always, good luck with your coding!

 

Free materials for qualitative workshops

qualitative workshop on laptops with quirkos

 

We are running more and more workshops helping people learn qualitative analysis and Quirkos. I always feel that the best way to learn is by doing, and the best way to remember is through play. To this end, we have created two sources of qualitative data that anyone can download and use (with any package) to learn how to use software for qualitative data analysis.

 

These can be found in the workshops folder. There are two different example data sets, which are free for any training use. The first is a basic example project, comprising a set of fictional interviews with people talking about what they generally have for breakfast. This is not exactly a gripping exposé of a critical social issue, but it is short and easy to engage with, and already provides some surprises when it comes to exploring the data. The materials include individual transcribed sources of text, in a variety of formats that can be brought into Quirkos. The idea is that users can learn how to bring sources into Quirkos, create a basic coding framework, and get going on coding data.


For the impatient, there is also a 'here's one we created earlier' file, in which all the sources have been added to the project, age, gender and occupation have been described as source properties, a complete coding framework is in place, and a good amount of coding has been done. This is a good starting point if someone wants to use the various tools to explore coded data and generate outputs. There is also a sample report, demonstrating what a default output looks like when generated by Quirkos, including the 'data' folder, which contains all the pictures for embedding in a report or PowerPoint presentation.

 

This is the example project we most frequently use in workshops. It allows us to quickly cover all the major steps in qualitative analysis with software, with a fun and easy to understand dataset. It also lets us see some connections in the data, for example how people don't describe coffee as a healthy option, and that women for some reason talk about toast much more than men.

 

However, the breakfast example is not real qualitative data: it is short, and fictitious. So for people who come along to our more advanced analysis workshops, we are now happy to make available a much more detailed and lively dataset. We have recently completed a project on the impact of the 2014 Referendum for independence on voter opinions in Scotland. This comprises 12 semi-structured interviews with voters based in Edinburgh, on their views on the referendum process, and how it has changed their outlook on politics and voting in the run-up to the 2015 General Election in the UK.

 

When we conducted these interviews, we explicitly got consent for them to be made publicly available and used for workshops after they had been transcribed and anonymised. This gives us a much deeper source of data to analyse in workshops, but also allows for anyone to download a rich set of data to use in their own time (again with any qualitative software package) to practice their analytical skills in qualitative research. You can download these interviews and further materials at this link.

 

We hope you will find these resources useful. Please acknowledge their origin (i.e. Quirkos), let us know if you use them in your training and learning process, and send us any feedback or suggestions.

Upgrade from paper with Quirkos

qualitative analysis with paper

Having been round many market research firms in the last few months, the most striking thing is the piles of paper – or, in the neater offices, shelves of paper!

When we talk to small market research firms about their analysis process, many are doing most of their research by printing out data and transcripts, and coding them with coloured highlighters. Some are adamant that this is the way that works best for them, but others are a little embarrassed at the way they are still using so much time and paper with physical methods.

 

The challenge is clear – the short turn-around time demanded by clients doesn't leave much room for experimenting with new ways of working, and the few we talked to who had tried qualitative analysis software felt it wasn't something they could pick up quickly.

 

So, most of the small market research agencies with fewer than 5 associates (as many as 75% of firms in the UK) are still relying on work-flows that are difficult to share, don't allow for searching across work, and don't have an undo button! Not to mention the ecological impact of all that printing, and the risk to deadlines from an ill-placed mug of coffee.

 

That's one of the reasons we created Quirkos, and why we are launching our new campaign this week at the Market Research Society annual conference in London. Just go to our new website, www.upgradefrompaper.com and watch our fun, one minute video about drowning in paper, and how Quirkos can help.

Quirkos isn't like other software, it is designed to mimic the physical action of highlighting and coding text on paper with an intuitive interface that you can use to get coding right away. In fact, we bet you can get coding a project before your printer has got the first source out of the tray.

 

You no longer need days of training to use qualitative analysis software, and Quirkos has all the advantages you'd expect, such as quick searches, full undo-redo capability and lots of flexibility to rearrange your data and framework. But it also has other pleasant surprises: there's no save button, because work is automatically saved after each action. And it creates graphical reports you can share with colleagues or clients.

 

Finally, you can export your work at any stage to Word, and print it out (if you so wish!) with all your coding and annotations as familiar coloured highlights – ideal to share, or just to help ease the transition to digital. It's always comforting to know you can go back to old habits at any time, and not lose the work you've already done!

 

It's obviously not just for market research firms: students, academics and charities who have either not tried any qualitative software before, or found the other options too confusing or expensive, can reduce their carbon footprint and save on their department's printing costs!

 

So take the leap, and try it out for a month, completely free, on us. Upgrade from paper to Quirkos, and get a clear picture of your research!

 

www.upgradefrompaper.com


p.s. All the drawings in our video were done by our very own Kristin Schroeder! Not bad, eh?

The dangers of data mining for text

 Alexandre Dulaunoy CC - flickr.com/photos/adulau/12528646393

There is an interesting new article out, which looks at some of the commonly used algorithms in data mining, and finds that they are generally not very accurate, or even reproducible.

 

Specifically, the study by Lancichinetti et al. (2015) looks at automated topic classification using the widely used latent Dirichlet allocation (LDA) algorithm, a machine learning process which uses a probabilistic approach to categorise and filter large groups of text – essentially a standard approach in data mining.

 

But the Lancichinetti et al. (2015) article finds that, even using a well-structured source of data such as Wikipedia, the results are, to put it mildly, disappointing. Around 20% of the time, the results did not come back the same, and when looking at a more complex group of scientific articles, reliability was as low as 55%.

 

As the authors point out, there has been little attempt to test the accuracy and validity of these data mining approaches, and they caution users about relying on inferences drawn from these methods. They then go on to describe a method that produces much better levels of reliability, yet until now, most analysis would have had this unknown level of inaccuracy: even if the test had been re-run with the same data, there is a good chance the results would have been different!

 

This underlines one of the perils of statistical attempts to mine large amounts of text data automatically: it's too easy to do without really knowing what you are doing. There is still no reliable alternative to having a trained researcher and their brain (or even an average person off the street) read through text and tell you what it is about. The forums I engage with are full of people asking how they can do qualitative analysis automatically, and whether there is some software that will do all their transcription for them – but the realistic answer is that nothing like this currently exists.

 

Data mining can be a powerful tool, but it is essentially all based on statistical probabilities, churned out by a computer that doesn't know what it is supposed to be looking at. Data mining is usually a process akin to giving your text to a large number of fairly dumb monkeys on typewriters. Sure, they'll get through the data quickly, but odds are most of it won't be much use! Like monkeys, computers don't have that much intuition, and can't guess what you might be interested in, or what parts are more emotionally important than others.

 

The closest we have come so far is probably a system like IBM's Watson computer, a natural language processing machine which requires a supercomputer with 2,880 CPU cores and 16 terabytes of RAM (16,384 GB), and is essentially doing the same thing – a really, really large number of dumb monkeys, and a process that picks the best-looking stats from a lot of numbers. If loads of really smart researchers programme it for months, it can win a TV show like Jeopardy. But if you wanted to win Family Feud, you'd have to programme it again.

 

Now, a statistical overview can be a good place to start, but researchers need to understand what is going on, look at the results intelligently, and work out which parts of the output don't make sense. And to do this well, you still need to be familiar with some of the source material, and have a good grip on the topics, themes and likely outcomes. Since a human can't read and remember thousands of documents, I still think that, for most cases, in-depth reading of a few dozen good sources probably gives better outcomes than statistically scan-reading thousands.

 

Algorithms will improve, as outlined above, and as computers get more powerful and data gets more plentiful, statistical inferences will improve too. But until then, most users are better off with a computer as a tool to aid their thought process, not as something that provides a single statistical answer to a complicated question.

 

Is qualitative data analysis fracturing?

At the several international conferences on qualitative research I've been to recently, there has been a lot of discussion about the future of qualitative research, and the changes happening in the discipline and in society as a whole. A lot of people have been saying that acceptance of qualitative research is growing in general: not only are there a large number of well-established specialist journals, but mainstream publications are accepting more papers based on qualitative approaches.


At the same time, there are more students in the UK at all levels, but especially starting Masters and PhD studies as I’ve noted before. While some of these students will focus solely on qualitative methods, many more will adopt mixed methods approaches, and want to integrate a smaller amount of qualitative data. Thus there is a strong need, especially at the Masters by research level, for software that’s quicker to learn, and can be well integrated into the rest of a project.


There is also the increasing necessity for academic researchers to demonstrate impact for their research, especially as part of the REF. There are challenges involved with doing this with qualitative research, especially summarising large bodies of data, and making them accessible for the general public or for targeted end users such as policy makers or clinicians. Quirkos has been designed to create graphical outputs for these situations, as well as interactive reports that end-users can explore in their own time.


But another common theme to emerge is the possibility of the qualitative field fracturing as it grows. It seems that there are at least three distinct user groups emerging. Firstly, there are the traditional users of in-depth qualitative research, the general focus of CAQDAS software. They are experts in the field, are experienced with a particular software package, and run projects collecting data with a variety of methods, such as ethnography, interviews, focus groups and document review.


Recently there has been increased interest in text analytics: the application of 'big data' techniques to quantify qualitative sources of data. This is especially popular in social media, looking at millions of Tweets, texts, Facebook posts, or blogs on a particular topic. While commonly used in market research, there are also applications in social and political analysis, for example looking at thousands of newspaper articles for their portrayal of social trends. This 'big data' quantitative approach has never been a focus of Quirkos, although there are many tools out there that work in this way.

Finally, there is increasing interest in qualitative analysis from more mainstream users: people who want to do small qualitative research projects as part of their own organisation or business. Increasingly, people working in public sector organisations, HR or legal have text documents they need to manage and gain a deep understanding of.

Increasingly it seems that a one-size-fits-all solution to training and software for qualitative data analysis is not going to be viable. It may even be the case that different factions of approaches and outcomes will emerge. In some ways this may not be too dissimilar to the different methodologies already used within academic research (i.e. grounded / emergent / framework analysis), but the number of 'researchers' and the variety of paradigms and fields of inquiry looks to be increasing rapidly.


These are definitely interesting times to be working in qualitative research and qualitative data analysis. My only hope is that if such ‘splintering’ does occur, we keep learning from each other, and we keep challenging ourselves by exposure to alternative ways of working.

 

 

Paper vs. computer assisted qualitative analysis

I recently read a great paper by Rettie et al. (2008) which, although based on a small sample size, found that only 9% of UK market research organisations doing qualitative research were using software to help with qualitative analysis.

 

At first this sounds very low, but it holds true with my own limited experiences with market research firms, and also with academic researchers. The first formal training courses I attended on Qualitative Analysis were conducted by the excellent Health Experiences Research Group at Oxford University, a team I would actually work with later in my career. As an aforementioned computer geek, it was surprising for me to hear Professor Sue Ziebland convincingly argue for a method they defined as the One Sheet of Paper technique, immortalised as OSOP. This is essentially a way to develop a grounded theory or analytical approach by reducing the key themes to a diagram that can be physically summarised on a single piece of paper, a method that is still widely cited to this day.

 

However, the day also contained a series of what felt like ‘confessions’ about how much of people’s Qualitative analysis was paper based: printing out whole transcripts of interviews, highlighting sections, physically cutting and gluing text into flipcharts, and dozens and dozens of multi-coloured Post-it notes! Personally, I think this is a fine method of analysis, as it keeps researchers close to the data and, assuming you have a large enough workspace, it lets you keep dozens of interviews and themes to hand. It’s also very good for team work, as the physicality gets everyone involved in reviewing codes and extracts.

 

In the last project I worked on, looking at evidence use for health decision making we did most of the analysis in Excel, which was actually easier for the whole team to work with than any of the dedicated qualitative analysis software packages. However, we still relied heavily on paper: printing out the interviews and Excel spreadsheets, and using flip-chart paper, post-its and marker pens in group analysis sessions. Believe me, I felt a pang of guilt for all the paper we used in each of these sessions, rainforests be damned! But it kept us inspired, engaged, close to the data and let us work together.

 

So I can quite understand why so many academics and market research organisations choose not to use software packages: at the moment they don’t have the visual connection to the data that paper annotations allow, it’s often difficult to see the different stages of the coding process, and it’s hard to produce reports and outputs that communicate properly.

 

The problem with this approach is the literal paper-trail – how you turn all these iterations of coding schemes and analysis sessions into something you can write up to share with others in order to justify how you made the decisions that led to your conclusions. So I had to file all these flip-charts and annotated sheets, often taking photos of them so they could be shared with colleagues at other universities. It was a slow and time consuming process, but it kept us close to the data.

 

When designing Quirkos, I have tried in some ways to replicate the paper-based analysis process. There’s a touch interface, reports that show all the highlighting in a Word document, and views that keep you close to the data. But I also want to combine this with all the advantages you get from a software package, not least the ability to search, shuffle dozens of documents, have more colours than a whole rainbow of Post-it notes, and the invaluable Undo button!

 

Software can also help keep track of many more topics and sources than most people (especially myself) can remember, and if there are a lot of different themes you want to explore from the data, software is really good at keeping them all in one place and making them easy to find. Working as part of a team, especially if some researchers work remotely or in a different organisation can be much easier with software. E-mailing a file is much easier than sending a huge folder of annotated paper, and combining and comparing analysis can be done at any stage of the project.

 

Qualitative analysis software also lets you take different slices through the data, so you can compare responses grouped by any characteristic of the sources you have. So it's easy to look at all the comments from people in one location, or within a certain age range. Certainly this is possible to do with qualitative data on paper as well, but the software removes the need for a lot of paper shuffling, especially when you have a large number of respondents.
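To make the idea of 'slicing' concrete, here is a hypothetical sketch in plain Python (the field names, codes and excerpts are all invented for the example, and this is not Quirkos' actual data model): once each coded excerpt carries its source's properties, the paper shuffling becomes a simple filter.

```python
# Each coded excerpt records its text, its code, and the properties
# of the source it came from (all values here are illustrative).
excerpts = [
    {"text": "I usually have toast first thing.", "code": "breakfast",
     "location": "Edinburgh", "age": 34},
    {"text": "Coffee first, always.", "code": "drinks",
     "location": "Glasgow", "age": 52},
    {"text": "Porridge on weekdays.", "code": "breakfast",
     "location": "Edinburgh", "age": 47},
]

def slice_by(excerpts, **criteria):
    """Return excerpts whose source properties match all criteria."""
    return [e for e in excerpts
            if all(e.get(k) == v for k, v in criteria.items())]

# All comments from sources in one location:
edinburgh = slice_by(excerpts, location="Edinburgh")

# Range filters (e.g. an age band) work the same way:
middle_aged = [e for e in excerpts if 40 <= e["age"] <= 60]
```

The same filtering logic extends to combining criteria (location *and* code, say), which is exactly the kind of cross-comparison that is tedious to do with piles of highlighted printouts.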

 

But most importantly, I think software can allow more experimentation - you can try different themes, easily combine or break them apart, or even start from scratch again, knowing that the old analysis approach you tried is just a few clicks away. I think that the magic undo button also gives researchers more confidence in trying something out, and makes it easier for people to change their mind.

 

Many people I’ve spoken to have asked what the ‘competition’ for Quirkos is like, meaning, what do the other software packages do. But for me the real competitor is the tangible approach and the challenge is to try and have something that is the best of both worlds: a tool that not only apes the paper realm in a virtual space, but acknowledges the need to print out and connect with physical workflows. I often want to review a coded project on paper, printing off and reading in the cafe, and Quirkos makes sure that all your coding can be visually displayed and shared in this way.

 

Everyone has a workflow for qualitative analysis that works for them, their team, and the needs of their project. I think the key is flexibility, and to think about a set of tools that can include paper and software solutions, rather than one approach that is a jack of all trades, and master of none.

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 are now up online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks were from experienced software trainers, about how people actually were using packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways, the low numbers of people using images and audio are not surprising, but it is a shame. Text is a lot quicker to skip through to find passages compared to audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts, or through automatic transcription to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage: they record a research event, such as an interview or focus group, and have the text typed up word-for-word to analyse. But with this approach you risk losing all of the nuance we are attuned to hear in the spoken word – emphasis, emotion, sarcasm – which can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code with text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen to any interesting or ambiguous sections, and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making the text coding process in Quirkos as good as it could be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, to make sure audio connects closely with the text, and can be coded and retrieved as easily as possible.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised this method as being how they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory essentially means finding themes in the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of people in this sample really not know what analytical approach they were using? Of course it could be simply that they know it by another name, Emergent Coding for example, or as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster, and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs that were more than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...

 

Touching Text

Presenting Quirkos at the CAQDAS 2014 conference this month was the first major public demonstration of Quirkos, and what we are trying to do. It’s fair to say it made quite a splash! But getting to this stage has been part of a long process from an idea that came about many years ago.

Like many geeks on the internet, I’d been amazed by the work done by Jeff Han and colleagues at New York University on cheap, multi-touch interfaces. This was 2006, and the video went viral in a time before iPhones and tablets, when it looked like someone had finally worked out how to make the futuristic computer interface from Minority Report, which had come out in 2002. Others, such as Johnny Lee at Carnegie Mellon University, had worked out how the incredible technology in the controllers for the Wii could make touchscreen interactive whiteboards out of a £25 toy.

I’ve always been of the opinion that technology is only interesting when it is cheap: it can’t have an impact when it’s out of reach for a majority of people. Now, none of this stuff was particularly ground-breaking in itself, but these people were like heroes to me, for making something amazing out of bits and pieces that everyone could afford.

Meanwhile, I was trying to do qualitative analysis for my PhD [danfreak.net/thesis.html], and having an iBook that wouldn’t run any of the qualitative analysis packages, I cobbled together my own system: my first attempt at making a better qualitative research system. It was based on a series of unique three-letter codes I’d insert into a sentence, and a Linux-based file search system called ‘Beagle’, which allowed me to see any piece of text I’d assigned a code across all the files on my computer. Thus in one search I could see all the relevant bits of text from interviews, focus groups, diaries and notes. It was clunky, but it worked, and was the beginning of something with potential.

 

By 2009, I had my first proper research job in Oxford, and was spending my salary trying to make a touchscreen computer out of a £120 netbook and a touchscreen overlay I’d imported from China. In fact, I got through two of these laptops, after short-circuiting the motherboard of one while trying to cram the innards into a thin metal case. What excited me was the potential for a £150 touchscreen computer, with no keyboard, that you used like a ‘tablet’ from Star Trek. Then, while I was doing this, Apple came out with the long-anticipated iPad, which had the distinct advantage of being about ¼ of the thickness and weight!

But while all this was going on in my spare time, at work I was spending all day coding semi-structured interviews for a research project. I was being driven mad with the slow coding process, Nvivo was crashing frequently and corrupting all the work when it did, and using interfaces in the 21st century that were beginning to feel a whole generation behind.

And that’s where the idea came from: me speculating on what qualitative analysis would be like with a touch screen interface. What if you could do it on a giant tablet or digital whiteboard with a team of people? I drew sketches of bubbles (I’ve always liked playing with bubbles) that grew when you added text to them, integrating the interface and the visualisation, and showing relationships between the themes.

 

After this, the idea didn’t really progress until I was working on my next job, at Sheffield Hallam University. Again, qualitative analysis was giving me a headache, this time because we wanted to do analysis with participants and co-researchers, and most of the packages were too difficult to learn and too expensive to afford to let the whole team get involved. A new set of colleagues shared my pain with using current CAQDAS software, and as no-one else seemed to be doing anything about it, I thought it was worth giving a try.

I took a course in programming user interfaces using cross-platform frameworks, and was able to knock up some barely functioning prototypes, at the time called ‘Qualia’. But again, things didn’t really progress until I left my job to focus on it full time, fleshing out the details and hiring the wonderful Adrian Lubik: a programmer who actually knows what he’s doing!

With the project gaining momentum, a better name was needed. Looking around classical Greek and Latin names, I came across ‘kirkos’, the Greek word which is the root of the word ‘circle’. Change the beginning to ‘Qu’ for qualitative, and voilà, Quirkos was born: Qualitative Circles. Something that very neatly summed up what I’d been working towards for nearly a decade.

In June we’ll be releasing the beta version to testers for the first time, and the final version will go on sale in September at a lower price point that means a lot more people can try qualitative research. It’s really exciting to be at this stage, with so much enthusiasm and anticipation building in the market. But it’s also just a beginning; we have a 5 year plan to keep adding unique features and develop Quirkos into something that is innovative at every stage of the research process. It’s been a long journey, but it’s great that so many people are now coming along!

Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or ‘a priori’ approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding the researcher takes a more bottom-up approach, starting with the data and a blank sheet, noting themes as they read through the text.

 

Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an emergent approach to be completely naïve to the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.

 

I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is also another level in qualitative coding which is top-down or bottom-up: the level of coding. A low 'level' of coding might be to create a set of simple themes, such as happy or sad, or apple, banana and orange. These are sometimes called manifest level codes, and are purely descriptive. A higher level of coding might be something more like 'issues from childhood', fruit, or even 'things that can be juggled'. Here more meaning has been imposed, sometimes referred to as latent level analysis.

 

 

Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different if using a top-down or bottom-up approach to building levels of coding. In one model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute that statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning from successive readings.
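The bottom-up grouping step can be sketched in a few lines. This is a toy example with invented codes and counts, purely to show the mechanics of rolling many simple manifest codes up into fewer latent themes:

```python
# Map each low-level (manifest) code to the higher-level (latent)
# theme it was grouped under. These groupings are illustrative.
theme_of = {
    "apple": "fruit", "banana": "fruit", "orange": "fruit",
    "happy": "emotion", "sad": "emotion",
}

# How often each manifest code was applied across the sources.
manifest_counts = {"apple": 4, "banana": 2, "sad": 3, "happy": 1}

# Roll the low-level counts up into theme-level counts.
latent_counts = {}
for code, n in manifest_counts.items():
    theme = theme_of[code]
    latent_counts[theme] = latent_counts.get(theme, 0) + n

print(latent_counts)  # {'fruit': 6, 'emotion': 4}
```

In practice the groupings themselves shift between iterations, which is exactly why keeping the mapping explicit (rather than merging codes destructively) makes it easy to regroup and try again.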

 

So which approach is best? Obviously, it depends: not just on how well the topic area is understood, but also on the prior engagement of the particular researcher. Yet complementary methods can be useful here. The PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (in both senses described above) to test their assumptions. Meanwhile, a researcher who is new to the project or field could be in a good position to start from the bottom up, and see if they can find answers to the research questions starting from basic observations as they emerge from the text. If the two analyses then independently reach the same conclusions, it is a good indication that the inferences are well supported by the text!

 

qualitative data analysis software - Quirkos

 

 

Quirkos is coming...

key

 

Quirkos is intended to be a big step forward for qualitative research. The central idea is to make text analysis so easy, that anyone can do it.

That includes people who don't know what qualitative analysis is, or that it could help them to better understand their world. This could be a council or hospital trust wanting to better understand the needs of people that use their services, or a team developing a new product, wanting feedback from users and consumers.

And for experienced researchers too, the goal was to create software that helps people engage with their data, rather than being a barrier to it. Over the last decade I've used a variety of approaches to analysing qualitative research, and many colleagues and I felt that there had to be a better way.

Quirkos aims to make software to easily manage large projects, search them quickly, and keep them secure. To visualise data on the fly, so findings come alive and are sharable with a team of people. And finally to make powerful tools to sort and understand the connections in the data.

After years of planning, these pieces are finally coming together, and the prototype is already something that I prefer using to any of the other qualitative software packages out there. In the next few weeks, the first version of Quirkos will be sent to intrepid researchers around the globe to test in their work. A few months later, we'll be ready to share a polished version with the world, and we're really excited that it will work for everyone: with any level of experience, and on pretty much any computer too.

There are a lot of big firsts in Quirkos, and it's going to be exciting sharing them here over the next few weeks!

An overview of qualitative methods

There are a lot of different ways to collect qualitative data, and this article just provides a brief summary of some of the main methods used in qualitative research. Each one is an art in its own right, with various different techniques, definitions, approaches and proponents.

More on each one will follow in later articles, and it’s worth remembering that these need to be paired with the right questions, sampling, and analysis to get good results.

Interviews

Possibly the richest, and most powerful tool: talking to someone directly. The classic definition is “conversations with a purpose”, the idea being that there is something you are interested in, you ask questions about it, and someone gives useful responses.

There are many different styles, for example in how structured your questions are (this paper has a wonderful and succinct overview in the introduction). These can range from a rigid script, where you ask the same questions every time, to a completely open discussion, where the researcher and respondent have freedom to shape the conversation. A common middle ground is the semi-structured interview, which often has a topic guide listing particular issues to discuss, but allows questions for clarification, or to follow up on an interesting tangent.

Participant Observation

Often the remit of ethnography or sociology, participant observation usually involves watching, living or even participating in the daily life of research subjects. However, it can also involve just watching people in a certain setting, such as a work meeting, or using a supermarket.

This is probably the most time intensive and potentially problematic method, as it can involve weeks or even years of placement for a researcher, often on their own. However, it does produce some of the richest data, as well as a level of depth that can really help explain complex issues. This chapter is a fine starting point.

Focus groups

A common method used in market research, where a researcher leads a group discussion on a particular topic. However, it is also a powerful tool for social researchers, especially when looking at group dynamics, or the reactions of particular groups of people. It’s obviously important to consider who is chosen for the group, and how the interactions of people in the group affect the outcome (although this might be what you are looking for).

It’s usually a quicker and cheaper way of gauging many reactions and opinions, but requires some skill in the facilitator to make sure everyone’s voice is being heard, and that people stay on track. Also a headache for any transcribers who have to identify different voices from muffled audio recordings!

Participant Diaries

Getting people to write a diary for a research project is a very useful tool, and is commonly used in looking at taboo behaviours such as drug use or sexuality: not just because researchers don’t have to ask difficult questions face-to-face, but also because data can be collected over a long period of time. If you are trying to find out how often a particular behaviour occurs, a daily or weekly record is likely to be more accurate than asking someone in a single interview (as in the studies above).

There are other benefits to the diary method: not least that the participant is in control. They can share as much or as little as they like, and only on topics they wish to. It can also be therapeutic for some people, and is more flexible with time. Diaries can be paper based, electronic, or even on a voice recorder if there are literacy concerns. However, researchers will probably need to talk to people at the beginning and end of the process, and give regular reminders.

Surveys

Probably one of the most common qualitative methods is the open-ended question on surveys, usually delivered by post, on-line, or ‘guided’ by someone with a clipboard. Common challenges here are:

  • Encouraging people to write more than one word, but less than an essay
  • Setting questions carefully so they are clear, but not leading
  • Getting a good response rate
  • Knowing who has and hasn’t responded

The final challenge is to make sure the responses are useful, and to integrate them with the rest of the project, especially any quantitative data.

Field notes

Sometimes the most overlooked, but most valuable source of information can be the notes and field diaries of researchers themselves. These can include not just where and when people did interviews or observations, but crucial context, like the people who refused to take part, and whether an interviewee was nervous. It need not just be for ethnographers doing long fieldwork: it can be very helpful in organising thoughts and work in smaller projects with multiple researchers.

As part of a reflexive method, it might contain comments and thoughts from the researcher, so there can be a risk of autobiographical overindulgence. It is also not easy to integrate ‘data’ from a research diary with other sources of information when writing up a project for a particular output.

 

This is just a whistle-stop introduction, but more on each of these to follow…

Blog Archive

 

Quirkos qualitative blog archive

 

This is an archive (up to September 2017) of articles on qualitative analysis, data and software from our blog:

 

Word clouds and word frequency analysis in qualitative data
 
What’s this blog post about? Well, it’s visualised in the graphic above! In the latest update for Quirkos, we have added a new and much requested feature, word clouds! I'm sure you've used these pretty tools before, they show a random display of all the words in a source of text
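The word-frequency counting behind word clouds, described in the full post above, can be sketched in a few lines of Python. The stop list here is a tiny illustrative one, not the 50-word British National Corpus list the post mentions:

```python
import re
from collections import Counter

# Tiny illustrative stop list; a real tool would use a much larger one.
STOP_WORDS = {"the", "of", "and", "a", "to", "in", "it", "is", "on"}

def word_frequencies(text: str) -> Counter:
    """Count words, ignoring case and anything on the stop list."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

freqs = word_frequencies("The cat sat on the mat, and the cat slept.")
print(freqs.most_common(3))  # 'cat' counted twice; stop words ignored
```

A word cloud then simply scales each word's display size in proportion to its count.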
 
Announcing Quirkos v1.5
 
We are happy to announce the immediate availability of Quirkos version 1.5! As always, this update is a free upgrade for everyone who has ever bought a licence of Quirkos, so download now and enjoy the new features and improvements
 
An introduction to Interpretative Phenomenological Analysis
 
Interpretative Phenomenological Analysis (IPA) is an increasingly popular approach to qualitative inquiry and essentially an attempt to understand how participants experience and make meaning of their world
 
Against entomologies of coding
 
I was recently privileged to chair a session at ICQI 2017 entitled “The Archaeology of Coding”. It had a fantastic panel of speakers, including
 
Quirkos vs Nvivo: Differences and Similarities
 
I’m often asked ‘How does Quirkos compare to Nvivo?’. Nvivo is by far the largest player in the qualitative software field, and is the product most researchers are familiar with. So when looking at the alternatives like Quirkos
 
Teaching qualitative methods via social media
 
This blog now has nearly 120 posts about all different kinds of qualitative methods, and has grown to hosting thousands of visitors a month. There are lots of other great qualitative blogs around, including
 
Writing Qualitative research papers
 
We’ve actually talked about communicating qualitative research and data to the public before, but never covered writing journal articles based on qualitative research. This can often seem daunting
 
Does software lead to the homogenisation of qualitative research?
 
In the last couple of weeks there has been a really interesting discussion on the Qualrs-L UGA e-mail discussion group about the use of software in qualitative analysis. Part of this was the question of whether qualitative software leads to the ‘homogenisation’ of qualitative research and analysis.
 
Quirkos 1.4.1 is now available for Linux
 
A little later than our Windows and Mac version, we are happy to announce that we have just released Quirkos 1.4.1 for Linux. There are some major changes to the way we release and package our Linux version, so we want to provide some technical details of these, and installation instructions.
 
Quirkos version 1.4.1 is here
 
Since Quirkos version 1.4 came out last year, we have been gathering feedback from dozens of users who have given us suggestions, or reported problems and bugs. This month we are releasing a small update
 
Making the leap from qualitative coding to analysis
 
So you spend weeks or months coding all your qualitative data. Maybe you even did it multiple times, using different frameworks and research paradigms. You've followed our introduction guides and everything is neatly
 
Comparing qualitative software with spreadsheet and word processor software
 
An article was recently posted on the excellent Digital Tools for Qualitative Research blog on how you can use standard spreadsheet software like Excel to do qualitative analysis. There are many other articles describing this kind of approach, for example Susan Eliot or Meyer and Avery (2008). However, it’s also possible to use word processing software
 
Making the most of bad qualitative data
 
A cardinal rule of most research projects is that things don’t always go to plan. Qualitative data collection is no different, and the variability in approaches and respondents means that there is always the potential for things to go awry.
 
Practice Projects and learning Qualitative Data Analysis Software
 
Coding and analysing qualitative data is not only time consuming, it’s a difficult interpretive exercise which, like learning a musical instrument, gets much better with practice. However, lots of students starting their first major qualitative or mixed method research project will benefit from completing a smaller project first
 
Looking back and looking forward at qualitative analysis in 2017
 
In the month named for Janus, it’s a good time to look back at the last year for Quirkos and qualitative analysis software and look forward to new developments for 2017.
 
How Quirkos can change the way you look at your qualitative data
 
We always get a lot of inquiries in December from departments and projects who are thinking of spending some left-over money at the end of the financial year on a few Quirkos licences
 
Snapshot data and longitudinal qualitative studies
 
In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis
 
Archiving qualitative data: will secondary analysis become the norm?
 
Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University
 
Stepping back from qualitative software and reading coded qualitative data
 
There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis
 
Problems with quantitative polling, and answers from qualitative data
 
The results of the US elections this week show a surprising trend: modern quantitative polling keeps failing to predict the outcome of major elections. In the UK this is nothing new,
 
Tips for running effective focus groups
 
In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly,
 
Considering and planning for qualitative focus groups
 
This is the first in a two-part series on focus groups. This week, we are looking at some of the why you might consider using them in a research project
 
Circles and feedback loops in qualitative research
 
The best qualitative research forms an iterative loop, examining, and then re-examining. There are multiple reads of data, multiple layers of coding, and hopefully, constantly improving theory and insight into the underlying lived world.
 
Triangulation in qualitative research
 
Most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data
 
100 blog articles on qualitative research!
 
Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews...
 
Thinking About Me: Reflexivity in science and qualitative research
 
Reflexivity is a process (and it should be a continuing process) of reflecting on how the researcher could be influencing a research project. In a traditional positivist research paradigm,
 
The importance of keeping open-ended qualitative responses in surveys
 
I once had a very interesting conversation at a MRS event with a market researcher from a major media company. He told me that they were increasingly ‘costing-out’ the qualitative open-ended questions from customer surveys
 
Analytical memos and notes in qualitative data analysis and coding
 
There is a lot more to qualitative coding than just deciding which sections of text belong in which theme. It is a continuing, iterative and often subjective process, which can take weeks or even months. During this time,
 
Starting a qualitative research thesis, and choosing a CAQDAS package
 
For those about to embark on a qualitative Masters or PhD thesis, we salute you! More and more post-graduate students are using qualitative methods in their research projects, or
 
Reflections on qualitative software from KWALON 2016
 
Last week saw a wonderful conference held by the the Dutch network for qualitative research KWALON, based at the Erasmus University, Rotterdam. The theme was no less than the future of Qualitative
 
Include qualitative analysis software in your qualitative courses this year
 
A new term is just beginning, so many lecturers, professors and TAs are looking at their teaching schedule for the next year. Some will be creating new courses, or revising existing modules,
 
Qualitative coding with the head and the heart
 
In the analysis of qualitative data, it can be easy to fall in the habit of creating either very descriptive, or very general theoretical codes. It's often a good idea to take a step
 
10 tips for sharing and communicating qualitative research
 
Writing up and publishing research based on qualitative or mixed methods data is one thing, but most researchers will want to go beyond this, and engage with the wider public and decision makers.
 
Making qualitative analysis software accessible
 
Studies and surveys seem to show that the amount of qualitative research is growing, and that more and more people are using software to help with their qualitative analysis (Woods et al. 2015).
 
Reaching saturation point in qualitative research
 
A common question from newcomers to qualitative research is, what's the right sample size? How many people do I need to have in my project to get a good answer for my research
 
Tips for managing mixed method and participant data in Quirkos and CAQDAS software
 
Even if you are working with pure qualitative data, like interview transcripts, focus groups, diaries, research diaries or ethnography, you will probably also have some categorical data about
 
What actually is Grounded Theory? A brief introduction
 
“It's where you make up as you go along!” For a lot of students, Grounded Theory is used to describe a qualitative analytical method, where you create a coding
 
Merging and splitting themes in qualitative analysis
 
To merge or to split qualitative codes, that is the question... One of the most asked questions when designing a qualitative coding structure is “How many codes should I
 
Using qualitative analysis software to teach critical thought
 
It's a key part of the curriculum for British secondary school and American high school education to teach critical thought and analysis. It's a vital life skill: the ability to
 
In vivo coding and revealing life from the text
 
Following on from the last blog post on creating weird and wonderful categories to code your qualitative data, I want to talk about an often overlooked way of creating coding topics – using
 
Turning qualitative coding on its head
 
For the first time in ages I attended a workshop on qualitative methods, run by the wonderful Johnny Saldana. Developing software has become a full time (and then some) occupation for me,
 
7 things we learned from ICQI 2016
 
I was lucky enough to attend the ICQI 2016 conference last week in Champaign at the University of Illinois. We managed to speak to a lot of people about using Quirkos, but there were hundreds
 
Workshop exercises for participatory qualitative analysis
 
I am really interested in engaging research participants in the research process. While there is an increasing expectation to get ‘lay’ researchers to set research questions, sit on
 
Quirkos version 1.4 is here!
 
It's been a long time coming, but the latest version of Quirkos is now available, and as always it's a free update for everyone, released simultaneously on Mac, Windows and Linux with
 
Top 10 qualitative research blog posts
 
We've now got more than 70 posts on the official Quirkos blog, on lots of different aspects of qualitative research and using Quirkos in different fields. But it's now getting a bit
 
Participant diaries for qualitative research
 
I've written a little about this before, but I really love participant diaries! In qualitative research, you are often trying to understand the lives, experiences and motivations of
 
Sharing qualitative research data from Quirkos
 
Once you've coded, explored and analysed your qualitative data, it's time to share it with the world. For students, the first step will be supervisors, for researchers it might be peers
 
Tools for critical appraisal of qualitative research
 
I've mentioned before how the general public are very quantitatively literate: we are used to dealing with news containing graphs, percentages, growth rates, and big numbers, and they are common
 
Finding, using and some cautions on secondary qualitative data
 
Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you end up having to collect your own
 
Developing and populating a qualitative coding framework in Quirkos
 
In previous blog articles I've looked at some of the methodological considerations in developing a coding framework. This article looks at top-down or bottom-up approaches, whether you
 
Transcribing your own qualitative data
 
In a previous blog article I talked about some of the practicalities and costs involved in using a professional transcribing service to turn your beautifully recorded qualitative interviews and
 
Sampling considerations in qualitative research
 
Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research,
 
Qualitative evidence for SANDS Lothians
 
Charities and third sector organisations are often sitting on lots of very useful qualitative evidence, and I have already written a short blog post article on some common sources of data that can
 
Recruitment for qualitative research
 
You'll find a lot of information and debate about sampling issues in qualitative research: discussions over ‘random’ or ‘purposeful’ sampling, the merits and
 
Designing a semi-structured interview guide for qualitative interviews
 
Interviews are a frequently used research method in qualitative studies. You will see dozens of papers that state something like “We conducted n in-depth semi-structured interviews with
 
An early spring update on Quirkos for 2016
 
About this time last year, I posted an update on Quirkos development for the next year. Even though February continues to be cold and largely snow-drop free in Scotland, why not make it a
 
Recording good audio for qualitative interviews and focus groups
 
Last week's blog post looked at the transcription process, and what's involved in getting qualitative interview or focus-group data transcribed. This week, we are going to step
 
Transcription for qualitative interviews and focus-groups
 
Audio and video give you a level of depth into your data that can't be conveyed by words alone, letting you hear hesitations, sarcasm, and nuances in delivery that can change your
 
Building queries to explore qualitative data
 
So, you've spent days, weeks, or even months coding your qualitative data. Now what? Hopefully, just the process of being forced to read through the data, and thinking about the
 
Delivering qualitative market insights with Quirkos
 
To build a well-designed, well-thought-out, and ultimately useful product, today's technology companies must gain a deep understanding of the working mentality of people who will use
 
Using properties to describe your qualitative data sources
 
In Quirkos, the qualitative data you bring into the project is grouped as 'sources'. Each source might be something like an interview transcript, a news article, your own notes and memos, or
 
Starting out in Qualitative Analysis
 
When people are doing their first qualitative analysis project using software, it's difficult to know where to begin. I get a lot of e-mails from people who want some advice in planning
 
Qualitative evidence for evaluations and impact assessments
 
For the last few months we have been working with SANDS Lothians, a local charity offering help and support for families who have lost a baby in miscarriage, stillbirth or soon after birth. They
 
What's in your ideal qualitative analysis software?
 
We will soon start work on the next update for Quirkos. We have a number of features people have already requested which we plan to add to the next version, including file merge, memos, and a
 
Teaching qualitative analysis software with Quirkos
 
When people first see Quirkos, we often hear them say “My students would love this!” The easy learning curve, the visual feedback and the ability to work on Windows or Mac appeal
 
Quirkos is in Toronto!
 
This week's Quirkos blog comes live from the IIQM Qualitative Health Research 2015 conference, in lovely Toronto. It's been fun talking to people who are coming to the city for the first
 
Tips and advice from one year of Quirkos
 
This week marks the one-year anniversary of Quirkos being released to the market! On 6th October 2014, a group of qualitative researchers, academics and business mentors met in a bar in
 
Play and Experimentation in Qualitative Analysis
 
In the last blog post article, I talked about the benefits of visualising qualitative data, not just in the communication and dissemination stage, but also during data analysis. For newcomers to the
 
Freeing qualitative analysis from spreadsheet interfaces
 
The old mantra is that a picture tells a thousand words. You've probably seen Hans Rosling's talks on visualising quantitative data, or maybe even read some of Edward Tufte's books
 
10 reasons to try qualitative analysis with Quirkos
 
Quirkos is the newest qualitative research software product on the market, but what makes it different, and worth giving the one-month free trial a go? Here's a guide to the top 10 benefits to
 
Fracturing and choice in qualitative analysis software
 
Fundamental to the belief behind starting Quirkos was a feeling that qualitative research has great value to society, but should be made accessible to more people. One of the problems that we
 
Levels: 3-dimensional node and topic grouping in Quirkos
 
One of the biggest features enabled in the latest release of Quirkos are 'levels', a new way to group and sort your Quirks thematically. While this was always an option in previous
 
Quirkos for Linux!
 
We are excited to announce official Quirkos support for Linux! This is something we have been working on for some time, and have been really encouraged by user demand to support this Free and
 
Quirkos 1.3 is released!
 
We are proud to announce a significant update for Quirkos, that adds significant new features, improves performance, and provides a fresh new look. Major changes include: PDF import Greater
 
Bing Pulse and data collection for market research
 
Judging by the buzz and article sharing going on last week, there was a lot of interest and worry about Microsoft launching their own market research platform. Branded as part of
 
What can CAQDAS do for you? The Five-Level QDA
 
I briefly mentioned in my last blog post an interesting new article by Silver and Woolf (2015) on teaching QDA (Qualitative Data Analysis) and CAQDAS (Computer Assisted Qualitative Data
 
The CAQDAS jigsaw: integrating with workflows
 
I'm increasingly seeing qualitative research software as being the middle piece of a jigsaw puzzle that has three stages: collection, coding/exploring, and communication. These steps
 
Using Quirkos for fun and (extremely nerdy) projects
 
This week, something completely different! A guest blog from our own Kristin Schroeder! Most of our blog is a serious and (hopefully) useful exploration of current topics in qualitative
 
Participatory Qualitative Analysis
 
Engaging participants in the research process can be a valuable and insightful endeavour, leading to researchers addressing the right issues, and asking the right questions. Many funding
 
Engaging qualitative research with a quantitative audience.
 
The last two blog post articles were based on a talk I was invited to give at “Mind the Gap”, a conference organised by MDH RSA at the University of Sheffield. You can find the
 
Our hyper-connected qualitative world
 
We live in a world of deep qualitative data. It's often proposed that we are very quantitatively literate. We are exposed to numbers and statistics frequently in news reports, at
 
Structuring unstructured data
 
The terms “unstructured data” and “qualitative data” are often used interchangeably, but unstructured data is becoming more commonly associated with data mining and
 
Quirkos workshops in Sheffield
 
On the 23rd and 24th of June, we are running a series of workshops in Sheffield: both at the University of Sheffield, and Sheffield Hallam University. The events are open to students,
 
How to set up a free online mixed methods survey
 
It's quick and easy to set up an on-line survey to collect feedback or research data in a digital format that means you can quickly get straight to analysing the data. Unfortunately, most
 
Bringing survey data and mixed-method research into Quirkos
 
Later today we are releasing a small update for Quirkos, which adds an important feature users have been requesting: the ability to quickly bring in quantitative and qualitative data from any
 
Qualitative evaluations: methods, data and analysis
 
Evaluating programmes and projects are an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure that
 
Qualitative research on the Scottish Referendum using Quirkos
 
We've now put up the summary report for our qualitative research project on the Scottish Referendum, which we analysed using Quirkos. You can download the PDF of the 10 page report from
 
Why the shift from Labour to the SNP in the 2015 Election in Scotland
 
If the polls are to be believed, Labour are going to lose a lot of Scottish seats in Westminster to the SNP next month. This wave of support seems to come largely out of the referendum last year on
 
6 meta-categories for qualitative coding and analysis
 
When doing analysis and coding in a qualitative research project, it is easy to become completely focused on the thematic framework, and deciding what a section of text is about. However,
 
Free materials for qualitative workshops
 
We are running more and more workshops helping people learn qualitative analysis and Quirkos. I always feel that the best way to learn is by doing, and the best way to remember is through
 
Qualitative data in the UK Public Sector
 
The last research project I worked on with the NIHR was a close collaboration between several universities, local authorities and NHS trusts. We were looking at evidence use by managers in
 
Upgrade from paper with Quirkos
 
Having been round many market research firms in the last few months, the most striking things is the piles of paper, or at least in the neater offices - shelves of paper! When we talk to small
 
Quirkos v1.1 is here!
 
We are excited to announce that the first update for Quirkos can now be downloaded from here! Version 1.1 adds two main new features: batch import, and multi-language reports. If you
 
Spring software update for Quirkos
 
Even in Edinburgh it's finally beginning to get warmer, and we are planning the first update for Quirkos. This will be a minor release, but will add several features that users have been
 
How to organise notes and memos in Quirkos
 
Many people have asked how they can integrate notes or memos into their project in Quirkos. At the moment, there isn't a dedicated memo feature in the current version of Quirkos (v1.0),
 
The dangers of data mining for text
 
There is an interesting new article out, which looks at some of the commonly used algorithms in data mining, and finds that they are generally not very accurate, or even
 
Help us welcome Kristin to Quirkos!
 
So far, Quirkos users have mostly been based in the academic and university based research areas: perhaps not surprising considering where the project grew from. However, from very early on we got a
 
New Leith offices for Quirkos
 
Just in time for the new year, Quirkos is growing! We now need a bigger office to accommodate new hires, so we've moved to the 'Shore' at Leith, the seafront of Edinburgh.
 
Don't share reports with clients, share your data!
 
When it comes to presenting findings and insight with colleagues and clients, the procedure is usually the same. Create a written summary report, deliver the Powerpoint presentation, field any
 
Quirkos launch workshop
 
This week we had our official launch event for Quirkos, a workshop at the Institute of Education in London, but hosted by the University of Surrey CAQDAS network. It was a great event, with tea and
 
Is qualitative data analysis fracturing?
 
Having been to several international conferences on qualitative research recently, there has been a lot of discussion about the future of qualitative research, and the changes happening in the
 
First Quirkos qualitative on-line workshop - 25th Nov 2014
 
Places are filling up now for our London launch and workshop on the 9th of December, but you can still come along for a free lunch by booking at this link. However, we will soon be running
 
QHR2014 and Victoria, BC
 
It's been a busy month, starting with our public launch, and including our first international conference, Qualitative Health Research 2014, hosted by the International Institute for Qualitative
 
Quirkos is launched!
 
It's finally here! From today, anyone can download the full 1.0 release version of Quirkos for Windows or Mac OS X! Versions for Linux and Android will be appearing later in the month, but since
 
Announcing Pricing for Quirkos
 
At the moment, (touch wood!) everything is in place for a launch next week, which is a really exciting place to be after many years of effort. From that day, anyone can download Quirkos, try it free
 
Quirkos is just weeks away!
 
It's been a long time since I've had time to write a blog article, as there are so many things to put in place before Quirkos launches in the next few weeks. But one-by-one everything is
 
Knowing your customers
 
As consumers, it feels like we are bombarded more than ever with opportunities for providing feedback on products and services. While shopping on-line, or even when browsing BBC News we are asked to
 
Using Quirkos for Systematic Reviews and Evidence Synthesis
 
Most of the examples the blog has covered so far have been about using Quirkos for research, especially with interview and participant text sources. However, Quirkos can take any text source you can
 
Getting a foot in the door with qualitative research
 
A quick look at the British Library thesis catalogue suggests that around 800 theses are completed every year in the UK using qualitative methods*. This suggests that 7% of the roughly 10,000 annual
 
Paper vs. computer assisted qualitative analysis
 
I recently read a great paper by Rettie et al. (2008) which, although based on a small sample size, found that only 9% of UK market research organisations doing qualitative research were using
 
Analysing text using qualitative software
 
I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 are now up online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It
 
Quirkos Beta Update!
 
We are busy at Quirkos HQ putting the finishing touches on the Beta version of Quirkos. The Alpha was released 5 months ago, and during that time we've collected feedback from people who've
 
Evaluating feedback
 
We all know the score: you attend a conference, business event, or training workshop, and at the end of the day you get a little form asking you to evaluate your experience. You can rate the
 
Touching Text
 
Presenting Quirkos at the CAQDAS 2014 conference this month was the first major public demonstration of Quirkos, and what we are trying to do. It's fair to say it made quite a splash! But
 
Top-down or bottom-up qualitative coding
 
In framework analysis, sometimes described as a top-down or 'a-priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually
 
Participatory analysis: closing the loop
 
In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people. The movement comes
 
True cross-platform support
 
Another key aim for Quirkos was to have proper multi-platform support. By that, I mean that it doesn't matter if you are using a desktop or laptop running Windows, a Mac, Linux, or a tablet,
 
10 tips for semi-structured qualitative interviewing
 
Many qualitative researchers spend a lot of time interviewing participants, so here are some quick tips to make interviews go as smoothly as possible: before, during and after! 1. Let your
 
Quirkos is coming...
 
Quirkos is intended to be a big step forward for qualitative research. The central idea is to make text analysis so easy, that anyone can do it. That includes people who don't know what
 
An overview of qualitative methods
 
There are a lot of different ways to collect qualitative data, and this article just provides a brief summary of some of the main methods used in qualitative research. Each one is an art in its own
 
Why qualitative research
 
"There are lies, damn lies, and statistics." It's easy to knock statistics for being misleading, or even misused to support spurious findings. In fact, there seems to be a growing backlash at the
 
What is a Qualitative approach
 
The benefit of having tastier satsumas is difficult to quantify: to turn into a numerical, comparable value. This is essentially what qualitative work does: measure the unquantifiable quality of
 
A new Qualitative Research Blog
 
While hosted by Quirkos, the main aim for this blog is to promote the wider use of qualitative research in general. We will link to other blogs and articles (not just academic), have guest bloggers,