Integrating policy analysis into your qualitative research


 

It’s easy to get seduced by the excitement of primary data collection, and plan your qualitative research around methods that give you rich data from face-to-face contact with participants. But some research questions may be better illustrated or even mostly answered by analysis of existing documents.

 

This ‘desk-based’ research often doesn’t seem as fun, but it can provide important wider context that you can’t capture even through direct contact with many relevant participants. Policy analysis in particular is an often overlooked source of important contextual data, especially for social science and societal issues. Now, this may sound boring – who wants to wade through reams of dry government or institutional policy? But not only is there usually a long historical archive of this material available, it can be invaluable for grounding the experiences of respondents in a wider context.

 

Usually, interesting social research questions are (or should be) concerns that are addressed (perhaps inadequately) in existing policy and debate. Since social research tends to focus on problems in society, or with the behaviour or life experiences of groups or individuals, participants in qualitative research will often feel their issues should be addressed by the policy of local or national government, or a relevant institution. Remember that large companies and agencies may have their own internal policy that can be relevant, if you can get access to it.

 

Policy discussed at local, state or national level is usually easy to access in the public record. But it can also be interesting to look at the debate from when the policy was discussed, to see which issues were controversial and how they were addressed. These debates should also be available from national archives such as Hansard (in the UK) or the Office of the Clerk of the House (in the USA). You can also compare policy across countries to get an international perspective, or try to explain differences of policy between cultures.

 

Try to also consider not just officially adopted policy, but proposed policy and reports or proposals from lobbying or special interest groups. It’s often a good way to get valuable data and quotes from different forces acting to change policy in your topic area.

 

But there is also a lot of power in integrating your policy and document analysis with original research. You can cross-reference topics coming out of participant interviews, and see if they are reflected in policy documents. Discourse analysis, or using keyword searches to look for common terms across all your sources, can be revealing.
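
To make this concrete, here’s a minimal sketch of that kind of keyword cross-referencing in Python. The file names are hypothetical, and it assumes your transcripts and policy documents are saved as plain text files:

    # A minimal sketch of counting a keyword across interview transcripts
    # and policy documents (the file names are hypothetical).
    from collections import Counter
    from pathlib import Path
    import re

    sources = ["interview_01.txt", "interview_02.txt", "housing_policy_2015.txt"]
    keyword = "affordable"

    counts = Counter()
    for name in sources:
        text = Path(name).read_text(encoding="utf-8").lower()
        # Whole-word matches only, so 'affordable' doesn't match 'unaffordable'
        counts[name] = len(re.findall(rf"\b{re.escape(keyword)}\b", text))

    for name, n in counts.most_common():
        print(f"{name}: {n} mention(s) of '{keyword}'")

Even a crude count like this can flag which sources are worth re-reading side by side; the text search in your QDA software does the same job interactively.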

 

Looking at how the media represents these issues and the debates over policy can also be interesting. Make sure that you record which outlet an article comes from, as this can be a useful way to compare different representations of events from media groups with their own particular bias or interpretation.

 


There are of course many different approaches to policy analysis that you can take, including quantitative and mixed-method methodologies. While general interpretive qualitative analysis can be revealing, consider also discourse analysis and meta-synthesis. There’s a short overview video on policy document analysis from the Manchester Methods Institute here. The following chapter by Ritchie and Spencer is also a good introduction, and for a full textbook try Narrative Policy Analysis: Theory and Practice by Emery Roe (thanks to Dr Chenail for the suggestion!).

 

Qualitative software like Quirkos can help bring all this data from different sources into one project, allowing you to create a common (or separate) coding framework for your analysis. In Quirkos you can use the source properties to define where a piece of data comes from, and then run queries across all types of source, or just a particular type. While any CAQDAS or QDA software will help you manage document analysis, Quirkos is quick to learn and so lets you focus on your data. You can download a free trial here, and student licences are just US$65.

 

 

7 unique things that make Quirkos awesome



Quirkos is now 3 years old!

To celebrate, we’re taking a break from our regular programming of qualitative method posts to remind everyone why Quirkos is the best qualitative analysis software around...

 

1. All the colours!

Obviously I’m going to start with the most important features first. Some qualitative analysis software restricts you to only 8 colours when customising your themes. Quirkos lets you choose from 16 million. That may sound daft, but once you have a large coding framework, giving similar shades to similar themes really makes coding quicker. Many people find they can identify a colour much faster than they can read a label. You can also easily assign meaning to colours: red for bad things, green for the environment, and so on.

 

2. Interactive coding

It’s the moment I’ve come to love most when doing training workshops: the ‘Ahhh!’ from the audience when they see the bubbles grow for the first time as you drop text on them. And you quickly realise that it is a lot more than a gimmick: having the size of a theme represent the amount of coding lets you see not just that you put the code in the right place, but which topics are emerging most strongly from your coding. It makes me feel a lot closer to my data, and seeing the themes evolve is really engaging.

 


 

3. No Save button

Quirkos saves constantly, after every action, so there is no save button in the interface. I think this initially causes some anxiety in users used to setting up an auto-save, or worrying they will lose data. But eventually, it becomes liberating to just focus on your work. If Quirkos or Windows crashes, or even if you pull the cord out of your computer, your project will be just as you left it when you come back.

 

4. Quick and free to learn.

We designed Quirkos to be simple, with the main features you need to do deep analysis and reading of your data, and no distractions from flashy or complex features. A lot of people come to Quirkos after despairing at the amount of time it takes to learn other software packages. Most people who do qualitative analysis aren’t interested in learning technical software. They just want to focus on their research ideas and the data.

All our training materials are freely available online, even our monthly webinars which (unlike others) we don’t charge for or require registration. Some CAQDAS packages can require a lot of extra training, a cost in terms of time and money that institutions sometimes forget to factor in.

 

5. True cross-platform freedom

Quirkos not only has the same features and interface on Windows and Mac, but is fully supported on Linux as well. And project files are completely compatible, so you can pick up and work on any computer using any operating system. If you have Windows at work and a Macbook at home, no problem. We are the only CAQDAS software to support all these platforms, and unlike Nvivo, we let you go from Mac to Windows (and back) without changing your files.

 

6. Free updates

When I was working with other qualitative software for my post-doc research, we had serious problems when new versions of the software came out. They would introduce new (and terrifying) bugs, require us to buy a new licence, and make our data incompatible with the old version. Since academic organisations aren’t always the most speedy at installing updates, we constantly had issues with collaborators using an older (or newer) version of the software that wasn’t compatible. This frustrated me so much that I have promised it will not happen with Quirkos.


Over the last 3 years we’ve released 6 updated versions of Quirkos, all of them free, backward and forward compatible updates. This means there is no reason for anyone to be stuck using an old version, and even if they haven’t bothered to download the free update, they can still collaborate fully with colleagues using different versions.

 

7. Student licences that don’t expire

In the UK a typical PhD lasts 4 years; in the US the average is 8.2 years. If you are teaching as part of your scholarship, or doing your doctoral studies part time, it can take even longer. That’s why our student licences don’t expire. I don’t know why our competitors sell 1 or 2 year licences to students – it always annoyed me when I was studying. Unless you are doing a masters, you’ll probably have to buy another one halfway through your research. Sure, you can buy at the last minute after you’ve done all your data collection, but that is a bad way to do iterative qualitative analysis.

 

Our student licences are the same price as (or cheaper than) most other one or two year licences, but are yours for life: for your postdoc career and beyond. I don’t want to see people lose access to their data, and it’s no surprise that we sell so many student licences.

 

So try Quirkos for yourself, and see why researchers from more than 120 universities across the world use it to make their qualitative analysis go a bit smoother. We’ve got a one month free trial of the full, unrestricted version of Quirkos for you to download right here (that’s also the longest free trial offered for any CAQDAS package!).

 

Preparing data sources for qualitative analysis


 

Qualitative software used to require text files to be formatted in very specific ways before they could be imported. These days the software is much more capable, and you can import nearly any kind of text data in any kind of formatting, which allows a lot more flexibility.


However, that easy-going nature can let you get away with some pretty lazy habits. You’ll probably find your analysis (and even data collection and transcription) goes a lot smoother if you’ve set a uniform style or template for your data beforehand. This article covers some of the formatting and metadata you might want to get into a consistent form before you start.

 

Part of this should also be a consistent way to record research procedures and your own reflections on the data collection. This can be a little ad hoc, especially when relying on a research diary, but designing a standard post-interview debriefing form for the interviewer at the same time as creating a semi-structured interview guide can make it much easier to compare interviewer reflections across sources.


For example, you could have fields to record how comfortable the interview setting was, whether the participant was nervous about sharing, and whether questions were missed or need follow-up. Having these as separate source property fields allows you to compare sources with similar contexts and see if that had a noticeable effect on the participants’ data.

 

For transcribed interviews, have a standard format for questions and answers, and make sure that it’s clear who is who. Focus groups demand particular attention to formatting, as some software can identify responses from each participant in a group session when the transcript is laid out in a particular way. Unfortunately Quirkos doesn’t support this at the moment, but with focus group data it is still important to make sure that each transcription is formatted in the same way, and that the identifiers for each participant are unique. So for example, if you are using initials for each respondent, such as:


JT: I’m not sure about that statement.
FA: It doesn’t really speak to me


Make sure that there aren’t people with the same initials in other sessions, and consider using unique participant numbers, which will also help better anonymise the data.
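
If you want to check this mechanically, a rough sketch like the one below can flag duplicate initials across sessions before you start coding. It assumes the ‘ID: utterance’ format shown above, and the file names are hypothetical:

    # A rough check that speaker identifiers are unique across focus group
    # sessions, assuming each turn starts with initials and a colon ("JT: ...").
    import re
    from pathlib import Path

    sessions = ["focus_group_1.txt", "focus_group_2.txt"]  # hypothetical files
    seen = {}  # speaker ID -> first session it appeared in

    for session in sessions:
        for line in Path(session).read_text(encoding="utf-8").splitlines():
            match = re.match(r"^([A-Z]{2,3}):", line)
            if not match:
                continue
            speaker = match.group(1)
            if speaker in seen and seen[speaker] != session:
                print(f"Warning: '{speaker}' appears in {seen[speaker]} and {session}")
            else:
                seen.setdefault(speaker, session)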


A formatting standard is especially important if you have a team project where there are multiple interviewers and transcribers. Make sure they are using the same formatting for pauses, emphasis and identifying speakers. The guide to transcription in a previous blog post covers some of the things you will want to standardise. Some people prefer to read through the transcripts checking for typos and inaccuracies, possibly even while listening to the audio recording of the session. It can be tempting to assume you will pick these up when reading through the data for analysis, but you may find that correcting typos breaks your train of thought too often.


Also consider if your sources will need page, paragraph or sentence numbers in the transcript, and how these will be displayed in your software of choice. Not all software supports the display of line/paragraph numbers, and it is getting increasingly rare to use them to reference sources, since text search on a computer is so fast.


You’ll see a few guides that suggest preparing for your analysis by using a database or spreadsheet to keep track of your participant data. This can help manage who has been interviewed, set dates for interviews, note the return of consent forms, and keep contact and demographic information. However, all CAQDAS software (not just Quirkos) can store this kind of information about data sources in the project file with the data. It can actually be beneficial to set up your project beforehand in QDA software, and use it to document your data and even keep your research journal before you have collected the data.

 

Doing this in advance also makes sure you plan to collect all the extra data you will need on your sources, rather than having to go back and ask someone’s occupation after the interview. There is more detail in this article on data collection and preparation techniques.

 



As we’ve mentioned before, qualitative analysis software can also be used for literature reviews, or even just keeping relevant journal articles and documents together and taggable. However, you can even go further and keep your participant data in the project file, saving time entering the data again once it is collated.


Finally, being well prepared will help at the end of your research as well. Having a consistent style defined before you start data entry and transcription can also make sure that any quotes you use in write-ups and outputs look the same, saving you time tidying up before publication.


If you have any extra tips or tricks on preparing data for analysis, please share them on our Twitter feed @quirkossoftware and we will add them to the debate. And don’t forget to download a free trial of Quirkos, or watch a quick overview video to see how it helps you turn well prepared data into well prepared qualitative analysis.

 

 

Balance and rigour in qualitative analysis frameworks

image by https://www.flickr.com/photos/harmishhk/8642273025

 

Training researchers to use qualitative software and helping people who get stuck with Quirkos, I get to see a lot of people’s coding frameworks. Most of the time they are great; often they are fine but have too many codes; but sometimes they just seem to lack a little balance.


In good quality quantitative research, you should see that the researchers have adopted a ‘null hypothesis’ before starting the analysis: in other words, an assumption that there is nothing significant in the data. Statisticians play a little game where they declare that there should be no correlation between the variables, and try to prove there is nothing there. Only if they try their hardest and still can’t convince themselves there is no relationship are they allowed to go on and conclude that there may be something in the data. This is called rejecting the null hypothesis, and it can temper the excitement of researchers with big data sets who are over-enthusiastic for career-making discoveries.
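
For readers less familiar with the quantitative side, here is a toy illustration of that procedure using SciPy. The numbers are invented purely to show the mechanics:

    # Toy illustration of null hypothesis testing: assume NO relationship
    # between two variables, and only reject that assumption if the data
    # makes it very unlikely. The numbers are invented for illustration.
    from scipy.stats import pearsonr

    hours_studied = [2, 4, 5, 7, 8, 10, 11, 13]
    exam_score = [52, 55, 61, 64, 70, 72, 79, 85]

    r, p_value = pearsonr(hours_studied, exam_score)
    print(f"correlation r = {r:.2f}, p = {p_value:.4f}")

    # Conventionally, only if p < 0.05 do we reject the null hypothesis.
    if p_value < 0.05:
        print("Reject the null: there may be a real relationship.")
    else:
        print("Cannot reject the null: no evidence of a relationship.")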


Unfortunately, it’s rare to see this approach described in published qualitative analysis. But there’s no reason that a similar approach can’t be used in qualitative research to provide some balance against the researcher’s interpretations and prejudices. Most of the time the researcher will have their own preconception of what they are going to find (or would like to find) in the data, and may even have a particular agenda they are trying to prove. Whether the methodology is quantitative or qualitative, this is not a good basis for conducting impartial research. (Read more about the differences between qualitative and quantitative approaches.)

 

Steps like reflexivity statements, and considering unconscious biases can help improve the neutrality of the research, but it’s something to consider closely during the analysis process itself. Even the coding framework you use to tag and analyse your qualitative data can lead to certain quotes being drawn from the data more than others. 


It’s like trying to balance standing in the middle of a seesaw. If you stand at one end, it’s easy to keep your balance, as you are simply rooted to the ground on one side. Standing in the middle is the only place you are challenged, where you can be swayed by gusts of wind from one side or the other. Before starting the analysis, researchers should ideally be in this zen-like state, ready to let the data tell them the story, rather than telling their own story through selective interpretation of the data.


When reading qualitative data, try to hold in your head the opposite view to your research hypothesis. Maybe people love being unemployed, and got rich because of it! A finding should really shout out from the data, regardless of bias or cherry picking.

 
When you have created a coding framework, have a look through at the tone and coverage. Are there areas which might show bias to one side of an argument, or a particular interpretation? If you have a code for ‘hates homework’, do you have a code for ‘loves homework’? Are you actively looking for contrary evidence? Usually I try to find a counter-example to every quote I might use in a project report. So if I want to show a quote where someone says ‘Walking in the park makes me feel healthy and alive’, I’ll see if there is someone else saying ‘The park makes me nervous and scared’. If you can’t find one, or at least if the people with the dissenting view are in a minority, you might be justified in accepting a dominant hypothesis.

 

Your codes should try to reflect this, and in the same way that you shouldn’t ask leading questions (“Does your doctor make you feel terrible?”), be careful about leading coding topics with language like “Terrible doctors”. There can be a confirmation bias, where you start looking too hard for text to match the theme. In some types of analysis, like discourse or in-vivo coding, reflecting the emotive language your participants use is important. But make sure it is their language, and not yours, that is reflected in strongly worded theme titles.

 

All qualitative software (Quirkos included) allows you to have a longer description of a theme as well as the short title. So make sure you use it to detail what belongs in a theme, as if you were describing it to someone else who would do the coding. When you are going through and coding your data, ask yourself: “Would someone else code this in the same way?”

 


 

Even when topics are neutral (or balanced with alternatives) you should also make sure that the text you categorise into these fields is fair. If you are glossing over opinions from people who don’t have a problem with their doctor to focus on the shocking allegations, you are giving primacy to the bad experiences, perhaps without recognising that the majority were good.

 

However, qualitative analysis is not a counting game. One person in your sample with a differing opinion is a significant event to be discussed and explained, not an outlier to be ignored. When presenting the results of qualitative data, the reader has to put a great deal of trust in how the researcher has interpreted the data, and if they are only showing one view point or interpretation they can come across as having a personal bias.

 

So before you write up your research, step back and look again at your coding framework. Does it look like a fair reflection of the data? Is the data you’ve coded into those categories reflective? Would someone else have interpreted and described it in the same way? These questions can really help improve the impartiality, rigour and balance of your qualitative research.

 

A qualitative software tool like Quirkos can help make a balanced framework, because it makes it much easier than pen and Post-It notes to go back and change themes and recode data. Download a free trial and see how it works, and how software kept simple can help you focus on your qualitative data.

 

 

Word clouds and word frequency analysis in qualitative data


 

What’s this blog post about? Well, it’s visualised in the graphic above!

 

In the latest update for Quirkos we have added a new and much requested feature: word clouds! I’m sure you’ve seen these pretty tools before: they show a jumbled display of all the words in a source of text, where the size of each word is proportional to the number of times it occurs. There are several free online tools that will generate word clouds for you, Wordle.net being one of the first and most popular.

 

These visualisations are fun, and can be a quick way to give an overview of what your respondents are talking about. They also can reveal some surprises in the data that prompt further investigation. However, there are also some limitations to tools based on word frequency analysis, and these tend to be the reason that you rarely see word clouds used in academic papers. They are a nice start, but no replacement for good, deep qualitative analysis!

 

We've put together some tips for making sure your word clouds present meaningful information, and also some cautions about how they work and their limitations.

 


1. Tweak your stop list!

As these tools count every word in the data, the results would normally be dominated by the basic words that occur most often: ‘the’, ‘of’, ‘and’ and similar small, usually meaningless words. To make sure these don’t swamp the data, most tools have a list of ‘stop’ words which are ignored when displaying the word cloud. That way, more interesting words should be the largest. However, there is always a great deal of variation in what these common words are. They differ greatly between spoken and written language, for example (just think how often people say ‘like’ or ‘um’ in speech but not in a typed answer). Each language will also need a corresponding stop list!

 

So Quirkos (and many other tools) offers ways to add or remove words from the stop list when you generate a word cloud. By default, Quirkos takes the 50 most frequent words from the spoken and written British National Corpus, but 50 is actually a very small stop list. You will still get very common words like ‘think’ and ‘she’, which might be useful to certain projects looking at expressions of opinion or depictions of gender. So it’s a good idea to look at the word cloud, and remove words that aren’t important to you by adding them to the stop list. Just make sure you record what has been removed, and your justification for excluding it, when writing up!
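
To show the mechanics, here is a minimal word frequency count with a customisable stop list, sketched in Python. The stop list below is a tiny illustrative sample, not the British National Corpus list that Quirkos uses:

    # Minimal word frequency count with a customisable stop list.
    # The stop list here is a tiny illustrative sample only.
    from collections import Counter
    import re

    text = """I think the interview went well, and I think
    she was happy to, um, share her experiences."""

    stop_words = {"the", "of", "and", "a", "to", "i", "um"}  # extend per project

    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stop_words)

    print(counts.most_common(5))
    # Note that 'think' and 'she' survive this default list -- decide per
    # project whether they matter, and record anything you add to the list.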

 


2. There is no weighting or significance

Since word frequency tools just count the occurrence of each word (one point for each utterance), they really only show one thing: how often a word was said. This sounds obvious, but it gives no indication of how important the use of a word was in each case. So if one person says ‘it was a little scary’, another says ‘it was horrifyingly scary’ and another ‘it was not scary’, the corresponding word count has no context or weight. This can look deceptive in something like a word cloud, where the above examples count the negation (not scary) and the minor mention (little scary) in the same way, and ‘scary’ could look like a significant trend. So remember to always go back and read the data carefully to understand why specific words are being used.

 


3. Derivations don't get counted together

Remember that most word cloud tools are not really counting words, only combinations of letters. So ‘fish’, ‘fishy’ and ‘fishes’ will all get counted as separate words (as will any typos or misspellings). This might not sound important, but if you are trying to draw conclusions just from a word cloud, you could miss the importance of fish to your participants because the different derivations weren’t put together. Yet sometimes these distinctions in vocabulary are important – obviously ‘fishy’ can have a negative connotation, in terms of something feeling off or smelling bad – and you don’t want to put this in the same category as things that swim. So a researcher is still needed to craft these visualisations, and to make decisions about what should be shown and grouped. Speaking of which...
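
One partial workaround is stemming, which reduces words to a root form before counting. Here is a rough sketch using NLTK’s Porter stemmer (it assumes NLTK is installed). Note that the stems it produces are mechanical approximations, and the grouping decisions still belong to the researcher:

    # Sketch of grouping word counts by stem, so that derivations like
    # 'fish'/'fishes'/'fished' can be counted together. Assumes NLTK is
    # installed (pip install nltk).
    from collections import Counter
    from nltk.stem import PorterStemmer

    words = ["fish", "fishes", "fishing", "fishy", "fished"]
    stemmer = PorterStemmer()

    counts = Counter()
    for w in words:
        counts[stemmer.stem(w)] += 1

    print(counts)
    # Stemming is mechanical: whether 'fishy' (as in 'suspicious') really
    # belongs with 'fish' is a judgement only the researcher can make.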

 


4. They won't amalgamate different terms used by participants

It's fascinating how different people have their own terms and language to describe the same thing, and illuminating this can bring colour to qualitative data, or show important subtle differences that matter for IPA or discourse analysis. But when doing any kind of word count analysis, this richness is a problem, as the words are counted separately. None of the terms ‘shiny’, ‘bright’ or ‘blinding’ may show up often on its own, but grouped together they could reveal a significant theme. Whether you want to treat certain synonyms in the same way is up to the researcher, but in a word cloud these distinctions can be masked.

 

Also, don’t forget that unless told otherwise (or when the words are hyphenated), word clouds won’t pick up multi-word phrases like ‘word cloud’ or ‘hot topic’.
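
If phrases do matter to your analysis, counting bigrams (adjacent word pairs) is one simple extension, sketched here:

    # Minimal bigram count, so phrases like 'hot topic' can be spotted.
    from collections import Counter
    import re

    text = "the word cloud made the hot topic obvious, a hot topic indeed"
    words = re.findall(r"[a-z']+", text.lower())

    bigrams = Counter(zip(words, words[1:]))
    print(bigrams.most_common(3))  # ('hot', 'topic') appears twice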

 

 

5. Don’t focus on just the large trends


Word clouds tend to make the big language trends very obvious, but this is usually only part of the story. Just as important are words that aren’t there – things you thought would come up, topics people might be hesitant to speak about. A series of word clouds can be a good way to show changes in popular themes over time, like what terms are being used in political speeches or in newspaper headlines. In these cases words dropping out of use are probably just as interesting as the new trends.

 


 


6. This isn't qualitative analysis

At best, this is quantification of qualitative data, presenting only counts. Since word frequency tools just count sequences of letters, not even words and their meanings, they are a basic numerical supplement to deep qualitative interpretation (McNaught and Lam 2010). And as with all statistical tools, they are easy to misapply and misinterpret. You need to know what is being counted, what is being missed (see above), and before drawing any conclusions, make sure you understand the underlying data and how it was collected. However…

 

 

7. Word clouds work best as summaries or discussion pieces


If you need to get across quickly what’s coming out of your research, showing the lexicon of your data in word clouds can be a fun starting point. When they show a clear or surprising trend, the ubiquity of word clouds and most audiences’ familiarity with them make these visualisations engaging and insightful. They should also start triggering questions – why does this phrase appear more often? These can be good points from which to start guiding your audience through the story of your data, and creating interesting discussions.

 

As a final point, word clouds often carry a level of authority that you need to be careful about. Because the counting of words is seen as non-interpretive and non-subjective, some people may feel they ‘trust’ what is shown by them more than the verbose interpretation of the full rich data. Hopefully with the guidance above, you can persuade your audience that while colourful, word clouds are only a one-dimensional dive into the data. Knowing your data and reading the nuance is what separates your analysis from a one-click feature, and turns it into a well communicated ‘aha’ moment for your field.

 

 

If you'd like to play with word clouds, why not download a free trial of Quirkos? It also has raw word frequency data, and an easy to use interface to manage, code and explore your qualitative data.

 

 

 

 

Quirkos v1.5 is here


 

We are happy to announce the immediate availability of Quirkos version 1.5! As always, this update is a free upgrade for everyone who has ever bought a licence of Quirkos, so download it now and enjoy the new features and improvements.

 

Here’s a summary of the main improvements in this release:

 

Project Merge


You can now bring together multiple projects in Quirkos, merging sources, Quirks and coding from many authors at once. This makes teamwork much easier, and allows you to bring in coding frameworks or sources from other projects.

 

Word Frequency tools including:
 

Word clouds! You can now generate customisable word clouds (click on the Report button). Change the shape, word size, rotation and cut-off for minimum words, or choose which sources to include. There is also a default ‘stop list’ (a, the, of, and…) of the 50 most frequent words from the British National Corpus, but this can be completely customised. Save the word clouds to a standard image file, or as an interactive webpage.
A complete word frequency list of the words occurring across all the sources in your project is also generated in this view.

  • Improved Tree view – now shows longer titles, descriptions and fits more Quirks on the screen
  • Tree view now has complete duplicate / merge options
  • Query results by source name – ability to see results from single or multiple sources
  • Query results now show number of quotes returned
  • Query view now has ‘Copy All’ option
  • Improved CSV spreadsheet export – now clearly shows Source Title, and Quirk Name
  • Merge functions now more logical – default behaviour changed so that you select the Quirk you want to be absorbed into a second one
  • Can now merge parent and child Quirks to all levels
  • Hovering mouse over Quirks now shows description, and coding summary across sources
  • Reports now generate MUCH faster, no more crashes for projects with hundreds of Quirks. Image generation of hierarchy and overlap views now off by default, turn on in Project Settings if needed
  • Improved overlap view, with rings indicating number of overlaps
  • Neater pop-up password entry for existing projects
  • Copy and pasting quotes to external programmes now shows source title after each quote
  • Individually imported sources now take file name as source name by default

 

Bug fixes

  • Fixed a bug where Quirks would suddenly grow huge!
  • Fixed a rare crash on Windows when rearranging / merging Quirks in tree view
  • Fixed a rare bug where a Quirk was invisible after being re-arranged
  • Fixed an even rarer bug where deleting a source would stop new coding
  • Save As project now opens the new file after saving, and no longer shows blank screen
  • Reports can now overwrite if saved to the same folder as an earlier export
  • Upgrading to new versions on Windows only creates a backup of the last version, not all previous versions, lots of space savings. (It’s safe to delete these old versions once you are happy with the latest one)

 

Watch the new features demonstrated in the video below:

 

 

There are a few other minor tweaks and improvements, so we do recommend you update straight away. Everyone is eligible, and once again there are no changes to project files, so you can keep going with your work without missing a beat. Do let us know if you have any feedback or suggestions (support@quirkos.com)

 


 

If you've not tried Quirkos before, it's a perfect time to get started. Just download the full version and you'll get a whole month to play with it for free!

 

An introduction to Interpretative Phenomenological Analysis


 

Interpretative Phenomenological Analysis (IPA) is an increasingly popular approach to qualitative inquiry and essentially an attempt to understand how participants experience and make meaning of their world. Although not to be confused with the now ubiquitous style of beer with the same initials (India Pale Ale), Interpretative Phenomenological Analysis is similarly accused of being too frequently and imperfectly brewed (Hefferon and Gil-Rodriguez 2011).



While you will often see it described as a ‘method’ or even an analytical approach, I believe it is better described as something akin to an epistemology, with its own philosophical concepts for explaining the world. Like grounded theory, it has also grown into a bounded approach in its own right, with a certain group of methodologies and analytical techniques which are assumed to be the ‘right’ way of doing IPA.



At its heart, interpretative phenomenological analysis is an approach to examining data that tries to see what is important to the participant, how they interpret and view their own lives and experiences. This in itself is not ground-breaking in qualitative studies, however the approach originally grew from psychology, where a distinct psychological interpretation of how the participant perceives their experiences was often applied. So note that while IPA doesn’t stand for Interpretative Psychological Analysis, it could well do.



To understand the rationale for this approach, it is necessary to engage with some of the philosophical underpinnings, and understand two concepts: phenomenology, and hermeneutics. You could boil this down such that:

   1. Things happen (phenomenology)

   2. We interpret this into something that makes sense to us (hermeneutics - from the Greek word for translate)



Building on the shoulders of the Greek thinkers, two 20th century philosophers are often invoked in describing IPA: Husserl and Heidegger. From Husserl we get the concept of all interpretation coming from objects in an external world, and thus the need for ‘bracketing’ our internal assumptions to differentiate what comes from, or can describe, our consciousness. The focus here is on the individual processes of perception and awareness (Larkin 2013). Heidegger introduces the concept of ‘Dasein’ which means ‘there-being’ in German: we are always embedded and engaged in the world. This asks wider questions of what existence means (existentialism) and how we draw meaning to the world.



I’m not going to pretend I’ve read ‘Being and Time‘ or ‘Ideas’ so don’t take my third hand interpretations for granted. However, I always recommend students read Nausea by Sartre, because it is a wonderful novel which is as much about procrastination as it is about existentialism and the perception of objects. It’s also genuinely funny, and you can find Sartre mocking himself and his philosophy with surrealist lines like: “I do not think, therefore I am a moustache”.



Applying all this philosophy to research, we consider looking for significant events in the lives of the people we are studying, and trying to infer through their language how they interpret and make meaning of these events. However, IPA also takes explicit notice of the reflexivity arguments we have discussed before: we can’t dis-embody ourselves (as interpreters) from our own world. Thus, it is important to understand and ‘bracket’ our own assumptions about the world (which are based on our interpretation of phenomena) from those of the respondent, and IPA is sometimes described as a ‘double hermeneutic’ of both researcher and participant.



These concepts do not have to lead you down one particular methodological path, but in practice projects intending to use IPA should generally have small sample sizes (perhaps only a few cases), be theoretically open and exploratory rather than testing existing hypotheses, and have a focus on experience. So a good example research question might be ‘How do people with disabilities experience using doctors’ surgeries?’ rather than ‘Satisfaction with a new access ramp in a GP practice’. In the former example you would also be interested in how participants frame their struggles with access – does it make them feel limited? Angry that they are excluded?



So IPA tends to lend itself to very small, purposive samples of people who share a certain experience. This is especially because it usually implies very close reading of the data, looking in great detail at how people describe their experiences – not just a line-by-line reading, but sometimes also reading between the lines. Focus groups, interviews and participant diaries are therefore frequently applied methods. Hefferon and Gil-Rodriguez (2011) note that students often try to sample too many people and ask too many questions: IPA should be tightly focused on a small number of relevant experiences.



When it comes to interpretation and analysis, a bottom-up, inductive coding approach is often taken. While this should not be confused with the theory building aims of grounded theory, the researcher should similarly try and park or bracket their own pre-existing theories, and let the participant’s data suggest the themes. Thematic analysis is usually applied in an iterative approach where many initial themes are created, and gradually grouped and refined, within and across sources.



Usually this entails line-by-line coding, where each sentence from the transcript is given a short summary or theme – essentially a unique code for every line, focusing on the phenomena being discussed (Larkin, Watts and Clifton 2006). Later comes grouping and creating a structure from the themes, either by iterating the process and coding the descriptive themes to a higher level, or by taking a fresh read through the data.



A lot of qualitative software packages can struggle with this kind of approach, as they are usually designed to manage a relatively small number of themes, rather than one for each line in every source. Quirkos has definitely struggled to work well for this type of analysis, and although we have some small tweaks in the imminent release (v1.5) that will make this bearable for users, it will not be until the full memo features are included in v1.6 that this will really be satisfactory. However, it seems that most users of line-by-line coding and this method of managing IPA use spreadsheet software (so they can have columns for the transcript, summary, subordinate and later superordinate themes) or a word-processor utilising the comment features.

 

However you approach the analysis, the focus should be on the participant’s own interpretation and meaning of their experiences, and you should be able to craft a story for the reader when writing up that connects the themes you have identified to the way the participant describes the phenomenon of interest.



I’m not going to go much into the limitations of the approach here, suffice it to say that you are obviously limited to understanding participants’ meanings of the world through something like the one-dimensional transcript of an interview. What they are willing to share, and how they articulate it, may not be the complete picture, and other approaches such as discourse analysis may be revealing. Also, make sure that it really is participants’ understandings of their experiences you want to examine. IPA posits a very deep ‘walk two moons in their moccasins’ approach that is not right for broader research questions, for example when wanting to contrast the opinions of a more diverse sample. Brew your IPA right: know what you want to make, use the right ingredients, have patience in the maturation process, and keep tasting as you go along.



As usual, I want to caution the reader against taking anything from my crude summary of IPA as gospel, and suggest that a proper reading of the major texts in the field is essential before deciding if this is the right approach for you and your research. I have assembled a small list of references below that should serve as a primer, but there is much to read, and as always with qualitative epistemologies, a great deal of variety of opinion in discourse, theory and application!

 

 


Finally, don't forget to give Quirkos a try, and see if it can help with your qualitative analysis. We think it's the easiest, most affordable qualitative software out there, so download a one month free trial and see for yourself!



References

Biggerstaff, D. L. & Thompson, A. R. (2008). Interpretative Phenomenological Analysis (IPA): A Qualitative Methodology of Choice in Healthcare Research. Qualitative Research in Psychology, 5, 173–183.
http://wrap.warwick.ac.uk/3488/1/WRAP_Biggrstaff_QRP_submission_revised_final_version_WRAP_doc.pdf

Hefferon, K. & Gil-Rodriguez, E. (2011). Methods: Interpretative phenomenological analysis. The Psychologist, 24, 756–759.

Heidegger, M. (1962). Being and Time (J. Macquarrie & E. Robinson, Trans.). Oxford, UK: Blackwell. (Original work published 1927)

Husserl, E. (1931). Ideas: A General Introduction to Pure Phenomenology (W. R. Boyce Gibson, Trans.). London, UK: Allen & Unwin.

IPARG (The Interpretative Phenomenological Analysis Research Group) at Birkbeck college http://www.bbk.ac.uk/psychology/ipa

Larkin, M., Watts, S. & Clifton, E. (2006). Giving voice and making sense in interpretative phenomenological analysis. Qualitative Research in Psychology, 3, 102–120.

Larkin, M., 2013, Interpretative phenomenological analysis - introduction, [accessed online] https://prezi.com/dnprvc2nohjt/interpretative-phenomenological-analysis-introduction/

Smith, J., Jarman, M. & Osborn, M. (1999). Doing interpretative phenomenological analysis. In M. Murray & K. Chamberlain (Eds.), Qualitative Health Psychology. London: Sage.

Smith, J., Flowers, P. & Larkin, M. (2009). Interpretative Phenomenological Analysis: Theory, Method and Research. London: Sage.
https://us.sagepub.com/sites/default/files/upm-binaries/26759_01_Smith_et_al_Ch_01.pdf

 

 

Archaeologies of coding qualitative data


 

In the last blog post I referenced a workshop session at the International Congress of Qualitative Inquiry entitled ‘The Archaeology of Coding’. Personally, I interpreted the archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel was around revisiting and iterating coding within a single analytical attempt, and this is very important.


In qualitative analysis it is rarely sufficient to read through and code your data only once. An iterative and cyclical process is preferable, often building on and reconsidering previous rounds of coding to get to higher levels of interpretation. This is one of the ways to interpret an ‘archaeology’ of coding – like Jerusalem, the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.


But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I allude to this more in the blog post on ‘Top down or bottom up’ coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then breaking these up into more specific and nuanced insight into the data.

 

Both of these iterative techniques are envisaged as part of a single (if long) process of coding. But what about revisiting older research projects? What if you get the opportunity to go back and re-examine old qualitative data and analysis?


Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.


A good example is our teaching dataset, collected after the referendum on Scottish independence a few years ago. This looked at how the debate had influenced voters’ interpretations of the different political parties, and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data in retrospect of these events would be very interesting. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, the data would not be up-to-date enough to show the ‘referendum fatigue’ that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain variance in voting patterns in the changing winds of politics and policy in Scotland.

 

While the research questions and analysis framework devised for the original research project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, and higher level themes added, such as the extent to which people perceive a necessity for referendums, or the value of remaining part of the EU (which was a big question if Scotland became independent). Actually, if this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.

 

Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even from the same participants to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and while difficult to design specifically for this purpose (Calman, Brunton, Molassiotis 2013), can be retroactively extended in some situations.


And of course, this is the real power of archaeology: when it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings, or interpretations of qualitative data. So there can be great, and often unexpected value in revisiting some of your old data. For many people it’s something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick to learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?

 

Against Entomologies of Qualitative Coding

Entomologies of qualitative coding - image from Lisa Williams https://www.flickr.com/photos/pixellou/5960183942/in/photostream/


I was recently privileged to chair a session at ICQI 2017 entitled “The Archaeology of Coding”. It had a fantastic panel of speakers, including Charles Vanover, Paul Mihas, Kathy Charmaz and Johnny Saldaña, each giving their own take on the topic. I’m going to write about my own interpretation of qualitative coding archaeologies in the next blog post, but for now I wanted to cover an important common issue that all the presenters raised: coding is never finished.


In my summary I described this as being like the river in the novel Siddhartha by Herman Hesse: ‘coding is never still’. It should constantly change and evolve, and recoil from attempts to label it as ‘done’ or ‘finished’. Heraclitus said the same thing, “You cannot step twice into the same rivers” for they constantly change and shift (as do we). When we come back to revisit our coding, and even during the process of coding, change is part of the process.


I keep coming back to the image of butterflies in a museum display case: dead, pinned to the board with a neatly assigned label of the genus. It’s tempting to approach qualitative coding with this entomologist’s approach: creating seemingly definitive and static codes that describe one characteristic of the data.


Yet this taxonomy can create a tension, lulling you into feeling that some codes (and frameworks) are fixed and complete, and don’t need revision or amendment. This might be true, but it usually isn’t! If you are using some type of open-ended coding or grounded theory approach, creating a static code can be beguiling, and interpreted as showing progress. But instead, try to see every code as a place-holder for a better category or description – don’t lose the ability for the data to surprise you, and resist the temptation to force quotes into narrow categories. Assume that you are never finished with coding.


Unless you are using a very strict interpretation of framework analysis, your first attempt at coding will probably change, evolve as you go through different sources, and take you to a place where you want to try another approach. And your attempts at creating a qualitative classification and coding system might just end up being wrong.


Even in biology, classification attempts are complicated. While the public is still familiar with the traditional ‘animal kingdom’ groupings, attempts to create a taxonomy around the ‘tree of life’ common descent model have now been succeeded by the modern ‘cladistic’ approach, based on the common history and derived characteristics of a species. And these approaches also have limitations, since they are so complex and subjective (just like qualitative analysis!).

 

For example, if you use the NCBI Taxonomy browser you will see dozens of entries in square brackets. These are organisms currently recognised as misclassified: species placed in the wrong genus. And that doesn’t even include the cases where one species is found, on closer study, to be many unique and significantly separate species. This has even been found to be the case for the common ‘medicinal’ leech!

 

Trying to turn the endless forms most beautiful of the animal ‘kingdoms’ into neat categories is complex, even when just looking at appearance. And these taxonomic groupings tell us little of the diverse range of behaviour and life behind the dead pinned insects.


In a similar way, when we code and analyse qualitative data, we are attempting to listen to the voices of our respondents, and turn the rich multitude of lives and experiences into a few key categories that rise up to us. We need to recognise the reductive nature of this practice, and keep coming back to the detailed rich data behind it. In a way, this is like the difference between knowing the Latin name for a species of butterfly, and knowing how it flies, its favourite flowers, and all the details that actually make it unique, not just a name or number.

 

 

In Siddhartha, the central character finds nirvana listening to the chaotic, blended sound of a river, representing the lives and goals of all the people in his life and the world.


“The river, which consisted of him and his loved ones and of all people, he had ever seen, all of these waves and waters were hurrying, suffering, towards goals, many goals, the waterfall, the lake, the rapids, the sea, and all goals were reached, and every goal was followed by a new one, and the water turned into vapour and rose to the sky, turned into rain and poured down from the sky, turned into a source, a stream, a river, headed forward once again”


Like the river, qualitative analysis can be a circle, with each iteration and reading different from the last, building on the previous work, but always listening to the data, never quick to judge or categorise. Until we have reached this analytical nirvana, it is difficult to let go of our data and feel that it is complete. This complex, turbulent flow of information defies our attempts to neatly categorise and label it, even as the researcher’s quest for neatness, and for the truth beneath our subjectivity, demands a single answer and categorisation scheme. But, just like taxonomy, there may never be a state when categorisation is complete, in a single or multiple interpretation. New discoveries, or new context, can change it all.


We, the researchers, are a dynamic and fallible part of that process – we interpret, we miscategorise, we impose bias, we get tired and lose concentration. When we are lazy and quick, we take the comfort of labels and boxes, lulled into conformity by the seductive ease of software and coloured markers. But when we become good qualitative researchers – when we are self-critical and self-reflexive, finally learning to fully listen – then we achieve research nirvana:
 

“Siddhartha listened. He was now nothing but a listener, completely concentrated on listening, completely empty, he felt, that he had now finished learning to listen. Often before, he had heard all this, these many voices in the river, today it sounded new. Already, he could no longer tell the many voices apart, not the happy ones from the weeping ones, not the ones of children from those of men, they all belonged together”

 

Download a free trial of Quirkos today and challenge your qualitative coding!

 

 

 

Quirkos vs Nvivo: Differences and Similarities

I’m often asked ‘How does Quirkos compare to Nvivo?’. Nvivo is by far the largest player in the qualitative software field, and is the product most researchers are familiar with. So when looking at alternatives like Quirkos (but also Dedoose, ATLAS.ti, MAXQDA, Transana and many others), people want to know what’s different!

 

In a nutshell, Quirkos has far fewer features than Nvivo, but wraps them up in an easier to use package. So Quirkos does not have support for integrated multimedia, Twitter analysis, quantitative analysis, memos, or hypothesis mapping and a dozen other features. For large projects with thousands of sources, those using multimedia data or requiring powerful statistical analysis, the Pro and Plus versions of Nvivo will be much more suitable.


Our focus with Quirkos has been on providing simple tools for exploring qualitative data that are flexible and easy to use. This means that people can get up and running quicker in Quirkos, and we hear that a lot of people who are turned off by the intimidating interface of Nvivo find Quirkos easier to understand. But the basics of coding and analysing qualitative data are the same.


In Quirkos, you can create and group themes (called Nodes in Nvivo), and use drag and drop to attach sections of text to them. You can perform code and retrieve functions by double clicking on the theme to see text coded to that node. And you can also generate reports of your coded data, with lots of details about your project.


Like Nvivo, Quirkos can also handle all the common text formats, such as PDFs, Word files and plain text files, and you can copy and paste from any other source, like web pages. Quirkos also has tools to import survey data, which is not something supported in the basic version of Nvivo.


While Quirkos doesn’t have ‘matrix coding’ in the same way as Nvivo, we do have side-by-side comparison views, where you can use any demographic or quantitative data about your sources to do powerful sub-set analysis. A lot of people find this more interactive, and we try and minimise the steps and clicks between you and your data.


Although Quirkos doesn’t have any dedicated tools for quantitative analysis, our spreadsheet export allows you to bring data into Excel, SPSS or R, where you have much more control over the statistical models you can run than Nvivo or other mixed-methods tools allow.
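
As an illustration, here’s a hypothetical sketch of pulling a CSV export into pandas for a quick cross-tabulation. The file name is invented, and the column names simply follow the Source Title and Quirk Name fields mentioned in our v1.5 export notes – check the actual headers of your own export:

    # Hypothetical sketch: loading a Quirkos CSV export into pandas for
    # further quantitative work. File and column names are illustrative;
    # check the real headers in your own export.
    import pandas as pd

    df = pd.read_csv("project_export.csv")

    # e.g. how many coded quotes fall under each theme, per source
    summary = df.groupby(["Source Title", "Quirk Name"]).size().unstack(fill_value=0)
    print(summary)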

 

However, there are also features in Quirkos that Nvivo doesn’t have at the moment. The most popular of these is the Word export function. This creates a standard Word file of your complete transcripts, with your coding shown as colour-coded highlights. It’s just like using pen and highlighter, but you can print, edit and share it with anyone who can open a Word file.


Quirkos also saves constantly, unlike Nvivo, which has to be set up to save at a certain time interval. This means that even in a crash you don’t lose any work, something I know people have had problems with in Nvivo.


Another important difference for some people is that Quirkos is the same on Windows and Mac. With Nvivo, the Windows and Mac versions have different interfaces, features and file formats. This makes it very difficult to switch between the versions, or collaborate with people on a different platform. We also never charge for our training sessions, and all our online support materials are free to download from our website.


And we haven’t even mentioned the thing people love most about Quirkos – the clear visual interface! With your themes represented as colourful, dynamic bubbles, you are always hooked into your data, and have the flexibility to play, explore and drill down into it.


Of course, it’s best to get some impartial comparisons as well, so you can get reviews from the University of Surrey CAQDAS network here: https://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing/


But the best way to decide is for yourself, since your style of working and learning, and what you want to do with the software will always be different. Quirkos won’t always be the best fit for you, and for a lot of people sticking with Nvivo will provide an easier path. And for new users, learning the basics of qualitative analysis in Quirkos will be a great first step, and make transitioning to a more complex package like Nvivo easier in the future. But download our free trial (ours lasts for a whole month, not just 14 days!) and let us know if you have any questions!