Integrating policy analysis into your qualitative research


 

It’s easy to get seduced by the excitement of primary data collection, and plan your qualitative research around methods that give you rich data from face-to-face contact with participants. But some research questions may be better illustrated or even mostly answered by analysis of existing documents.

 

This ‘desk-based’ research often doesn’t seem as fun, but it can provide important wider context that you can’t capture even through direct contact with many relevant participants. Policy analysis in particular is an often overlooked source of important contextual data, especially for social science and societal issues. Now, this may sound boring – who wants to wade through a whole lot of dry government or institutional policy? But not only is there usually a long historical archive of this data available, it can be invaluable for grounding the experiences of respondents in a wider context.

 

Usually, interesting social research questions are (or should be) concerns that are addressed (perhaps inadequately) in existing policy and debate. Since social research tends to focus on problems in society, or with the behaviour or life experiences of groups or individuals, participants in qualitative research will often feel their issues should be addressed by the policy of local or national government, or a relevant institution. Remember that large companies and agencies may have their own internal policy that can be relevant, if you can get access to it.

 

Policy discussed at local, state or national level is usually easy to access in the public record. But it may also be interesting to look at the debate when policy was discussed, to see which issues were controversial and how they were addressed. These debates should also be available from national archives such as Hansard (in the UK) or the Congressional Record (in the USA). You can also compare policy across countries to get an international perspective, or try to explain differences of policy in certain cultures.

 

Try to also consider not just officially adopted policy, but proposed policy and reports or proposals from lobbying or special interest groups. It’s often a good way to get valuable data and quotes from different forces acting to change policy in your topic area.

 

But there is also a lot of power in integrating your policy and document analysis with original research. You can cross-reference topics coming out of participant interviews, and see if they are reflected in policy documents. Discourse analysis, and keyword searches for common terms across all your sources, can be revealing.
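As a minimal sketch of this kind of keyword search (the source names, text and keywords here are invented for illustration, and any QDA package will do this for you), a few lines of Python can tally how often chosen terms appear in each source:

```python
from collections import Counter
import re

def keyword_counts(documents, keywords):
    """Count how often each keyword appears in each source document."""
    results = {}
    for name, text in documents.items():
        # Tokenise crudely into lower-case words before counting.
        words = Counter(re.findall(r"[a-z']+", text.lower()))
        results[name] = {kw: words[kw.lower()] for kw in keywords}
    return results

# Illustrative sources: one interview transcript, one policy document.
sources = {
    "interview_01": "Housing policy came up again and again. Housing costs worry everyone.",
    "policy_2019": "This housing strategy addresses affordability and supply.",
}
print(keyword_counts(sources, ["housing", "affordability"]))
```

Comparing the counts for interview sources against policy sources gives a quick first impression of which participant concerns are (or aren't) echoed in the official documents.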

 

Looking at how the media represents these issues and the debates over policy can also be interesting. Make sure that you record which outlet an article comes from, as this can be a useful way to compare different representations of events from media groups with their own particular bias or interpretation.

 


There are of course many different approaches to policy analysis that you can take, including quantitative and mixed-method methodologies. While general interpretive qualitative analysis can be revealing, consider also discourse analysis and meta-synthesis. There’s a short overview video on policy document analysis from the Manchester Methods Institute here. The following chapter by Ritchie and Spencer is also a good introduction, and for a full textbook try Narrative Policy Analysis: Theory and Practice by Emery Roe (thanks to Dr Chenail for the suggestion!).

 

Qualitative software like Quirkos can help bring all this data from different sources into one project, allowing you to create a common (or separate) coding framework for your analysis. In Quirkos you can use the source properties to define where a piece of data comes from, and then run queries across all types of source, or just a particular type. While any CAQDAS or QDA software will help you manage document analysis, Quirkos is quick to learn and so lets you focus on your data. You can download a free trial here, and student licences are just US$65.

 

 

Preparing data sources for qualitative analysis


 

Qualitative software used to require text files to be formatted in very specific ways before they could be imported. These days the software is much more capable, and you can import nearly any kind of text data in any kind of formatting, which allows for a lot more flexibility.


However, that easy-going nature can let you get away with some pretty lazy habits. You’ll probably find your analysis (and even data collection and transcription) goes a lot smoother if you’ve set a uniform style or template for your data beforehand. This article will cover some of the formatting and meta-data you might want to get into a consistent form before you start.

 

Part of this should also be a consistent way to record research procedures and your own reflections on the data collection. Sometimes this can be a little ad hoc, especially when relying on a research diary, but designing a standard post-interview debriefing form for the interviewer at the same time as creating a semi-structured interview guide can make it much easier to compare interviewer reflections across sources.


So, for example, you could have a field to record how comfortable the interview setting was, whether the participant was nervous about sharing, or if questions were missed or need follow-up. Having these as separate source property fields allows you to compare sources with similar contexts and see if that had a noticeable effect on the participants’ data.
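As a sketch of what such a debrief record might look like (the field names here are hypothetical, not a Quirkos schema), each completed form could be captured as a structured set of properties per source, which then map directly onto source properties in your QDA software:

```python
# Hypothetical post-interview debrief record, one per source.
# Field names are illustrative only.
debrief = {
    "source_id": "interview_04",
    "setting_comfort": "quiet office, participant relaxed",
    "participant_nervous": False,
    "questions_missed": ["views on local services"],
    "needs_follow_up": True,
}

# With one record per source, sources needing a follow-up
# interview can be filtered out in a single pass:
records = [debrief]
follow_ups = [r["source_id"] for r in records if r["needs_follow_up"]]
print(follow_ups)
```

Keeping the fields identical for every interview is what makes the later comparison across sources possible.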

 

For transcribed interviews, have a standard format for questions and answers, and make sure that it’s clear who is who. Focus groups demand particular attention to formatting, as some software can help you identify responses from each participant in a group session when the transcript is laid out in a particular way. Unfortunately Quirkos doesn’t support this at the moment, but with focus group data it is still important to make sure that each transcription is formatted in the same way, and that the identifiers for each participant are unique. So for example, if you are using initials for each respondent such as:


JT: I’m not sure about that statement.
FA: It doesn’t really speak to me


Make sure that there aren’t people with the same initials in other sessions, and consider having unique participant numbers which will also help better anonymise the data.
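A small script can help enforce this. As a sketch (the speaker initials and transcript lines here are invented), the following checks whether any identifier appears in more than one session, which would flag a clash before it causes confusion in analysis:

```python
import re

def speaker_ids(transcript):
    """Extract speaker identifiers such as 'JT:' at the start of a line."""
    return set(re.findall(r"^([A-Z]{2,3}):", transcript, flags=re.MULTILINE))

def duplicate_speakers(sessions):
    """Return identifiers that appear in more than one session transcript."""
    seen, duplicates = set(), set()
    for transcript in sessions:
        for sid in speaker_ids(transcript):
            if sid in seen:
                duplicates.add(sid)
            seen.add(sid)
    return duplicates

session_1 = "JT: I'm not sure about that statement.\nFA: It doesn't really speak to me."
session_2 = "JT: I agree with the proposal.\nMK: So do I."
print(duplicate_speakers([session_1, session_2]))  # 'JT' appears in both sessions
```

Switching from initials to unique participant numbers (P01, P02, …) avoids the problem entirely, and anonymises the data at the same time.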


A formatting standard is especially important if you have a team project where there are multiple interviewers and transcribers. Make sure they are using the same formatting for pauses, emphasis and identifying speakers. The guide to transcription in a previous blog post covers some of the things you will want to standardise. Some people prefer to read through the transcripts checking for typos and inaccuracies, possibly even while listening to the audio recording of the session. It can be tempting to assume you will pick these up when reading through the data for analysis, but you may find that correcting typos breaks your train of thought too often.


Also consider if your sources will need page, paragraph or sentence numbers in the transcript, and how these will be displayed in your software of choice. Not all software supports the display of line/paragraph numbers, and it is getting increasingly rare to use them to reference sources, since text search on a computer is so fast.


You’ll see a few guides that suggest preparing for your analysis by using a database or spreadsheet to keep track of your participant data. This can help manage who has been interviewed, set dates for interviews, note return of consent forms and keep contact and demographic information. However, all CAQDAS software (not just Quirkos) can store this kind of information about data sources in the project file with the data. It can actually be beneficial to set up your project beforehand in QDA software, and use it to document your data and even keep your research journal before you have collected the data.

 

Doing this in advance also makes sure you plan to collect all the extra data you will need on your sources, so you don’t have to go back and ask for someone’s occupation after the interview. There is more detail in this article on data collection and preparation techniques.

 



As we’ve mentioned before, qualitative analysis software can also be used for literature reviews, or even just keeping relevant journal articles and documents together and taggable. However, you can even go further and keep your participant data in the project file, saving time entering the data again once it is collated.


Finally, being well prepared will help at the end of your research as well. Having a consistent style defined before you start data entry and transcription can also make sure that any quotes you use in write-ups and outputs look the same, saving you time tidying up before publication.


If you have any extra tips or tricks on preparing data for analysis, please share them on our Twitter feed @quirkossoftware and we will add them to the debate. And don’t forget to download a free trial of Quirkos, or watch a quick overview video to see how it helps you turn well prepared data into well prepared qualitative analysis.

 

 

An introduction to Interpretative Phenomenological Analysis


 

Interpretative Phenomenological Analysis (IPA) is an increasingly popular approach to qualitative inquiry and essentially an attempt to understand how participants experience and make meaning of their world. Although not to be confused with the now ubiquitous style of beer with the same initials (India Pale Ale), Interpretative Phenomenological Analysis is similarly accused of being too frequently and imperfectly brewed (Hefferon and Gil-Rodriguez 2011).



While you will often see it described as a ‘method’ or even an analytical approach, I believe it is better described as something more akin to an epistemology, with its own philosophical concepts for explaining the world. Like grounded theory, it has also grown into a bounded approach in its own right, with a certain group of methodologies and analytical techniques which are assumed to be the ‘right’ way of doing IPA.



At its heart, interpretative phenomenological analysis is an approach to examining data that tries to see what is important to the participant, and how they interpret and view their own lives and experiences. This in itself is not ground-breaking in qualitative studies; however, the approach originally grew from psychology, where a distinct psychological interpretation of how the participant perceives their experiences was often applied. So note that while IPA doesn’t stand for Interpretative Psychological Analysis, it could well do.



To understand the rationale for this approach, it is necessary to engage with some of the philosophical underpinnings, and understand two concepts: phenomenology, and hermeneutics. You could boil this down such that:

   1. Things happen (phenomenology)

   2. We interpret this into something that makes sense to us (hermeneutics - from the Greek word for translate)



Building on the shoulders of the Greek thinkers, two 20th century philosophers are often invoked in describing IPA: Husserl and Heidegger. From Husserl we get the concept of all interpretation coming from objects in an external world, and thus the need for ‘bracketing’ our internal assumptions to differentiate what comes from, or can describe, our consciousness. The focus here is on the individual processes of perception and awareness (Larkin 2013). Heidegger introduces the concept of ‘Dasein’ which means ‘there-being’ in German: we are always embedded and engaged in the world. This asks wider questions of what existence means (existentialism) and how we draw meaning to the world.



I’m not going to pretend I’ve read ‘Being and Time’ or ‘Ideas’, so don’t take my third-hand interpretations at face value. However, I always recommend students read Nausea by Sartre, because it is a wonderful novel which is as much about procrastination as it is about existentialism and the perception of objects. It’s also genuinely funny, and you can find Sartre mocking himself and his philosophy with surrealist lines like: “I do not think, therefore I am a moustache”.



Applying all this philosophy to research, we consider looking for significant events in the lives of the people we are studying, and trying to infer through their language how they interpret and make meaning of these events. However, IPA also takes explicit notice of the reflexivity arguments we have discussed before: we can’t dis-embody ourselves (as interpreters) from our own world. Thus, it is important to understand and ‘bracket’ our own assumptions about the world (which are based on our interpretation of phenomena) from those of the respondent, and IPA is sometimes described as a ‘double hermeneutic’ of both the researcher and participant.



These concepts do not have to lead you down one particular methodological path, but in practice projects intending to use IPA should generally have small sample sizes (perhaps only a few cases), be theoretically open and exploratory rather than testing existing hypotheses, and have a focus on experience. So a good example research question might be ‘How do people with disabilities experience using doctors’ surgeries?’ rather than ‘Satisfaction with a new access ramp in a GP practice’. In the former example you would also be interested in how participants frame their struggles with access – does it make them feel limited? Angry that they are excluded?



So IPA tends to lend itself to very small, purposive samples of people who share a certain experience. This is especially because it usually implies very close reading of the data, looking in great detail at how people describe their experiences – not just a line-by-line reading, but sometimes also reading between the lines. Appropriate methodologies therefore include focus groups, interviews and participant diaries. Hefferon and Gil-Rodriguez (2011) note that students often try to sample too many people, and ask too many questions. IPA should be very focused on a small number of relevant experiences.



When it comes to interpretation and analysis, a bottom-up, inductive coding approach is often taken. While this should not be confused with the theory building aims of grounded theory, the researcher should similarly try and park or bracket their own pre-existing theories, and let the participant’s data suggest the themes. Thematic analysis is usually applied in an iterative approach where many initial themes are created, and gradually grouped and refined, within and across sources.



Usually this entails line-by-line coding, where each sentence from the transcript is given a short summary or theme – essentially a unique code for every line focusing on the phenomena being discussed (Larkin, Watts and Clifton 2006). Later comes grouping and creating a structure from the themes, either by iterating the process and coding the descriptive themes to a higher level, or by having a fresh read through the data.



A lot of qualitative software packages can struggle with this kind of approach, as they are usually designed to manage a relatively small number of themes, rather than one for each line in every source. Quirkos has definitely struggled to work well for this type of analysis, and although we have some small tweaks in the imminent release (v1.5) that will make this bearable for users, it will not be until the full memo features are included in v1.6 that this will really be satisfactory. However, it seems that most users of line-by-line coding and this method of managing IPA use spreadsheet software (so they can have columns for the transcript, summary, subordinate and later superordinate themes) or a word-processor utilising the comment features.
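As the paragraph above suggests, a spreadsheet-style table is a common workaround for line-by-line IPA coding. As a minimal sketch (the transcript lines and theme names here are invented), Python's standard csv module can generate such a table, with one row per transcript line and columns for the initial code and higher-level themes:

```python
import csv
import io

# Hypothetical line-by-line coding table: one row per transcript line,
# with columns for the initial summary code and the sub/superordinate themes.
rows = [
    {"line": "I felt lost when the diagnosis came.",
     "initial_code": "shock at diagnosis",
     "subordinate_theme": "emotional impact",
     "superordinate_theme": "making sense of illness"},
    {"line": "The nurses explained everything slowly.",
     "initial_code": "reassurance from staff",
     "subordinate_theme": "support from professionals",
     "superordinate_theme": "relationships with care"},
]

# Write the table to CSV (here to a string buffer; a file works the same way).
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

Once in this shape, the table can be sorted or filtered by the theme columns to group subordinate themes under superordinate ones, which mirrors the iterative grouping step described above.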

 

However you approach the analysis, the focus should be on the participant’s own interpretation and meaning of their experiences, and you should be able to craft a story for the reader when writing up that connects the themes you have identified to the way the participant describes the phenomenon of interest.



I’m not going to go much into the limitations of the approach here, suffice it to say that you are obviously limited to understanding participants’ meanings of the world through something like the one-dimensional transcript of an interview. What they are willing to share, and how they articulate it, may not be the complete picture, and other approaches such as discourse analysis may be revealing. Also, make sure that it is really participants’ understandings of experiences you want to examine. IPA posits a very deep ‘walk two moons in their moccasins’ approach that is not right for broader research questions, perhaps when wanting to contrast the broad opinions of a more diverse sample. Brew your IPA right: know what you want to make, use the right ingredients, have patience in the maturation process, and keep tasting as you go along.



As usual, I want to caution the reader against taking anything from my crude summary of IPA as gospel, and suggest that a true reading of the major texts in the field is essential before deciding if this is the right approach for you and your research. I have assembled a small list of references below that should serve as a primer, but there is much to read, and as always with qualitative epistemologies, a great deal of variety of opinion in discourse, theory and application!

 

 

download Quirkos

Finally, don't forget to give Quirkos a try, and see if it can help with your qualitative analysis. We think it's the easiest, most affordable qualitative software out there, so download a one month free trial and see for yourself!



References

Biggerstaff, D. L. & Thompson, A. R. (2008). Interpretative phenomenological analysis (IPA): A qualitative methodology of choice in healthcare research. Qualitative Research in Psychology, 5, 173–183.
http://wrap.warwick.ac.uk/3488/1/WRAP_Biggrstaff_QRP_submission_revised_final_version_WRAP_doc.pdf

Hefferon, K., Gil-Rodriguez, E., 2011, Methods: Interpretative phenomenological analysis, October 2011, The Psychologist, 24, pp. 756–759.

Heidegger, M. ( 1962). Being and time (J. Macquarrie & E. Robinson, Trans.). Oxford, UK: Blackwell. (Original work published 1927)

Husserl, E. ( 1931). Ideas: A general introduction to pure phenomenology (W.R. Boyce Gibson, Trans.). London, UK: Allen & Unwin.

IPARG (The Interpretative Phenomenological Analysis Research Group) at Birkbeck college http://www.bbk.ac.uk/psychology/ipa

Larkin, M., Watts, S., & Clifton, E. 2006. Giving voice and making sense in interpretative phenomenological analysis. Qualitative Research in Psychology, 3, 102-120.

Larkin, M., 2013, Interpretative phenomenological analysis - introduction, [accessed online] https://prezi.com/dnprvc2nohjt/interpretative-phenomenological-analysis-introduction/

Smith, J., Jarman, M. & Osborn, M. (1999). Doing interpretative phenomenological analysis. In M. Murray & K. Chamberlain (Eds.) Qualitative health psychology, London: Sage.

Smith J., Flowers P., Larkin M., 2009, Interpretative phenomenological analysis: theory, method and research, London: Sage.
https://us.sagepub.com/sites/default/files/upm-binaries/26759_01_Smith_et_al_Ch_01.pdf

 

 

Archaeologies of coding qualitative data


 

In the last blog post I referenced a workshop session at the International Conference of Qualitative Inquiry entitled the ‘Archaeology of Coding’. Personally, I interpreted the archaeology of qualitative analysis as a process of revisiting and examining an older project. Much of the interpretation in the conference panel, however, was around revisiting and iterating coding within a single analytical attempt, and this is very important too.


In qualitative analysis it is rarely sufficient to read through and code your data only once. An iterative and cyclical process is preferable, often building on and reconsidering previous rounds of coding to get to higher levels of interpretation. This is one of the ways to interpret an ‘archaeology’ of coding – like Jerusalem, the foundations of each successive city are built on the groundwork of the old. And it does not necessarily involve demolishing the old coding cycle to create a new one – some codes and themes (just like significant buildings in a restructured city) may survive into the new interpretation.


But perhaps there is also a way in which coding archaeology can be more like a dig site: going back down through older layers to uncover something revealing. I allude to this more in the blog post on ‘Top down or bottom up’ coding approaches, because of course you can start your qualitative analysis by identifying large common themes, and then breaking these up into more specific and nuanced insight into the data.

 

But both these iterative techniques are envisaged as part of a single (if long) process of coding. What about revisiting older research projects? What if you get the opportunity to go back and re-examine old qualitative data and analysis?


Secondary data analysis can be very useful, especially when you have additional questions to ask of the data, such as in this example by Notz (2005). But it can also be useful to revisit the same data, question or topic when the context around them changes, for example due to a major event or change in policy.


A good example is our teaching dataset, collected after the referendum on Scottish independence a few years ago. This looked at how the debate had influenced voters’ interpretations of the different political parties, and how they would vote in the general election that year. Since then, there has been a referendum on the UK leaving the EU, and another general election. Revisiting this data in light of these events would be very interesting. It is easy to see from the qualitative interviews that voters in Scotland would overwhelmingly vote to stay in the EU. However, the data would not be recent enough to show the ‘referendum fatigue’ that was interpreted as a major factor reducing support for the Scottish National Party in the most recent election. Yet examining the historical data in this context can be revealing, and perhaps explain variance in voting patterns amid the changing winds of politics and policy in Scotland.

 

While the research questions and analysis framework devised for the original project would not answer the new questions we have of the data, creating new or appended analytical categories would be insightful. For example, many of the original codes (or Quirks) identifying the political parties people were talking about will still be useful, but how they map to policies might be reinterpreted, alongside higher-level themes such as the extent to which people perceive a necessity for referendums, or the value of remaining part of the EU (which was a big question if Scotland became independent). Actually, if this sounds interesting to you, feel free to re-examine the data – it is freely available in raw and coded formats.

 

Of course, it would be even more valuable to complement the existing qualitative data with new interviews, perhaps even from the same participants to see how their opinions and voting intentions have changed. Longitudinal case studies like this can be very insightful and while difficult to design specifically for this purpose (Calman, Brunton, Molassiotis 2013), can be retroactively extended in some situations.


And of course, this is the real power of archaeology: when it connects patterns and behaviours of the old with the new. This is true whether we are talking about the use of historical buildings, or interpretations of qualitative data. So there can be great, and often unexpected value in revisiting some of your old data. For many people it’s something that the pressures of marking, research grants and the like push to the back burner. But if you get a chance this summer, why not download some quick to learn qualitative analysis software like Quirkos and do a bit of data archaeology of your own?

 

What next? Making the leap from coding to analysis


 

So you spend weeks or months coding all your qualitative data. Maybe you even did it multiple times, using different frameworks and research paradigms. You've followed our introduction guides and everything is neatly (or fairly neatly) organised and inter-related, and you can generate huge reports of all your coding work. Good job! But what happens now?

 

It's a question asked by a lot of qualitative researchers: after all this bruising manual and intellectual labour, you hit a brick wall. Once the coding is done, what is the next step? How do you move the analysis forward?

 

The important thing to remember is that coding is not really analysis. Coding is often a precursor to analysis, in the same way that a good filing system is a good start for doing your accounts: if everything is in the right place, the final product will come together much more easily. But coding is usually a reductive and low-level action, and it doesn't always bring you to the big picture. That's what the analysis has to do: match up your data to the research questions and allow you to bring everything together. In the words of Zhang and Wildemuth, you need to look for “meanings, themes and patterns”.

 


Match up your coding to your research questions

Now is a good time to revisit the research question(s) you originally had when you started your analysis. It's easy during the coding process to get excited by unexpected but fascinating insights coming from the data. However, you usually need to reel yourself in at this stage, and explore how the coded data is illuminating the quandaries you set out to explore at the start of the project.

 

Look at the coded framework, and see which nodes or topics are going to help you answer each research question. Then you can either group these together, or start reading through the coded text by theme, probably more than once with an eye for one research question each time. Don't forget, you can still tag and code at this stage, so you can have a category for 'Answers research question 1' and tag useful quotes there.

 

One way to do this in Quirkos is the 'Levels' function, which allows you to assign codes/themes to more than one grouping. You might have some coded categories which would help answer more than one research question: you can have a level for each research question, and Quirks/categories can belong to multiple appropriate levels. That way, you can quickly bring up all responses relevant to each research question, without your groupings having to be mutually exclusive.

 


Analyse your coding structure!

It seems strange to effectively be analysing your analysis, but looking at the coding framework itself gets you to a higher meta-level of analysis. You can group themes together to identify larger themes and patterns in the coding. It might also be useful to match your themes with theory, or recode them again into higher-level insights. How you have coded (especially when using grounded theory or emergent coding) can reveal a lot about the data, and your clusterings and groupings, even if chosen for practical purposes, might illuminate important patterns in the data.

 

In Quirkos, you can also use the overlap view to show relationships between themes. This illustrates in a graphical chart how many times sections of text 'overlap' – in that a piece of text has been coded with both themes. So if you have simple codes like 'happy' or 'disappointed', you can see which themes have been most often coded alongside disappointment. This can sometimes quickly reveal surprising correlations, and lets you explore possible significant relationships between all of your codes. However, remember that these metrics are quantitative, and so depend on the number of times a particular theme has been coded. You need to keep reading the qualitative text to get the right context and weight, which is why Quirkos shows you all the overlapping text on the right of the screen in this view.
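Behind a view like this is a simple co-occurrence count. As a rough illustration (the code names and segments are invented, and this is not Quirkos's actual implementation), overlaps can be tallied by counting the pairs of codes applied to each coded text segment:

```python
from collections import Counter
from itertools import combinations

def code_overlaps(coded_segments):
    """Count how often each pair of codes is applied to the same text segment."""
    overlaps = Counter()
    for codes in coded_segments:
        # Sort so ('a', 'b') and ('b', 'a') count as the same pair.
        for pair in combinations(sorted(set(codes)), 2):
            overlaps[pair] += 1
    return overlaps

# Each set is the codes applied to one segment of text.
segments = [
    {"disappointed", "waiting times"},
    {"happy", "staff attitude"},
    {"disappointed", "waiting times", "cost"},
]
print(code_overlaps(segments)[("disappointed", "waiting times")])  # coded together twice
```

The caution in the paragraph above applies equally here: these counts only tell you where to look, and the overlapping text itself still has to be read in context.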

 

side comparison view in Quirkos software

 


Compare and contrast

Another good way to make your explorations more analytical is to try to identify and explain differences: in how people describe key words or experiences, what language they use, or how their opinions converge with or diverge from other respondents'. Look back at each of the themes, and see how different people are responding, and most importantly, whether you can explain the differences through demographics or different life experiences.

 

In Quirkos this process can be assisted with the query view, which allows you to see responses from particular groups of sources. So you might want to look at differences between the responses of men and women, as shown below. Quirkos provides a side-by-side view to let you read through the quotes, comparing the different responses. This is possible in other software too, but requires a little more time to get different windows set up for comparison.

 

overlap cluster view in Quirkos software

 

Match and re-explore the literature

It's also a good time to revisit the literature. Look back at the key articles you are drawing from, and see how well your data supports or contradicts their theory or assumptions. It's a really good idea to do this throughout (not just at the end), because situating your findings in the literature is the hallmark of a well-written article or thesis, and will make clear the contribution your study has made to the field. But always look for an extra level of analysis: try to develop a hypothesis of why your research differs or comes to the same conclusions – is there something in the focus or methodology that would explain the patterns?

 


Keep asking 'Why'

Just like an inquisitive six-year-old, keep asking 'Why?'! You should have multiple levels of 'Why', with explanations in qualitative work typically moving from individual, to group, all the way up to societal levels of causation. Think of the maxim 'Who said What, and Why?'. The coding shows the 'What'; exploring the detail and experiences of the respondents is the 'Who'; the 'Why' needs to explore not just their own reasoning, but how this connects to other actors in the system. Sometimes this causation is obvious to the respondent, especially if articulated because they were asked 'why' throughout the interview! However, analysis sometimes requires a deeper, detective-type reading, getting to the motivations as well as the actions of the participants.

 


Don't panic!

Your work was not in vain. Even if you end up for some reason scrapping your coding framework and starting again, you will have become so much more engaged with your data by reading it through so closely, and this will be a great help knowing how to take the data forward. Some people even discover that coding data was not the right approach for their project, and use it very little in the final analysis process. Instead they may just be able to pull together important findings in their head, the time taken to code the data having made key findings pop out from the page.

 

And if things still seem stuck, take a break, step back and print out your data and try and read it from a fresh angle. Wherever possible, discuss with others, as a different perspective can come not just from other people's ideas, but just the process of having to verbally articulate what you are seeing in the data.

 


Also remember to check out Quirkos, a software tool that helps constantly visualise your qualitative analysis, and thus keep your eye on what is emerging from the data. It's simple to learn, affordably priced, and there is a free trial to download for Windows, Mac and Linux so you can see for yourself if it is the right fit for your qualitative analysis journey. Good luck!

 

 

Circles and feedback loops in qualitative research

qualitative research feedback loops

The best qualitative research forms an iterative loop, examining, and then re-examining. There are multiple reads of data, multiple layers of coding, and hopefully, constantly improving theory and insight into the underlying lived world. During the research process it is best to try to be in a constant state of feedback with your data, and theory.


During your literature review, you may have several cycles through the published literature, with each pass revealing a deeper network of links. You will typically see this when you start going back to ‘seminal’ texts on core concepts from older publications, showing connected cycles of different interpretations and trends in methodology. You can see this with paradigm trends like social capital, neo-liberalism and power. It’s possible to see major theorists like Foucault, Chomsky and Butler each create new cycles of debate in the field, building on the previous literature.


A research project will often have a similar feedback loop between the literature and the data, where theory influences the research questions and methodology, but engagement with the real ‘folk world’ challenges interpretations of the data and the practicalities of data collection. Thus the literature is challenged by the research process and findings, and a new reading of the literature is demanded to corroborate or challenge new interpretations.

 

Thus it’s a mistake to think that a literature review only happens at the beginning of the research process: it is important to engage with theory again, not just at the end of a project when drawing conclusions and writing up, but during the analysis process itself. Especially with qualitative research, the data will rarely fit neatly with one theory or another, but will demand a synthesis or new angle on existing research.

 

The coding process is also like this, in that it usually requires many cycles through the data. After reading one source, it can feel like the major themes and codes for the project are clear, and will set the groundwork for the analytic framework. But what if you had started with another source? Would the codes you created have been the same? It’s easy to get complacent with the first codes you start with, or to worry that the coding structure will get too complicated if you keep creating new nodes.

 

However, there will always be sources which contain unique data or express different opinions and experiences that don’t chime with existing codes. And what if this new code actually fits some of the previous data better? You would need to go back to previously analysed data sources and explore them again. This is why most experts will recommend multiple passes through the data, not just to be consistent and complete, but because there is a feedback loop in the codes and themes themselves. Once you have a first coding structure, the framework itself can be examined and reinterpreted, looking for groupings and higher-level interpretations. I’ve talked about this more in this blog article about qualitative coding.


Quirkos is designed to keep researchers deeply embedded in this feedback process, with each coding event subtly changing the dynamics of the coding structure. Connections and coding are shown in real time, so you can always see what is happening and what is being coded most, and thus constantly challenge your interpretation and analysis process.

 

Queries, questions and sub-set analysis should also be easy to run and dynamic: good qualitative researchers shouldn’t only interrogate and interpret the data at the end of the analysis process, it should be happening throughout. That way surprises and uncertainties can be identified early, and new readings of the data illuminate these discoveries.

 

In a way, qualitative analysis is never done, and it is not usually a linear process. Even when project practicalities dictate an end point, a coded research project in software like Quirkos sits on your hard drive, awaiting time for secondary analysis, or for the data to be challenged from a different perspective and research question. And to help you when you get there, your data and coding bubbles will immediately show you where you left off – what the biggest themes were, how they connected – and allow you to go to any point in the text to see what was said.

 

And you shouldn’t need to go back and retrain to use the software again. I hear so many stories of people who have done training courses for major qualitative data analysis software, and when it comes to revisiting their data, the operations are all forgotten. Now, Quirkos may not have as many features as other software, but the focus on keeping things visual and in plain sight means that these should comfortably fit under your thumb again, even after not using it for a long stretch.

 

So download the free trial of Quirkos today, and see how its different way of presenting the data helps you continuously engage with your data in fresh ways. Once you start thinking in circles, it’s tough to go back!

 

Triangulation in qualitative research

triangulation facets face qualitative

 

Triangles are my favourite shape,
  Three points where two lines meet

                                                                           alt-J

 

Qualitative methods are sometimes criticised as being subjective, based on single, unreliable sources of data. But with the exception of some case study research, most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data, be it combining quantitative with qualitative, or ‘qual on qual’.


I don’t think of data in qualitative research as being a static and definite thing. It’s not like a point on a graph: qualitative data has more depth and context than that. In triangulation, we think of two points of data that move towards an intersection. In fact, if you are trying to visualise triangulation, consider instead two vectors – directions suggested by two sources of data – that may converge at some point, creating a triangle. This point of intersection is where the researcher has seen a connection between the inferences about the world implied by two different sources of data. However, some lines may run parallel, or in divergent directions that will never cross: not all data will agree and connect, and it’s important to note this too.


You can triangulate almost all the constituent parts of the research process: method, theory, data and investigator.


Data triangulation, (also called participant or source triangulation) is probably the most common, where you try to examine data from different respondents but collected using the same method. If we consider that each participant has a unique and valid world view, the researcher’s job is often to try and look for a pattern or contradictions beyond the individual experience. You might also consider the need to triangulate between data collected at different times, to show changes in lived experience.

 

Since every method has weaknesses or bias, it is common for qualitative research projects to collect data in a variety of different ways to build up a better picture. Thus a project can collect data from the same or different participants using different methods, and use method or between-method triangulation to integrate them. Some qualitative techniques can be very complementary: for example, semi-structured interviews can be combined with participant diaries or focus groups, to provide different levels of detail and voice. What people share in a group discussion may be less private than what they would reveal in a one-to-one interview, but in a group dynamic people can be reminded of issues they might otherwise forget to talk about.


Researchers can also design a mixed-method qualitative and quantitative study where very different methods are triangulated. This may take the form of a quantitative survey, where people rank an experience or service, combined with a qualitative focus group, interview or even open-ended comments. It’s also common to see a validated measure from psychology used to give a metric to something like pain, anxiety or depression, and then combine this with detailed data from a qualitative interview with that person.


In ‘theoretical triangulation’, a variety of different theories are used to interpret the data, such as discourse, narrative and context analysis, and these different ways of dissecting and illuminating the data are compared.


Finally there is ‘investigator triangulation’, where different researchers each conduct separate analysis of the data, and their different interpretations are reconciled or compared. In participatory analysis it’s also possible to have a kind of respondent triangulation, where a researcher is trying to compare their own interpretations of data with that of their respondents.

 

 

While there is a lot written about the theory of triangulation, there is not as much about actually doing it (Jick 1979). In practice, researchers often find it very difficult to DO the triangulation: different data sources tend to be difficult to mesh together, and will have very different discourses and interpretations. If you are seeing ‘anger’ and ‘dissatisfaction’ in interviews with a mental health service, it will be difficult to triangulate such emotions with the formal language of a policy document on service delivery.


In general the qualitative literature cautions against seeing triangulation as a way to improve the validity and reliability of research, since this tends to imply a rather positivist agenda in which there is an absolute truth that triangulation gets us closer to. However, there are plenty who suggest that the quality of qualitative research can be improved in this way, such as Golafshani (2003). So you need to be clear about your own theoretical underpinning: can you get to an ‘absolute’ or ‘relative’ truth through your own interpretations of two types of data? Perhaps rather than positivist this is a pluralist approach, creating multiplicities of understandings while still allowing for comparison.


It’s worth bearing in mind that triangulation and multiple methods aren’t an easy way to make better research. You still need to do all the different sources justice: make sure data from each method is being fully analysed, and iteratively coded (if appropriate). You should also keep going back and forth, analysing data from alternate methods in a loop to make sure they are well integrated and considered.

 


Qualitative data analysis software can help with all this, since you will have a lot of data to process in different and complementary ways. In software like Quirkos you can create levels, groups and clusters to keep different analysis stages together, and have quick ways to do sub-set analysis on data from just one method. Check out the features overview or mixed-method analysis with Quirkos for more information about how qualitative research software can help manage triangulation.

 


References and further reading

Carter et al. 2014, The use of triangulation in qualitative research, Oncology Nursing Forum, 41(5), https://www.ncbi.nlm.nih.gov/pubmed/25158659

 

Denzin, 1978 The Research Act: A Theoretical Introduction to Sociological Methods, McGraw-Hill, New York.

 

Golafshani, N., 2003, Understanding reliability and validity in qualitative research, The Qualitative Report, 8(4), http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1870&context=tqr


Bekhet, A., Zauszniewski, J., 2012, Methodological triangulation: an approach to understanding data, Nurse Researcher, 20(2), http://journals.rcni.com/doi/pdfplus/10.7748/nr2012.11.20.2.40.c9442

 

Jick, 1979, Mixing Qualitative and Quantitative Methods: Triangulation in Action,  Administrative Science Quarterly, 24(4),  https://www.jstor.org/stable/2392366

 

 

100 blog articles on qualitative research!

images by Paul Downey and AngMoKio

 

Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't get into the literature, like transcribing without getting back-ache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts, easily the length of a good-sized PhD thesis!

 

That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.

 

While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!

 


Epistemology

What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences between quantitative methods and epistemologies

 

What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research

 

Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project

 

Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data

 

Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)

 

 


Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data

 

Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy

 

10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods; this article provides some hints to make sure they go smoothly and provide rich data

 

Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis, requires acknowledging some caveats

 

Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis, find out how it can help your study

 


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls

 

Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...

 

Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose from, and they should be closely matched with the research objectives

 

Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research

 

Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it

 

 

Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data

 

Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this

 

6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories

 

Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data

 

Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or more general

 

 


Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs

 

How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data

 

Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool

 

Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, levels allow you to create grouped categories of themes that go across multiple clustered bubbles

 

10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis

 

 

Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client

 

Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach

 

The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users

 

Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted

 

Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems

 

 

 

Analytical memos and notes in qualitative data analysis and coding

Image adapted from https://commons.wikimedia.org/wiki/File:Male_forehead-01_ies.jpg - Frank Vincentz

There is a lot more to qualitative coding than just deciding which sections of text belong in which theme. It is a continuing, iterative and often subjective process, which can take weeks or even months. During this time, it’s almost essential to be recording your thoughts, reflecting on the process, and keeping yourself writing and thinking about the bigger picture. Writing doesn’t start after the analysis process: in qualitative research it often should precede, follow and run in parallel to an iterative interpretation.


The standard way to do this is either through a research journal (which is also vital during the data collection process) or through analytic memos. Memos create an important extra level of narrative: an interface between the participant’s data, the researcher’s interpretation and wider theory.


You can also use memos as part of a summary process, to articulate your interpretations of the data in a more concise format, or even to cast the net wider by drawing on broader theory.


It’s also a good cognitive exercise: regularly make yourself write down what you are thinking, and keep articulating your interpretations. It will make writing up at the end a lot easier! Memos can be a very flexible tool, and qualitative software can help keep these notes organised. Here are 9 different ways you might use memos as part of your work-flow for qualitative data analysis:

 

Surprises and intrigue
This is probably the most obvious way to use memos: note during your reading and coding things that are especially interesting, challenging or significant in the data. It’s important to do more than just ‘tag’ these sections – reflect to yourself (and others) on why these sections or statements stand out.

 

Points where you are not sure
Another common use of memos is to record sections of the data that are ambiguous, could be interpreted in different ways, or just plain don’t fit neatly into existing codes or interpretations. But again, this should be more than just ‘flagging’ bits that need to be looked at again later: it’s important to record why the section is different, as sometimes the act of having to describe the section can help comprehension and illuminate the underlying causation.

 

Discussion with other researchers
Large qualitative research projects will often have multiple people coding and analysing the data. This can help to spread the workload, but also allows for a plurality of interpretations, and peer-checking of assumptions and interpretations. Thus memos are very important in a team project, as they can be used to explain why one researcher interpreted or coded sources in a certain way, and flag up ambiguous or interesting sections for discussion.

 

Paper-trail
Even if you are not working as part of a team, it can be useful to keep memos to explain your coding and analytical choices. This may be important to your supervisors (or viva panel) as part of a research thesis, and can be seen as good practice for sharing findings in which you are transparent about your interpretations. There are also some people with a positivist/quantitative outlook who find qualitative research difficult to trust because of the large amount of seemingly subjective interpretation. Memos which detail your decision making process can help ‘show your working out’ and justify your choices to others.

 

Challenging or confirming theory
This is another common use of memos: to discuss how the data either supports or challenges theory. It is unusual for respondents to neatly say something like “I don’t think my life fits with the classical structure of an Aeschylean tragedy”, should this happen to be your theoretical approach! This means you need to make these observations and higher-level interpretations yourself, and note how particular statements will influence your interpretations and conclusions. If someone says something that turns your theoretical framework on its head, note it, but also use the memos as a space to record context that might be used later to explain this outlier. Memos like this might also help you identify patterns in the data that weren’t immediately obvious.

 

Questioning and critiquing the data/sources
Respondents will not always say what they mean, and sometimes there is an unspoken agenda below the surface. Depending on the analytical approach, an important role of the researcher is often to draw deeper inferences which may be implied or hinted at by the discourse. Sometimes, participants will outright contradict themselves, or suggest answers which seem to be at odds with the rest of what they have shared. Memos are also a great place to note the unsaid. You can’t code data that isn’t there, but sometimes it’s really obvious that a respondent is avoiding discussing a particular issue (or person). Memos can note this observation, and discuss why topics might be uncomfortable or left out of the narrative.


Part of an iterative process
Most qualitative research does not follow a linear structure: it is iterative, and researchers go back and re-examine the data at different stages in the process. Memos should be no different. They can be analysed themselves, and should be revisited and reviewed as you go along to show changes in thought, or wider patterns that are emerging.


Record your prejudices and assumptions
There is a lot of discussion in the literature about the importance of reflexivity in qualitative research, and recognising the influence of the non-neutral researcher voice. Too often, this does not go further than a short reflexivity/positionality statement, but it should really be a constantly reconsidered part of the analytical process. Memos can be used as a prompt and record of your reflexive process: how the data challenges your prejudices, or how you might be introducing bias into the interpretation of the data.


Personal thoughts and future directions
As you go through the data, you may be noticing interesting observations which are tangential, but might form the basis of a follow-on research project or reinterpretation of the data. Keeping memos as you go along will allow you to draw from this again and remember what excited you about the data in the first place.

 

 

Qualitative analysis software can help with the memo process, keeping them all in the same place, and allowing you to see all your memos together, or connected to the relevant section of data. However, most of the major software packages (Quirkos included) don’t exactly foreground the memo tools, so it is important to remember they are there and use them consistently through the analytical process.

 

Memos in Quirkos are best kept in a separate source which you edit and write your memos in. Keeping your notes like this allows you to code your memos in the same way you would your other data, and use the source properties to include or exclude your memos in reports and outputs as needed. However, it can be a little awkward to flip between the memo and active source, and there is currently no way to attach memos to a particular coding event. This is something we are working on for the next major release, and it should help researchers keep better notes of their process as they go along. More detail on qualitative memos in Quirkos can be found in this blog post.

 

 

There is a one-month free trial of Quirkos, and it is so simple to use that you should be able to get going just by watching one of our short intro videos, or the built-in guide. We are also here to help at any stage of your process, with advice about the best way to record your analytical memos, coding frameworks or anything else. Don’t be shy, and get in touch!

 


References and further reading:


Chapman, Y., Francis, K., 2008. Memoing in qualitative research, Journal of Research in Nursing, 13(1). http://jrn.sagepub.com/content/13/1/68.short?rss=1&ssource=mfc

 

Gibbs, G., 2002, Writing as Analysis, http://onlineqda.hud.ac.uk/Intro_QDA/writing_analysis.php

Saldana, J., 2015, The Coding Manual for Qualitative Researchers, Writing Analytic Memos about Narrative and Visual Data, Sage, London. https://books.google.co.uk/books?id=ZhxiCgAAQBAJ

 

 

Qualitative coding with the head and the heart

qualitative coding head and heart

 

In the analysis of qualitative data, it can be easy to fall into the habit of creating either very descriptive, or very general theoretical codes. It’s often a good idea to take a step back and examine your coding framework, challenging yourself to look at the data in a fresh way. There are some more suggestions for how to do this in a blog post about turning coding strategies on their head. But while in Delhi recently to deliver some training on using Quirkos, I was struck by a couple of exhibits at the National Museum which, in a roundabout way, made me think about coding qualitative data, and getting the balance right between analytical and emotional coding frameworks.

 

There were several depictions of Hindu deities trampling a dwarf called Apasmāra, who represented ignorance. I loved this focus of minimising ignorance, but it’s important to note that in Hindu mythology, ignorance should not be killed or completely vanquished, lest knowledge become too easy to obtain without effort.

 

Another sculpture depicted Yogini Vrishanna, a female deity that had taken the bull-head form. It was apparently common for deities to periodically take on an animal head to prevent over-intellectualism, and allow more instinctive, animalistic behaviour!

 

I was fascinated by this balance between venerating study and thought, while at the same time warning against over-thinking. I think this is a message that we should really take to heart when coding qualitative data. It’s very easy to create coding themes that are far too simple and descriptive to give much insight into the data: to treat the analysis as purely a categorisation exercise. When this happens, students often create codes that are basically an index of low-level themes in a text. While this is often a useful first step, it’s important to go beyond this, and create codes (or better yet, a whole phase of coding) which are more interpretive, and require a little more thinking.

 

However, it’s also possible to go too far in the opposite direction and over-think your codes. Either this comes from looking at the data too tightly, focusing on very narrow and niche themes, or from the over-intellectualising that Yogini Vrishanna was trying to avoid above. When the researcher has their head deeply in the theory (and let’s be honest, this is an occupational hazard for those in the humanities and social sciences), there is a tendency to create very complicated high-level themes. Are respondents really talking about ‘social capital’, ‘non-capitalocentric ethics’ or ‘epistemic activism’? Or are these labels which the researcher has imposed on the data?

 

These might be the times we have to put on our imaginary animal head, and try to be more inductive and spontaneous with our coding. But it also requires coding less from the head, and more from the heart. In most qualitative research we are attempting to understand the world through the perspective of our respondents, and most people are emotional beings, acting not just for rational reasons.

 

If our interpretations are too close to the academic, and not the lived experiences of our study communities, we risk missing the true picture. Sometimes we need this theoretical overview to see more complex trends, but it should never be too far from the data in a single qualitative study. Be true to both your head and your heart in equal measure, and don’t be afraid to go back and look at your data again with a different animal head on!

 

If you need help to organise and visualise all the different stages of coding, try using qualitative analysis software like Quirkos! Download a free trial, and see for yourself...

 

Merging and splitting themes in qualitative analysis

split and merge qual codes

To merge or to split qualitative codes, that is the question…

 

One of the most frequently asked questions when designing a qualitative coding structure is ‘How many codes should I have?’. It’s easy to start a project thinking that just a few themes will cover the research questions, but sooner or later qualitative analysis tends towards a ballooning thematic structure, and before you’ve even started coding you might have a framework with dozens of codes. While going through and analysing the data, you might end up with another couple of dozen more. So it’s quite common for researchers to end up with more than a hundred codes (or sometimes hundreds)!

 

This can be alarming for students doing qualitative analysis for the first time, but I would argue it’s fine in most situations. While a large number of themes can be confusing and disorienting if you are using paper and highlighters, with CAQDAS software it is quite manageable. However, this can itself be a problem, since qualitative software makes it almost too easy to create an unwieldy number of codes. While some restraint is always advisable, when I am running workshops I usually advise new coders not to worry, since with the software it is easier to merge codes later than to split them apart.

 

I’m going to use the example of Quirkos here, but the same principle applies to any qualitative data analysis package. When you are going through and analysing your qualitative text sources, reading and coding them is the most time-consuming part. If you create a new code for a theme halfway through coding your data because you can see it is becoming important, you will have to go back to the beginning and read through the already-coded sources to make sure you have complete coverage. That’s why it’s normally easier to think through codes before starting a code/read-through.

 

Of course there is some methodological variance here: if you are doing certain types of grounded theory this may not apply, as you will want to create themes on the fly. It’s also worth noting that good qualitative coding is an iterative process, and you should expect to go through the data several times anyway. Usually each time you do this you will look at the code structure in a different way – maybe creating a higher-level, theory-driven coding framework on each pass.

 

However, there is another way that QDA software helps you manage your qualitative themes: it is simple to merge smaller codes together under a more general heading. In Quirkos, just right-click on the code bubble you want to keep, and you will see the dialogue below:

 

merging qualitative codes in quirkos


Then select from the drop-down list of other themes in your project the topic you want to merge into the Quirk you selected first. That’s it! All the coded text in the second bubble will be added to the first one, which will keep its name, appended with “(merged)” so you can identify it.
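Under the hood, a merge like this is conceptually very simple. Here is a minimal Python sketch of the idea – the function, code names and quotes are all invented for illustration, and this is not Quirkos’s actual internals:

```python
# Hypothetical model: a coding frame is a dict mapping code names to lists
# of coded quotes. Merging moves one code's quotes into another, and the
# kept code gains a "(merged)" suffix, as described above.

def merge_codes(codes, keep, absorb):
    """Move all quotes coded under `absorb` into `keep`, marking it merged."""
    codes[keep] = codes[keep] + codes.pop(absorb)
    codes[f"{keep} (merged)"] = codes.pop(keep)
    return codes

codes = {
    "Fast food": ["We grabbed a burger after work"],
    "Restaurant": ["We booked a table for eight"],
}
merge_codes(codes, "Restaurant", "Fast food")
print(list(codes))                         # ['Restaurant (merged)']
print(len(codes["Restaurant (merged)"]))   # 2
```

The key point the sketch captures is that no coded text is lost in a merge – the quote lists are simply concatenated, which is why merging is a safe, cheap operation compared with splitting.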

 

Since it is so easy to merge topics in qualitative software, I generally suggest that people aren’t afraid to create a large number of very specific topics, knowing they can merge them together later. For example, if you are creating a code for when people talk about eating out at a restaurant, why not start with separate codes for Fast food, Mexican, Chinese, Haute cuisine etc.? You can always merge them later under the generic ‘Restaurant’ theme if you decide you don’t need that much detail.

 

It is also possible to retroactively split broad codes into smaller categories, but this is a much more involved process. To do this in Quirkos, I would start by taking the code you want to expand (say Restaurant) and making sure it is a top-level code – in other words, not a subcategory of another code. Then create the codes you want to break out (for example Thai, Vegetarian, Organic) and make them subcategories of the main node. Then double-click on the top Quirk, and you will get a list of all the text coded to the top node (Restaurant). From this view in Quirkos, you can drag and drop each quote into the relevant subcategory (eg Organic, Thai):


splitting qualitative codes in quirkos


Once you have gone through and recoded all the quotes into the new codes, you can either delete the quotes from the top-level code (Restaurant) one by one (by right-clicking on the highlight stripe), or remove all quotes from that node by deleting the top node entirely. If you still want a Restaurant Quirk at the top to contain the subcategories, just recreate it and add the subcategories to it. That way you will have a blank ‘Restaurant’ theme to keep the subcategories (Thai, Organic) together.
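The splitting workflow can be sketched in the same hypothetical dict-based model: the researcher reviews each quote under the broad theme, assigns it to a new subcategory, and the parent is left as an empty grouping theme. All names and quotes here are invented; in Quirkos this is done by drag and drop, not code:

```python
# Hypothetical sketch of splitting a broad code into subcategories.

broad = {"Restaurant": [
    "The pad thai there is wonderful",
    "Everything on the menu is locally sourced",
]}

# The researcher decides where each quote belongs (hard-coded here, but
# in practice this is the slow, human part of the process):
assignments = {
    "The pad thai there is wonderful": "Thai",
    "Everything on the menu is locally sourced": "Organic",
}

subcodes = {"Thai": [], "Organic": []}
for quote in broad["Restaurant"]:
    subcodes[assignments[quote]].append(quote)

broad["Restaurant"] = []  # parent emptied, kept to group the subcategories
print({name: len(quotes) for name, quotes in subcodes.items()})
# {'Thai': 1, 'Organic': 1}
```

Notice that, unlike the merge, the computer cannot do the interesting step for you: every quote needs a human decision, which is exactly why splitting is the more labour-intensive direction.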

 

So to summarise: don’t be afraid of having too many codes in CAQDAS software – use the flexibility it gives you to experiment. While you can have too much of a good thing, the software will help you see all the coding options at once, so you can decide the best place to categorise each quote. With the ability to merge, and even split apart codes with a little effort, it’s always possible to adjust your coding framework later – in fact you should anticipate the need to do this as you refine your interpretations. You can also save your project at one stage of the coding, and go back to that point if you need to revert to an earlier state to try a different approach. For more information about large or small coding strategies, this blog post goes into more depth.


If you want to see how this works in Quirkos, just download the free trial and try for yourself. Quirkos makes operations like merge and split really easy, and the software is designed to be intuitive, visual and colourful. So give it a try, and always contact us if you have any questions or suggestions on how we can make common operations like this quicker and simpler!

 

 

Turning qualitative coding on its head

CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=248747


For the first time in ages I attended a workshop on qualitative methods, run by the wonderful Johnny Saldaña. Developing software has become a full-time (and then some) occupation for me, which means I have little scope for my own professional development as a qualitative researcher. This session was not only a welcome change, but also an eye-opening critique of the way that many in the room (myself included) approach coding qualitative data.

 

Professor Saldaña has written an excellent Coding Manual for Qualitative Researchers, and the workshop really brought to life some of the lessons and techniques in the book. Fundamental to all the approaches was a direct challenge to researchers doing qualitative coding: code different.

 

Like many researchers, I am guilty of treating coding as a reductive, mechanical exercise. My codes tend to be very basic and descriptive – what is often called index coding. Each is often a single summary word for what the sentence or section of text is literally about. From this, I will later take a more ‘grandstand’ view of the text and codes themselves, looking at connections between themes to create categories that are closer to theory and insight.

 

However, Professor Saldaña presented (by my count) at least 12 different coding frameworks and strategies that were completely new to me. While I am not going to go into them all here (that’s what the textbook, courses and companion website are for!), it was not one particular strategy that stuck with me, but the diversity of approaches.

 

It’s easy when you start out with qualitative data analysis to try a simple strategy – after all it can be a time consuming and daunting conceptual process. And when you have worked with a particular approach for many years (and are surrounded by colleagues who have a similar outlook) it is difficult to challenge yourself. But as I have said before, to prevent coding being merely a reductive and descriptive act, it needs to be continuous and iterative. To truly be analysis and interrogate not just the data, but the researcher’s conceptualisation of the data, it must challenge and encourage different readings of the data.

 

For example, Professor Saldaña actually has a background in performance and theatre, and brings some common approaches from that sphere to the coding process: exactly the kind of cross-disciplinary inspiration I love! When an actor or actress is approaching a scene or character, they may engage with the script (which is much like a qualitative transcript) looking at the character's objectives, conflicts, tactics, attitudes, emotions and subtexts. The question is: what is the character trying to do or communicate, and how? This sort of actor-centred approach works really well in qualitative analysis, in which people, narratives and stories are often central to the research question.

 

So if you have an interview with someone, for example on their experience with the adoption process, imagine you are a writer dissecting the motivations of a character in a novel. What are they trying to do? Justify how they would be a good parent (objectives)? Ok, so how are they doing this (tactics)? And what does this reveal about their attitudes and emotions? Is there a subtext here – are they hurt because of a previous rejection?

 

Other techniques emphasised creating codes based around emotions, participants’ values, or even actions: for example, can you make all your codes gerunds (words that end in –ing)? While there was a distinct message that researchers can mix and match these different coding categories, it felt to me like a really good challenge to try to view the whole data set from one particular viewpoint (for example conflicts) and then step to one side and look again with a different lens.

 

It’s a little like trying to understand a piece of contemporary sculpture: you need to see it up close, far away, and then from different angles to appreciate the different forms and meanings. Looking at qualitative data can be similar – sometimes the whole picture looks so abstract or baffling that you have to dissect it in different ways. But often the simplest methods of analysis are not going to provide real insight. Analysing a Henry Moore sculpture by the simplest categories (material, size, setting) may not give much more understanding. Cutting a work up into sections like head, torso or leg does little to explore the overall intention or meaning. And certain data or research questions suit particular analytical approaches: if a sculpture is purely abstract, it is not useful to look for aspects of the human form – even if the eye is constantly searching for such associations.

 

Here, context is everything. Can you get a sense of what the artist wanted to say? Was it an emotion, a political statement, a subtle treatise on conventional beauty? And much as with impressionist painting, sometimes a very close reading stops the viewer from seeing the picture for the brush strokes.

 

Another talk I attended, on how researchers use qualitative analysis software, noted that some users assumed the software and the coding process were a replacement for, or better than, a close reading of the text. While I don’t think that coding qualitative data can ever replace a detailed reading or familiarity with the source text, coding exercises can help you read in different ways, and hence allow new interpretations to come to light. Use them to read your data sideways, backwards, and through someone else’s eyes.

 

Software can also help manage and make sense of these different readings. If you have different coding categories from different interpretations, you can store them together, and use different parts of each interpretation. It can also make it easier to experiment, and to look back at different stages of the process at any time. In Quirkos you can use the Levels feature to group different categories of coding together, and look at any one (or several) of those lenses at a time.

 

Whatever approach you take to coding, try to really challenge yourself, so that you are forced to categorise and thus interpret the data in different ways. And don't be surprised if the first approach isn't the one that reveals the most insight!

 

There is a lot more on our blog about coding, for example populating a coding framework and coding your codes. There will also be more articles on coding qualitative data to come, so make sure to follow us on Twitter, and if you are looking for simple, unobtrusive software for qualitative analysis check out Quirkos!

 

Workshop exercises for participatory qualitative analysis

participatory workshop

I am really interested in engaging research participants in the research process. While there is an increasing expectation for ‘lay’ researchers to set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them with the analysis of the research data, and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training, and in this blog post from last year I describe how a small group were able to learn the software and code qualitative data in a two-hour session. I am revisiting and expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if asked to do this work voluntarily parallel to the full time, paid job of the researcher! 


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example Jackson (2008) uses group exercises successfully with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for the Scottish Referendum Project, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that it could.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and most scary scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework, and coding data. This is probably the most time consuming, and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on coding the data explicitly, but on consideration of the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process is like grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily utilise the developed framework for coding later. This could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task is to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific subcategories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which directions topics should be explored in detail (say, Expensive food or Lack of open space).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed, and the focus of the session is to look at coded transcripts (on screen or in printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach is chosen, and how far the participatory process is taken, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way will challenge the assumptions of the research team, lead to better results, and produce more relevant and impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Starting out in Qualitative Analysis

Qualitative analysis 101

 

When people are doing their first qualitative analysis project using software, it’s difficult to know where to begin. I get a lot of e-mails from people who want some advice in planning out what they will actually DO in the software, and how that will help them. I am happy to help out individually, because everyone’s project is different. However, here are a few pointers which cover the basics and can help demystify the process. These should actually apply to any software, not just Quirkos!

 

First off: what are you going to be able to do? In a nutshell, you will read through the sources, and for each section that is interesting to you and about a certain topic, you will ‘code’ or ‘tag’ that section of text to that topic. By doing this, the software lets you quickly see all the sections of text, the ‘quotes’ about that topic, across all of your sources. So you can see everything everyone said about ‘Politics’ or ‘Negative’ – or both.

 

You can then look for trends or outliers in the project, by looking at just responses with a particular characteristic like gender. You’ll also be able to search for a keyword, and generate a report with all your coded sections brought together. When you come to write up your qualitative project, the software can help you find quotes on a particular topic, visualise the data or show sub-section analysis.  
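As a rough mental model of what the software is doing (not Quirkos’s actual file format – all field names and data here are invented), each coded segment can be thought of as a record linking a source, a topic and a quote, and retrieval is just a filter over those records:

```python
# A toy model of what any CAQDAS package stores: each coded segment is a
# record linking a source, a topic and a quote. One segment can carry
# several codes, so 'Politics' and 'Negative' can share a quote.

codings = [
    {"source": "Interview 1", "topic": "Politics", "quote": "I never trusted the council"},
    {"source": "Interview 1", "topic": "Negative", "quote": "I never trusted the council"},
    {"source": "Interview 2", "topic": "Politics", "quote": "Voting felt pointless"},
]

def quotes_about(topic):
    """Everything everyone said about a given topic, across all sources."""
    return [c["quote"] for c in codings if c["topic"] == topic]

print(quotes_about("Politics"))
# ['I never trusted the council', 'Voting felt pointless']
```

Every report, query and visualisation described in the steps below is, conceptually, a different filter or summary over this same table of coded segments.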

 

So here are the basic steps:

 

1.       Bring in your sources.
I’m assuming at this stage that you already have the qualitative data you want to work with. This could be any source of text on your computer: if you can copy and paste it, you can bring it into Quirkos. For this example, let’s assume that you have transcripts from interviews: you have already done a series of interviews, transcribed them, and have them in a file (say a Word document or raw text file). I’d suggest that before you bring them in, you have a quick look through in a word processor and correct any typos and misheard words. While you can edit the text in Quirkos later, in Word or an equivalent you have the advantage of spelling and grammar checkers.

 

Now, create a new, unstructured project in Quirkos, and save it somewhere locally on your computer. We don’t recommend saving directly to a network location or USB stick, as if either of these goes down, you will have a problem! Next, bring in the sources using the (+) Add Source button on the bottom right. You can bring in each file one at a time, or a whole folder of files in one go, in which case the file name will become the default source name. Don’t forget, you can always add more sources later; there is no need to bring in everything before you start coding. Now your project file (the little .qrk file you named) will contain all the text sources in one place. With Quirkos files, just backing up and copying this file saves the whole project.

 


2.       Describe your sources
It’s usually a good idea to describe some characteristics of your qualitative sources that you might use later to look for differences or similarities in the data. Often these are basic demographic characteristics like age or gender, but can also be things about the interview, such as the location, or your own notes.

 

To do this in Quirkos, click on the little grid button on the top right of the screen, and use the source properties. The first thing you can do here is change the name of the sources from the default (either a sequential number like ‘Source 7’ or the file name). You can create a property with the square [+] ‘Quickly add a new property’ button. The property (eg Gender) and a single value (eg Male) can be added here. The drop-down arrow next to that property can be used later to add extra values.

 

The reason for doing this is that you can later run ‘queries’ which show results from just those sources that have the properties you defined. So you can do a side-by-side comparison of coded responses from men next to women. Don’t forget, you can add properties at any time, so you can even create a variable for ‘these people don’t fit the theory’ after you’ve coded, and try to see what they are saying that makes them different.
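A query like the gender comparison above can be modelled as filtering coded quotes by the properties of their source. This is only an illustrative sketch with invented names and data, not how any particular package implements it:

```python
# Hypothetical model of a property-based 'query': quotes on a topic,
# restricted to sources that have a given property value.

properties = {
    "Interview 1": {"Gender": "Female", "Location": "Leith"},
    "Interview 2": {"Gender": "Male", "Location": "Portobello"},
}
codings = [
    {"source": "Interview 1", "topic": "Health", "quote": "The clinic was too far away"},
    {"source": "Interview 2", "topic": "Health", "quote": "I could never get an appointment"},
]

def query(topic, prop, value):
    """Filter coded quotes by a property of the source they came from."""
    return [c["quote"] for c in codings
            if c["topic"] == topic and properties[c["source"]].get(prop) == value]

# Side-by-side comparison of the same topic across two groups:
print(query("Health", "Gender", "Female"))  # ['The clinic was too far away']
print(query("Health", "Gender", "Male"))    # ['I could never get an appointment']
```

Because the properties live on the source rather than on each quote, adding a new property later (like the ‘don’t fit the theory’ flag) instantly applies to everything already coded from that source.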

 

 

3.       Create your themes
Whatever you call them – themes, nodes, bubbles, topics or Quirks – these are the categories of interest you want to collect quotes about from the text. There are two approaches here: you can try to create all the categories you will use before you start reading and coding the text (this is sometimes called a framework approach), or you can add themes as you go (grounded theory). (For much more on these approaches, look here and here.)

 

In Quirkos, you create themes as coloured bubbles, which grow in size the more text is added to them. Just click on the grey (+) button on the top right of the canvas view to add a new theme. You can also change the name, colour and level in this dialogue, or right-click on the bubble and select ‘Quirk Properties’ at any time. To group themes, just drag and drop bubbles on top of each other.

 

 

4.       Do your coding
Essentially, the coding process involves finding every time someone said something about ‘Dieting’ and adding that sentence or paragraph to the ‘Dieting’ bubble or node. This is what is going to take the most time in your analysis (days or weeks) and is still a manual process. It’s best to read through each source in turn, and code it as you go.

 

However, you can also use the keyword search to look for words like ‘Diet’ or ‘eating’ and code from the results. This makes it quicker, but there is the risk of missing segments that use a keyword you didn’t think to search for like ‘cut-down’. The keywords search can help when you (inevitably) decide to add a new topic halfway through, and the first few interviews haven’t been coded for the new themes. You can use the search to look for related terms and find those new segments without having to go over the whole text again.

 

 

5.       Be iterative
Even if you are not using a grounded theory approach, going back over the data a second time, and rethinking codes and how you have categorised things can be really useful. Trust me: even if you know the data pretty well, after reading it all again, you will see some topics in a slightly different light, or will find interesting things you never thought would be there.

 

You may also want to rearrange your codes, especially if you have grouped them. Maybe the name you gave a theme isn’t quite right any more: it’s grown, or got more specific. Some vague codes like ‘Angry’ might need to be split out into ‘Irate’ and ‘Annoyed’. Depending on your approach, you will probably constantly tweak and adjust the themes and coding so they best represent the intersection of your research questions and data.

 

 

6.       Explore the data.
Once your qualitative data is all coded, the big advantages of using CAQDAS software come into play. Using the database of your tagged text, you can choose to look at it in any way: by any of the source properties, by who did the coding or when, or by whether a result comes from a particular group of codes. This is done using the 'Query' views in Quirkos.

 

In Quirkos there are also a lot of visualisation options that can show you the overall shape and structure of the project, the amount of coding, and connections that are emerging between the sources. You can then use these to help write your outputs, be they journal articles, evaluations or a thesis. Software will generate reports that let you share summaries of the coded data, and include key statistics and overviews of the project.


While it does seem like a lot of work to get to this stage, it can save so much time at the final stages of writing up your project, when you can call up a useful quote quickly. It also can help in the future to have this structured repository of qualitative data, so that secondary analysis or adding to the dataset does not involve re-inventing the wheel!

 

Finally, there is no one-size-fits-all approach, and it's important to find a strategy that fits with your way of working. Before you set out, talk to peers and supervisors, read guides and textbooks, and even go on training courses. While the software can help, it's not a replacement for considered thinking, and you should always have a good idea about what you want to do with the data in the end.

 

 

Using Quirkos for fun and (extremely nerdy) projects

This week, something completely different! A guest blog from our own Kristin Schroeder!

 

Most of our blog is a serious and (hopefully) useful exploration of current topics in qualitative research and how to use Quirkos to help you with your research. However we thought it might be fun to share something a little different.


I first encountered qualitative research in a serious manner when I joined Quirkos in January this year, and to get up to speed I tried coding a few texts to help me understand the software.
One of the texts I used was a chapter from The Lord of the Rings because, I thought, with something I already knew like the back of my hand I could concentrate on the mechanics of coding without being distracted too much by the content.


I chose ‘The Council of Elrond’ – one of the longest chapters in the book and one often derided for being little more than an extended information dump. Essentially lots and lots of characters (some of whom only appear in this one scene in the whole book) sit around and tell each other about stuff that happened much earlier. It’s probably not Tolkien’s finest writing, and I suppose, most modern editors would demand that all that verbal exposition should either be cut or converted into actual action chapters.


I have always loved the Council chapter, however, as to me it’s part of the fascinating backdrop of the Lord of the Rings. As Tolkien himself puts it in one of his Letters:


“Part of the attraction of the L.R. is, I think, due to the glimpses of a large history in the background: an attraction like that of viewing far off an unvisited island, or seeing the towers of a distant city gleaming in a sunlit mist.”


Of course, if you are a Tolkien fan(atic) you can go off and explore these unvisited islands and distant cities in the Silmarillion and the Histories of Middle Earth, and then bore your friends by smugly explaining all the fascinating First and Second Age references, and just why Elrond won’t accept an Oath from the Fellowship. (Yes, I am guilty of that…)


Looking at the chapter using Quirkos I expected to see bubbles growing around the exchange of news, around power and wisdom, and maybe to get some interesting overlap views on Frodo or Aragorn. However, the topic that surprised me most in this chapter in particular was Friendship.


I coded the topic ‘Friendship’ 29 times – as often as ‘Relaying News’ and ‘History’, and more often even than collective mentions of Elves (27), Humans (19) or the Black Riders (24).


The overlap view of ‘Friendship’ was especially unexpected:

 

The topics ‘Gandalf’ and ‘Friendship’ overlap 22 times, which is not totally surprising, since Gandalf does most of the talking throughout the chapter and he is the only character who already knows everyone else in the Council. But the second most frequent overlap is with Elrond: he intersects with Friendship eight times – more often than Frodo, who gets only five!
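Overlap counts like these are simple co-occurrence tallies across coded segments. A minimal sketch of the idea (with hypothetical example data, not the actual chapter coding or Quirkos’s internal implementation):

```python
from itertools import combinations
from collections import Counter

# Each coded segment carries the set of codes applied to it
# (hypothetical toy data for illustration).
segments = [
    {"Gandalf", "Friendship"},
    {"Gandalf", "Friendship", "History"},
    {"Elrond", "Friendship"},
    {"Frodo", "Relaying News"},
]

# Count how often each pair of codes lands on the same segment.
overlaps = Counter()
for codes in segments:
    for pair in combinations(sorted(codes), 2):
        overlaps[pair] += 1

print(overlaps[("Friendship", "Gandalf")])  # → 2 in this toy data
```

Sorting each segment’s codes before pairing means `("Friendship", "Gandalf")` and `("Gandalf", "Friendship")` count as the same overlap.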


Like most of the Elves in Lord of the Rings, Elrond is rather aloof and even in his own council acts as a remote facilitator for the other characters. Yet, the cluster view on Friendship led me to reconsider his relationship not only with Gandalf (when Gandalf recites the Ring inscription in the Black Speech, he strongly presumes on Elrond’s friendship, and Elrond forgives him because of that friendship) but also with Bilbo.


Re-reading Elrond’s exchanges with Bilbo during the Council, I was struck by the gentle teasing apparent in the hobbit’s reminders of his need for lunch, and in Elrond’s requests that Bilbo tell his story without too many embellishments – and not in verse. The friendship between Bilbo and Elrond also rather explains how Bilbo had the guts to compose and perform a song about Elrond’s father Eärendil in the previous chapter, something even Aragorn, Elrond’s foster son, described as a daring act.


Perhaps none of this is terribly surprising. Within the unfolding story of the Lord of the Rings, Bilbo has been living in Elrond’s house for 17 years - time enough even for busy Elflords to get to know their house guests. And for readers who grew up with the tale of The Hobbit, Bilbo’s centrality may also not be much of a surprise. For me, however, looking at the chapter using Quirkos opened up a rather pleasing new dimension and led me to reconsider a couple of beloved characters in a new light.

 

 

Participatory Qualitative Analysis

laptops for qualitative analysis

 

Engaging participants in the research process can be a valuable and insightful endeavour, leading to researchers addressing the right issues and asking the right questions. Many funding boards in the UK (especially in health) make engagement with members of the public, or the intended beneficiaries of the research, a requirement of publicly funded research.

 

While there are similar obligations to provide dissemination and research outputs targeted at ‘lay’ members of the public, the engagement process usually ends at the planning stage. It is rare for researchers to have participants, or even major organisational stakeholders, become part of the analysis process, and to use their interpretations to translate the data into meaningful findings.

 

With surprisingly little training, I believe that anyone can do qualitative analysis, and engage in tasks like coding and topic discovery in qualitative data sets.

 

I’ve written about this before, but earlier this year we actually had a chance to try this out with Quirkos. It was one of the main reasons we wanted to design new qualitative analysis software: existing solutions were too difficult to learn for non-expert researchers (and for quite a lot of experienced experts too).

 

So when we did our research project on the Scottish Referendum, we invited all of the participants to come along to a series of workshops and try analysing the data themselves. Of the 12 participants, only three actually came along, and none of them had any previous experience of qualitative research.

 

And they were great at it!

 

In a two-hour session, respondents were given a quick (just 15 minute) overview of how to code in Quirkos, and a basic framework of codes they could use to analyse the text. They were free to use these topics or create their own as they wished – all three participants chose to add codes to the existing framework.

 

They were each given transcripts from someone else’s anonymised interview: as these were group sessions, we didn’t want people to be identified while coding their own transcript. Each transcript came from a 30-minute interview and was around 5000 words long. In the two-hour session, all participants had coded one interview completely, and most (or all) of a second. One participant was so engrossed in the process that he had to be sent home before he missed his dinner, but he took a copy of Quirkos and the data home to keep working on his own computer.

 

The graph below shows how quickly the participants learnt to code. The y axis shows the number of seconds between each ‘coding event’ – every time someone coded a new piece of text – numbered sequentially along the x axis. The time taken to code starts off high, with questions and missteps meaning each event takes a minute or more. However, the time between events quickly decreases; in fact, on average respondents added a code every 20 seconds. This is after any gaps longer than 3 minutes have been removed – these are assumed to be breaks for tea or debate! Each user made at least 140 tags, assigning text to one or more categories.
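The timing analysis behind the graph can be reproduced in a few lines. A sketch with assumed data (hypothetical timestamps for one participant; the real event logs are in the project file), applying the same 3-minute cut-off for breaks:

```python
# Hypothetical timestamps (seconds from session start) of one
# participant's coding events - illustrative, not the real data.
events = [0, 70, 95, 115, 130, 150, 400, 415, 430]

# Intervals between consecutive coding events.
intervals = [b - a for a, b in zip(events, events[1:])]

# Drop gaps longer than 3 minutes, assumed to be breaks for tea or debate.
worked = [i for i in intervals if i <= 180]

avg = sum(worked) / len(worked)
print(f"average seconds per coding event: {avg:.1f}")
```

Plotting `worked` against its index would give the learning curve described above: long early intervals shrinking towards a steady coding rhythm.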

 

 

So participants can be used as cheap labour to speed up or triangulate the coding process? Well, it can be more than this. The topics they chose to add to the framework (‘love of Scotland’, ‘anti-English feelings’, ‘Scottish Difference’) highlighted their own interpretations of the data, showing their own opinions and variations. It also prompted discussion with the other coders about what they thought of the views of people in the dataset, and how they had interpreted the data:


“Suspicion, oh yeah, that’s negative trust. Love of Scotland, oh! I put anti-English feelings which is the opposite! Ours are like inverse pictures of each other’s!”

 

Yes: obviously we recorded and transcribed the discussions and reflections, and analysed them in Quirkos! And these revealed that people expressed familiar issues with reflexivity, reliability and process that could have come from experienced qualitative researchers:


“My view on what the categories mean or what the person is saying might change before the end, so I could have actually read the whole thing through before doing the comments”


“I started adding in categories, and then thinking, ooh, if I’d added that in earlier I could actually have tied it up to such-and-such comment”


“I thought that bit revealed a lot about her political beliefs, and I could feel my emotions entering into my judgement”


“I also didn’t want to leave any comment unclassified, but we could do, couldn’t we? That to me is about the mechanics of using the computer, ticky box thing.”

 

This is probably the most useful part of the project to a researcher: the input of participants can be used as stimulus for additional discussion and data collection, or to challenge the way researchers do their own coding. I found myself being challenged about how I had assigned codes to controversial topics, and researchers could use a more formal triangulation process to compare coding between researchers and participants, thus verifying themes, or identifying and challenging significant differences.

 

Obviously, this was a tiny experimental project, and the experience of three well-educated, middle-class Scots should not be taken to mean that anyone can (or would want to) do this kind of analysis. But I believe we should try this kind of approach whenever it is appropriate. For most social research, the experts are the people who are always in the field – the participants who are living these lives every day.

 

You can download the full report, as well as the transcripts and coded data as a Quirkos file from http://www.quirkos.com/workshops/referendum/

 

 

Our hyper-connected qualitative world

qualitative neurons and connections

 

We live in a world of deep qualitative data.

 

It’s often proposed that we are very quantitatively literate. We are exposed to numbers and statistics frequently: in news reports, at work, when driving, with fitness apps, etc. So we are actually pretty good at understanding things like percentages and fractions, and making sense of them quickly. It’s one reason why people like to see graphs and numerical summaries of data in reports and presentations: they are a near universal language that people can quickly understand.

 

But I believe we are also really good at qualitative understanding.

 

Bohn and Short, in a 2009 study, estimated that “The average American consumes 100,500 words of information in a single day”, made up of conversations, TV shows, news, written articles, books… It sounds like a staggering amount of qualitative data to be exposed to – basically a whole PhD thesis every single day!

 

Obviously, we don’t digest and process all of this: people are extremely good at filtering this data, ignoring adverts, skimming websites to get to the articles we are interested in (and then skim reading those), and, of course, summarising the gist of conversations with a few words and feelings. That’s why I argue that we are nearly all qualitative experts, summarising and making connections with qualitative life all the time.


And those connections are the most important thing – the skill that socially astute humans exercise so well. We can pick up on unspoken qualitative nuances when someone tells us something, and understand the context of a news article based on the author and what is being reported. Words we hear such as ‘economy’ and ‘cancer’ and ‘earthquake’ are imbued with meaning for us, connecting to other things such as ‘my job’ and ‘fear’ and ‘buildings’.

 

This neural network of meaning is a key part of our qualitative understanding of the world, and whether or not we want to challenge these associations between language and meaning through some kind of Derridean deconstruction, they form a key part of our daily prejudices and understanding of the world in which we live.

 

For me, a key problem with qualitative analysis is that it struggles to preserve or record these connections and lived associations. I touched on this issue of reductionism in the last blog post on structuring unstructured qualitative data, but it can be considered a major weakness of qualitative analysis software. Essentially, one removes these connected meanings from the data and reduces them to a binary category or, at best, a point on a scale.

 

Incidentally, this debate about scaling and quantifying qualitative data has been going on for at least 70 years: Guttman, in a 1944 article, already noted ‘considerable discussion concerning the utility of such orderings’. What frustrates me at the moment is that while some qualitative analysis software can help with scaling this data, or even present it on a 2- or 3-dimensional scale by applying attributes such as weighting, it is still a crude approximation of the complex neural connections of meaning that deep qualitative data possesses.

 

In my experiments getting people with no formal qualitative or research experience to try qualitative analysis with Quirkos, I am always impressed at how quickly people take to it, and can start to code and assign meaning to qualitative text from articles or interviews. It’s something we do all the time, and most people don’t seem to have a problem categorising qualitative themes. However, many people soon find the activity restrictive (just like trained researchers do) and worry about how well a basic category can represent some of the more complex meanings in the data.

 

Perhaps one day there will be practical computers and software that ape the neural networks that make us all such good qualitative beings, and can automatically understand qualitative connections. But until then, the best way of analysing data seems to be to tap into any one of these freely available neural networks (i.e. a person) and use their lived experience in a qualitative world in partnership with a simple software tool to summarise complex data for others to digest.

 

After all, whatever reports and articles we create will have to compete with the other 100,000 words our readers are consuming that day!

 

 

Structuring unstructured data

 

The terms ‘unstructured data’ and ‘qualitative data’ are often used interchangeably, but unstructured data is becoming more commonly associated with data mining and big data approaches to text analytics. Here the comparison is drawn between databases with defined fields and known values, and the loosely structured (especially to a computer) world of language, discussion and comment. A qualitative researcher lives in a realm of unstructured data: the person they might be interviewing doesn’t have a happy/sad sign above their head, so the researcher (or friend) must listen and interpret their interactions and speech to make a categorisation based on the available evidence.


At their core, all qualitative analysis software systems are based around defining and coding: selecting a piece of text, and assigning it to a category (or categories). However, it is easy to see this process as ‘reductionist’: essentially removing a piece of data from its context, and defining it as a one-dimensional attribute. This text is about freedom. This text is about liberty. Regardless of the analytical insight of the researcher in deciding what the relevant themes should be, and then filtering a sentence into that category, the final product appears to be a series of lists of sections of text.


This process leads to difficult questions such as, is this approach still qualitative? Without the nuanced connections between complicated topics and lived experiences, can we still call something that has been reduced to a binary yes/no association qualitative? Does this remove or abstract researchers from the data? Isn't this a way of quantifying qualitative data?


While such debates are similarly multifaceted, I would usually argue that this process of structuring qualitative data does begin to categorise and quantify it, and it does remove researchers from their data. But I also think that for most analytical tasks this is OK, if not essential! Lee and Fielding (1996) say that “coding, like linking in hypertext, is a form of data reduction, and for many qualitative researchers is an important strategy which they would use irrespective of the availability of software”. When a researcher turns a life into a one-year ethnography, or a one-hour interview, that is a form of data reduction. So is turning an audio recording into a transcript, and so is skim reading and highlighting printed versions of that text.


It’s important to keep an eye on the end game for most researchers: producing a well-evidenced, accurate summary of a complex issue. Most research, be it a formula to predict the world or a journal article describing it, is a communication exercise that (purely by the laws of entropy, if not practicality) must be briefer than the sum of its parts. Yet we should also be much more aware that we are doing this and, alongside our personal reflexivity, think about methodological reflexivity, acknowledging what is being lost or given prominence in our chosen process.


Our brains are extremely good at comprehending the complex web of qualitative connections that make up everyday life, and even for experienced researchers our intuitive insight into these processes often seems to go beyond any attempt to rationalise it. A structuralist approach to qualitative data can not only help as an aide-mémoire, but also demonstrate our process to others, and challenge our own assumptions.


In general I would agree with Kelle (1997) that “the danger of methodological biases and distortion arising from the use of certain software packages is overemphasized in current discussions”. It’s not the tool, it’s how you use it!

Qualitative evaluations: methods, data and analysis

reports on a shelf

Evaluating programmes and projects is an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure that there are defined and measurable outcomes.

 

While most evaluations generally include numerical analysis, qualitative data is often used alongside the quantitative to show the richness of project impact, and to put a human voice in the process. Especially when a project doesn’t meet targets or have the desired level of impact, comments from project managers and service users usually give the most insight into what went wrong (or right) and why.

 

For smaller pilot and feasibility projects, qualitative data is often the mainstay of the evaluation, when numerical data wouldn’t support statistical analysis, or when it is too early in a programme to measure intended impact. For example, a programme looking at obesity reduction might not be able to demonstrate a lower number of diabetes referrals at first, but qualitative insight in the first year or few months of the project might show how well messages from the project are being received, or whether targeted groups are talking about changing their behaviour. When goals like this are long term (and in public health and community interventions they often are), it’s important to continuously assess the precursors to impact – namely engagement – and this is usually best done in a qualitative way.

 

So, what is best practice for qualitative evaluations? Fortunately, there are some really good guides and overviews that can help teams choose the right qualitative approach. Vaterlaus and Higginbotham give a great overview of qualitative evaluation methods, while Professor Frank Vanclay talks at a wider level about qualitative evaluations and innovative ways to capture stories. There is also a nice ‘tick-box’ style guide produced by the old Public Health Resource Unit, which can still be found at this link. Essentially, the tool suggests 10 questions that can be used to assess the quality of a qualitative-based evaluation – really useful when looking at evaluations that come from other fields or departments.

 

But my contention is that the appraisal tool above is best implemented as a guide for producing qualitative evaluations. If you start by considering the best approach, how you are going to demonstrate rigour, choosing appropriate methods and recruitment, you’ll get a better report at the end of it. I’d like to discuss and expand on some of the questions used to assess the rigour of the qualitative work, because this is something that often worries people about qualitative research, and these steps help demystify good practice.

 

  1. The process: Start by planning the whole evaluation from the outset: What do you plan to do? All the rest will then fall into place.
     
  2. The research questions: what are they and why were these chosen? Are the questions going to give the evaluation the data it needs, and will the methods capture that correctly?
     
  3. Recruitment: who did you choose, and why? Who didn’t take part, and how did you find people? What gaps are there likely to be in representing the target group, and how can you compensate for this? Were there any ethical considerations, how was consent gained, and what was the relationship between the participants and the researcher(s)? Did they have any reason to be biased or not truthful?
     
  4. The data: how did you know that enough had been collected? (Usually when you are starting to hear the same things over and over – saturation) How was it recorded, transcribed, and was it of good quality? Were people willing to give detailed answers?
     
  5. Analysis: make sure you describe how it was done, and what techniques were used (such as discourse or thematic analysis). How does the report choose which quotes to reproduce, and are there contradictions reported in the data? What was the role of the researcher – should they declare a bias, and were multiple views sought in the interpretation of the data?
     
  6. Findings: do they meet the aims and research questions? If not, what needs to be done next time? Are there clear findings and action points, appropriate to improving the project?

 

Then the final step for me is the most important of all: SHARE! Don't let it end up on a dusty shelf! Evaluations are usually seen as a tedious but necessary internal process, but they can be so useful as case studies and learning tools in organisations and groups you might never have thought of. This is especially true if there are things that went wrong – help someone in another local authority avoid making the same mistakes!

 

At the moment the best UK repositories of evaluations are based around health and economic benefits, but that doesn’t stop you putting the report on your organisation’s website – if someone is looking for a similar project, search engines will do the leg work for you. That evaluation might save someone a lot of time and money. And it goes without saying: look for any similar work before you start a project – you might get some good ideas, and stop yourself falling into the same pitfalls!

 

6 meta-categories for qualitative coding and analysis

rating for qualitative codes

When doing analysis and coding in a qualitative research project, it is easy to become completely focused on the thematic framework, and on deciding what a section of text is about. However, qualitative analysis software is a useful tool for organising more than just the topics in the text: it can also be used for deeper contextual and meta-level analysis of the coding and data.


Because you can record and categorise pretty much anything you can think of, and assign multiple codes to one section of text, it often helps to have codes about the analysis itself, to help with managing quotes later and with deeper conceptual issues. Some coders use a ranking system so they can find the best quotes quickly. Or you can have a category for quotes that challenge your research questions, or that seem to contradict other sources or findings. Here are 6 suggestions for these meta-level codes you could create in your qualitative project (be it Quirkos, Nvivo, Atlas-ti or anything!):

 

 

Rating
I always have a node I call ‘Key Quotes’ where I keep track of the best verbatim snippets from the text or interview. It’s for the excited feeling you get when someone you interviewed sums up a problem or your research question in exactly the right way, and you know that you are going to end up using that quote in an article. Or even for the title of the article!


However, another way you can manage quotes is to give them a ranking scheme. This was suggested to me by a PhD student at Edinburgh, who gives quotes a ranking from 1-5, with each ‘star rating’ as a separate code. That way, it’s easy to cross-reference and find all the best quotes on a particular topic. If there aren’t any 5* quotes, you can work down to the 4-star or 3-star quotes. It’s a quick way to find the ‘best’ content, or show who is saying the best stuff. Obviously, you can do this with as little or as much detail as you like, ranking from 1-10 or just having ‘Good’ and ‘Bad’ quotes.


Now, this might sound like a laborious process, effectively adding another layer of coding. However, once you are in the habit, it really takes very little extra time and can make writing up a lot quicker (especially with large projects). By using the keyboard shortcuts in Quirkos, it will only take a second more. Just assign the keyboard numbers 1-5 to the appropriate ranking code, and because Quirkos keeps the highlighted section of text active after coding, you can quickly add to multiple categories. Drag and drop onto your themes, and hit a number on the keyboard to rank it. Done!
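Conceptually, cross-referencing a ranking code with a topic code is just intersecting the two sets of coded segments. A minimal sketch with hypothetical data (the code names and segment ids are invented for illustration, not Quirkos's internal model):

```python
# Map each code to the set of segment ids it was applied to
# (hypothetical data for illustration).
coded = {
    "5-star":  {1, 4, 9},
    "4-star":  {2, 7},
    "Housing": {1, 2, 3, 9},
}

def best_quotes(topic, ratings=("5-star", "4-star")):
    """Return segment ids for a topic, working down from the best rating."""
    for rating in ratings:
        hits = coded[rating] & coded[topic]
        if hits:
            return rating, sorted(hits)
    return None, []

print(best_quotes("Housing"))  # → ('5-star', [1, 9])
```

Working down the rating list mirrors the advice above: take the 5* quotes on a topic if any exist, otherwise fall back to 4-star, and so on.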

 

 

Contradictions
It is sometimes useful to record in one place the contradictions in the project – these might be within a source, where one person contradicts themselves, or where a statement contradicts something said by another respondent. You could even have a separate code for each type of contradiction. Keeping track of these can not only help you see difficult sections of data you might want to review again, but also show when people are unsure or even deceptive in their answers on a difficult subject. The overlap view in Quirkos can quickly show you which topics people were contradicting themselves about – maybe a particular event or difficult subject – and the query views can show whether particular people were contradicting themselves more than others.

 

 

Ambiguities
In qualitative interview data, where people are talking in an informal way about their stories and lives, they often say things whose meaning isn’t clear – especially to an external party. By collating ambiguous statements, the researcher can go back at the end of the source and see if each meaning is any clearer, or simply flag quotes that might be useful but are at risk of being misinterpreted by the coder.

 

 

Not-sures
Slightly different from ambiguities: these are occasions when the meaning is clear enough, but the coder is not 100% sure that it belongs in a particular category. This often happens during a grounded theory process where one category might be too vague and needs to be split into multiple codes, or when a code could be about two different things.


Having a not-sure category can really help the speed of the coding process. Rather than worrying about how to define a section of text, and then having sleepless nights about the accuracy of your coding, tag it as ‘Not sure’ and come back to it at the end. You might have a better idea where they all belong after you have coded some more sources, and you’ll have a record of which topics are unclear. If you are not sure about a large number of quotes assigned to the ‘feelings’ Quirk (again, shown by clustering in the overlap view in Quirkos), you might want to consider breaking them out into an ‘emotions’ and ‘opinions’ category later!

 

 

Challenges
I know how tempting it can be to go through qualitative analysis as if it were a tick-box exercise, trying to find quotes that back up the research hypothesis. We’ve talked about reflexivity before in this blog, but it is easy to go through large amounts of data and pick out the bits that fit what you believe or are looking for. I think a good defence against this tendency is to specifically look for quotes that challenge you, your assumptions or the research questions. Having a Quirk or node that logs all of these challenges lets you make sure you are catching them (and not glossing over them), and provides a way to do a validity assessment at the end of coding: Do these quotes suggest your hypothesis is wrong? Can you find a reason that these quotes or individuals don’t fit your theory? Usually these are the most revealing parts of qualitative research.

 


Absences
Actually, I don’t know a neat way to capture the essence of something that isn’t in the data, but I think it’s an important consideration in the analysis process. With sensitive topics, it is sometimes clear to the researcher that an important issue is being actively avoided, especially if an answer seems to evade the question. These can at least be coded as absences against the interviewer’s question. However, if people are not discussing something that was expected as part of the research question, or that was an issue for some people but not others, it is important to record and acknowledge this. Absence of relevant themes is usually best recorded in memos for that source, rather than by trying to code non-existent text!

 

 

These are just a few suggestions, if you have any other tips you’d like to share, do send them to daniel@quirkos.com or start a discussion in the forum. As always, good luck with your coding!

 

Free materials for qualitative workshops

qualitative workshop on laptops with quirkos

 

We are running more and more workshops helping people learn qualitative analysis and Quirkos. I always feel that the best way to learn is by doing, and the best way to remember is through play. To this end, we have created two sources of qualitative data that anyone can download and use (with any package) to learn how to use software for qualitative data analysis.

 

These can be found in the workshops folder. There are two different example data sets, which are free for any training use. The first is a basic example project, comprising a set of fictional interviews with people talking about what they generally have for breakfast. This is not exactly a gripping exposé of a critical social issue, but it is short and easy to engage with, and already provides some surprises when it comes to exploring the data. The materials provided include individual transcribed sources of text, in a variety of formats that can be brought into Quirkos. The idea is that users can learn how to bring sources into Quirkos, create a basic coding framework, and get going on coding data.


For the impatient, there is also a 'here's one we created earlier' file, in which all the sources have been added to the project, with age, gender and occupation described as source properties, a completed coding framework, and a good amount of coding. This is a good starting point if someone wants to use the various tools to explore coded data and generate outputs. There is also a sample report demonstrating what a default output looks like when generated by Quirkos, including the 'data' folder, which contains all the pictures for embedding in a report or PowerPoint presentation.

 

This is the example project we most frequently use in workshops. It allows us to quickly cover all the major steps in qualitative analysis with software, with a fun and easy to understand dataset. It also lets us see some connections in the data, for example how people don't describe coffee as a healthy option, and that women for some reason talk about toast much more than men.

 

However, the breakfast example is not real qualitative data – it is short and fictitious – so for people who come along to our more advanced analysis workshops, we are happy to now make available a much more detailed and lively dataset. We have recently completed a project on the impact of the 2014 referendum for independence on voter opinions in Scotland. This comprises 12 semi-structured interviews with voters based in Edinburgh, on their views of the referendum process and how it has changed their outlook on politics and voting in the run-up to the 2015 General Election in the UK.

 

When we conducted these interviews, we explicitly got consent for them to be made publicly available and used for workshops after they had been transcribed and anonymised. This gives us a much deeper source of data to analyse in workshops, but also allows for anyone to download a rich set of data to use in their own time (again with any qualitative software package) to practice their analytical skills in qualitative research. You can download these interviews and further materials at this link.

 

We hope you will find these resources useful. Please acknowledge their origin (i.e. Quirkos), let us know if you use them in your training and learning, and send us any feedback or suggestions.

Upgrade from paper with Quirkos

qualitative analysis with paper

Having been round many market research firms in the last few months, the most striking thing is the piles of paper – or, at least in the neater offices, shelves of paper!

When we talk to small market research firms about their analysis process, many are doing most of their research by printing out data and transcripts and coding them with coloured highlighters. Some are adamant that this is the way that works best for them, but others are a little embarrassed at how much time and paper they are still using with physical methods.

 

The challenge is clear – the short turnaround time demanded by clients doesn't leave much room for experimenting with new ways of working, and the few we talked to who had tried qualitative analysis software soon felt it wasn't something they could pick up quickly.

 

So most of the small market research agencies with fewer than five associates (as many as 75% of firms in the UK) are still relying on workflows that are difficult to share, don't allow searching across work, and don't have an undo button! Not to mention the ecological impact of all that printing, and the risk to deadlines from an ill-placed mug of coffee.

 

That's one of the reasons we created Quirkos, and why we are launching our new campaign this week at the Market Research Society annual conference in London. Just go to our new website, www.upgradefrompaper.com and watch our fun, one minute video about drowning in paper, and how Quirkos can help.

Quirkos isn't like other software, it is designed to mimic the physical action of highlighting and coding text on paper with an intuitive interface that you can use to get coding right away. In fact, we bet you can get coding a project before your printer has got the first source out of the tray.

 

You no longer need days of training to use qualitative analysis software, and Quirkos has all the advantages you'd expect, such as quick searches, full undo-redo capability and lots of flexibility to rearrange your data and framework. But it also has other pleasant surprises: there's no save button, because work is automatically saved after each action. And it creates graphical reports you can share with colleagues or clients.

 

Finally, you can export your work at any stage to Word, and print it out (if you so wish!) with all your coding and annotations as familiar coloured highlights – ideal to share, or just to help ease the transition to digital. It's always comforting to know you can go back to old habits at any time, and not lose the work you've already done!

 

It's obviously not just for market research firms: students, academics and charities who have never tried qualitative software, or who found the other options too confusing or expensive, can also reduce their carbon footprint and save on their department's printing costs!

 

So take the leap, and try it out for a month, completely free, on us. Upgrade from paper to Quirkos, and get a clear picture of your research!

 

www.upgradefrompaper.com


p.s. All the drawings in our video were done by our very own Kristin Schroeder! Not bad, eh?

The dangers of data mining for text

Image: Alexandre Dulaunoy, CC (flickr.com/photos/adulau/12528646393)

There is an interesting new article out, which looks at some of the commonly used algorithms in data mining and finds that they are often neither accurate nor reproducible.

 

Specifically, the study by Lancichinetti et al. (2015) looks at automated topic classification using latent Dirichlet allocation (LDA), a widely used machine-learning algorithm that takes a probabilistic approach to categorising and filtering large collections of text – a staple of data mining.
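To make the reproducibility problem concrete, here is a minimal sketch of LDA topic classification with scikit-learn. It is not taken from the study itself – the toy documents and parameters are invented for illustration. Because LDA is fitted by a stochastic procedure, runs initialised with different random seeds can assign the same documents to different topics:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "voters discussed the referendum and independence at length",
    "toast and coffee are common breakfast choices",
    "the referendum changed many opinions on independence",
    "a breakfast of coffee and toast every morning",
]

# Bag-of-words counts, dropping common English stop words
counts = CountVectorizer(stop_words="english").fit_transform(docs)

def dominant_topics(seed):
    # LDA fitting is stochastic, so the random seed matters
    lda = LatentDirichletAllocation(n_components=2, random_state=seed)
    return lda.fit_transform(counts).argmax(axis=1)

run_a = dominant_topics(seed=0)
run_b = dominant_topics(seed=1)
print(list(run_a), list(run_b))  # topic labels may differ between runs
```

On a toy corpus like this the two runs may happen to agree, but there is no guarantee – and it is exactly this run-to-run instability, at scale, that the study measured.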

 

But the Lancichinetti et al. (2015) article finds that, even using a well-structured source of data such as Wikipedia, the results are, to put it mildly, disappointing. Around 20% of the time the results did not come back the same, and when looking at a more complex group of scientific articles, reliability was as low as 55%.

 

As the authors point out, there has been little attempt to test the accuracy and validity of these data mining approaches, and they warn users against relying on inferences drawn with them. They then go on to describe a method that produces much better reliability; yet until now, most analysis would have carried this unknown level of inaccuracy: even if a test had been re-run with the same data, there is a good chance the results would have been different!

 

This underlines one of the perils of statistical attempts to mine large amounts of text data automatically: it's too easy to do without really knowing what you are doing. There is still no reliable alternative to having a trained researcher and their brain (or even an average person off the street) read through text and tell you what it is about. The forums I engage with are full of people asking how they can do qualitative analysis automatically, and whether there is software that will do all their transcription for them – but the realistic answer is that nothing like this currently exists.

 

Data mining can be a powerful tool, but it is essentially all based on statistical probabilities, churned out by a computer that doesn't know what it is supposed to be looking at. Data mining is usually a process akin to giving your text to a large number of fairly dumb monkeys on typewriters. Sure, they'll get through the data quickly, but odds are most of it won't be much use! Like monkeys, computers don't have that much intuition, and can't guess what you might be interested in, or what parts are more emotionally important than others.

 

The closest we have come so far is probably a system like IBM's Watson, a natural language processing machine that runs on a supercomputer with 2,880 CPU cores and 16 terabytes (16,384 GB) of RAM – and it is essentially doing the same thing: a really, really large number of dumb monkeys, with a process that picks the best-looking stats from a lot of numbers. If loads of really smart researchers programme it for months, it can win a TV show like Jeopardy!. But if you wanted to win Family Feud, you'd have to programme it all over again.

 

Now, a statistical overview can be a good place to start, but researchers need to understand what is going on, look at the results intelligently, and work out which parts of the output don't make sense. And to do this well, you still need to be familiar with some of the source material, and have a good grip on the topics, themes and likely outcomes. Since a human can't read and remember thousands of documents, I still think that in most cases, in-depth reading of a few dozen good sources probably gives better outcomes than statistically scan-reading thousands.

 

Algorithms will improve, as outlined above, and as computers get more powerful and data more plentiful, statistical inferences will get better. But until then, most users are better off treating the computer as a tool to aid their thought process, not as a source of a single statistical answer to a complicated question.

 

Is qualitative data analysis fracturing?

Having been to several international conferences on qualitative research recently, there has been a lot of discussion about the future of qualitative research, and the changes happening in the discipline and society as a whole. A lot of people have been saying that acceptance for qualitative research is growing in general: not only are there a large number of well-established specialist journals, but mainstream publications are accepting more papers based on qualitative approaches.


At the same time, there are more students in the UK at all levels, but especially starting Masters and PhD studies as I’ve noted before. While some of these students will focus solely on qualitative methods, many more will adopt mixed methods approaches, and want to integrate a smaller amount of qualitative data. Thus there is a strong need, especially at the Masters by research level, for software that’s quicker to learn, and can be well integrated into the rest of a project.


There is also the increasing necessity for academic researchers to demonstrate impact for their research, especially as part of the REF. There are challenges involved with doing this with qualitative research, especially summarising large bodies of data, and making them accessible for the general public or for targeted end users such as policy makers or clinicians. Quirkos has been designed to create graphical outputs for these situations, as well as interactive reports that end-users can explore in their own time.


But another common theme to emerge is the possibility of the qualitative field fracturing as it grows. It seems that at least three distinct user groups are emerging. Firstly, there are the traditional users of in-depth qualitative research, the general focus of CAQDAS software. They are experts in the field, experienced with a particular software package, and run projects collecting data with a variety of methods, such as ethnography, interviews, focus groups and document review.


Recently there has been increased interest in text analytics: the application of ‘big data’ techniques to quantify qualitative sources. This is especially popular for social media, looking at millions of Tweets, texts, Facebook posts or blogs on a particular topic. While commonly used in market research, there are also applications in social and political analysis, for example looking at thousands of newspaper articles for the portrayal of social trends. This ‘big data’ quantitative approach has never been the focus of Quirkos, although there are many tools out there that work in this way.

Finally, there is increasing interest in qualitative analysis from more mainstream users: people who want to do small qualitative research projects within their own organisation or business. Increasingly, people working in public sector organisations, HR or legal have text documents they need to manage and gain a deep understanding of.

Increasingly, it seems that a one-size-fits-all solution to training and software for qualitative data analysis is not going to be viable. It may even be the case that different factions of approaches and outcomes will emerge. In some ways this may not be too dissimilar to the different methodologies already used within academic research (i.e. grounded / emergent / framework analysis), but the number of ‘researchers’ and the variety of paradigms and fields of inquiry looks to be increasing rapidly.


These are definitely interesting times to be working in qualitative research and qualitative data analysis. My only hope is that if such ‘splintering’ does occur, we keep learning from each other, and we keep challenging ourselves by exposure to alternative ways of working.

 

 

Getting a foot in the door with qualitative research

A quick look at the British Library thesis catalogue suggests that around 800 theses are completed every year in the UK using qualitative methods*. That is roughly 8% of the approximately 10,000 British PhDs completed annually. There are likely to be many more Masters theses, and even undergraduate dissertations, that use some qualitative methods, so student demand for qualitative training is considerable.

 

While PhD Research Training Programmes will usually include good coverage of different qualitative methods and ethical issues, using software for qualitative analysis is often not covered. In my experience it is left to summer schools, annual one-off internal training sessions, or (most often) external one- or two-day courses at organisations like the University of Surrey CAQDAS programme. Most PhD students (especially in the UK) are under considerable time and financial pressure, so accessing this training is often difficult. Again, it's sometimes difficult to get a foot in the door with qualitative analysis software.

 

Yet there are some good opportunities for qualitative researchers, even outside academia. Obviously market research is a huge employer, and can provide very varied work, changing with every project. Increasingly, the public sector, at both local and national levels, is hiring researchers with qualitative experience, especially in organisations like the NHS, where patient satisfaction is becoming an increasingly important metric.

 

Quirkos has been designed with my own experiences in mind, to provide an easy way to get started with qualitative analysis. In fact, I've jokingly referred to it before as a 'gateway' product, easy to start, and hopefully leading to a good experience and a desire to progress to advanced ways of working! We are also going to offer PhD students a discounted 'PhD Pack', which will include a student licence, on-line training, and two academic licences for their supervisors, so that the whole team can see the progress and comment on ongoing analysis.

 

Researching the numbers of students in the UK, I was stunned to find out that the number of full-time PhD students has nearly doubled, from 9,990 in 1997 to 18,075 in 2010 – the last year for which statistics are available. Clearly the number of academic positions has not increased at the same rate (although it has grown over that period), so the number of available academic jobs has not kept up with supply. Of course, a PhD can lead to many other opportunities, but it is clear that there is great competition for post-doctoral posts. This has been noted by many other commentators, and also in my own experience.

Many of my post-doc friends and colleagues are ridiculously intelligent and capable people, but are still in jobs that chronically undervalue their abilities. Between ourselves, we often joke that academic jobs have become a game of 'dead man's boots': waiting for a senior academic to retire, starting a chain of departmental promotions that creates a new junior position. These posts are also only available after several temporary post-doc positions: it is a long process to get your foot in the door, and you often find yourself competing with good friends.

 

It seems to me that many university departments are now scaling back the number of PhD and Masters students they accept, acknowledging the pressure that large student numbers put on supervisors, despite the large amounts of income they bring to the department (especially Masters programmes). However, if widespread, this change is not yet visible in the latest HEFCE data, which dates back to 2010–11 and shows higher numbers of starters and an increase in (projected) completion rates. Yet there is a huge and growing pool of very bright critical thinkers on the market, and even if academic opportunities are limited, there are a good number of other doors to get a foothold in.

 

* To get these figures, I have only used search terms qualitative AND either interview or “focus group” across titles and abstracts, to make sure that no other uses of the phrase were included: for example genetic qualitative research. Other methods such as ethnography and diaries added only a dozen or so results each. Frustratingly, the EThOS search doesn't let you specify a date range, but including a year (2012) as a search term mostly returns submissions from that year. It's also interesting to note that the number of PhDs mentioning qualitative methods has doubled since 2007, although it's difficult to tell how much of this is due to any increased popularity of qualitative research, or the increase in total submissions noted above, and the increase in digital submissions to the BL system.

Paper vs. computer assisted qualitative analysis

I recently read a great paper by Rettie et al. (2008) which, although based on a small sample size, found that only 9% of UK market research organisations doing qualitative research were using software to help with qualitative analysis.

 

At first this sounds very low, but it holds true with my own limited experiences with market research firms, and also with academic researchers. The first formal training courses I attended on Qualitative Analysis were conducted by the excellent Health Experiences Research Group at Oxford University, a team I would actually work with later in my career. As an aforementioned computer geek, it was surprising for me to hear Professor Sue Ziebland convincingly argue for a method they defined as the One Sheet of Paper technique, immortalised as OSOP. This is essentially a way to develop a grounded theory or analytical approach by reducing the key themes to a diagram that can be physically summarised on a single piece of paper, a method that is still widely cited to this day.

 

However, the day also contained a series of what felt like ‘confessions’ about how much of people’s Qualitative analysis was paper based: printing out whole transcripts of interviews, highlighting sections, physically cutting and gluing text into flipcharts, and dozens and dozens of multi-coloured Post-it notes! Personally, I think this is a fine method of analysis, as it keeps researchers close to the data and, assuming you have a large enough workspace, it lets you keep dozens of interviews and themes to hand. It’s also very good for team work, as the physicality gets everyone involved in reviewing codes and extracts.

 

In the last project I worked on, looking at evidence use for health decision making, we did most of the analysis in Excel, which was actually easier for the whole team to work with than any of the dedicated qualitative analysis software packages. However, we still relied heavily on paper: printing out the interviews and Excel spreadsheets, and using flip-chart paper, Post-its and marker pens in group analysis sessions. Believe me, I felt a pang of guilt for all the paper we used in each of these sessions – rainforests be damned! But it kept us inspired, engaged and close to the data, and let us work together.

 

So I can quite understand why so many academics and market research organisations choose not to use software packages: at the moment they don’t have the visual connection to the data that paper annotations allow, it’s often difficult to see the different stages of the coding process, and it’s hard to produce reports and outputs that communicate properly.

 

The problem with this approach is the literal paper trail – how you turn all these iterations of coding schemes and analysis sessions into something you can write up and share with others, to justify how you made the decisions that led to your conclusions. I had to file all these flip-charts and annotated sheets, often taking photos of them so they could be shared with colleagues at other universities. It was a slow and time-consuming process, but it kept us close to the data.

 

When designing Quirkos, I have tried in some ways to replicate the paper-based analysis process. There’s a touch interface, reports that show all the highlighting in a Word document, and views that keep you close to the data. But I also want to combine this with all the advantages you get from a software package, not least the ability to search, shuffle dozens of documents, have more colours than a whole rainbow of Post-it notes, and the invaluable Undo button!

 

Software can also help keep track of many more topics and sources than most people (especially myself) can remember, and if there are a lot of different themes you want to explore from the data, software is really good at keeping them all in one place and making them easy to find. Working as part of a team, especially if some researchers work remotely or in a different organisation can be much easier with software. E-mailing a file is much easier than sending a huge folder of annotated paper, and combining and comparing analysis can be done at any stage of the project.

 

Qualitative analysis software also lets you take different slices through the data, so you can compare responses grouped by any characteristics of the sources you have. So it's easy to look at all the comments from people in one location, or within a certain age range. Certainly this is possible with qualitative data on paper as well, but software removes the need for a lot of paper shuffling, especially when you have a large number of respondents.
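As a rough illustration of what 'slicing' means in practice, here is a hypothetical sketch in Python with pandas (the column names and data are invented, not from any particular package): each coded extract carries the properties of its source, so a subset can be pulled out with a simple filter rather than any paper shuffling.

```python
import pandas as pd

# Hypothetical coded extracts: each row is one coded passage, carrying
# the source-level properties (location, age) of its respondent
extracts = pd.DataFrame({
    "respondent": ["R1", "R2", "R3", "R4"],
    "location":   ["Edinburgh", "Glasgow", "Edinburgh", "Glasgow"],
    "age":        [24, 57, 63, 31],
    "code":       ["trust", "turnout", "trust", "media"],
    "text":       ["...", "...", "...", "..."],
})

# One 'slice': all 'trust' extracts from Edinburgh respondents aged 60 or under
subset = extracts[
    (extracts["code"] == "trust")
    & (extracts["location"] == "Edinburgh")
    & (extracts["age"] <= 60)
]
print(subset["respondent"].tolist())  # ['R1']
```

The same filter-and-retrieve pattern works for any combination of source properties, which is exactly what is tedious to do by shuffling printouts.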

 

But most importantly, I think software can allow more experimentation - you can try different themes, easily combine or break them apart, or even start from scratch again, knowing that the old analysis approach you tried is just a few clicks away. I think that the magic undo button also gives researchers more confidence in trying something out, and makes it easier for people to change their mind.

 

Many people I’ve spoken to have asked what the ‘competition’ for Quirkos is like, meaning, what do the other software packages do. But for me the real competitor is the tangible approach and the challenge is to try and have something that is the best of both worlds: a tool that not only apes the paper realm in a virtual space, but acknowledges the need to print out and connect with physical workflows. I often want to review a coded project on paper, printing off and reading in the cafe, and Quirkos makes sure that all your coding can be visually displayed and shared in this way.

 

Everyone has a workflow for qualitative analysis that works for them, their team, and the needs of their project. I think the key is flexibility, and to think about a set of tools that can include paper and software solutions, rather than one approach that is a jack of all trades, and master of none.

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 are now up online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks were from experienced software trainers, about how people actually were using packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways the low numbers of people using images and audio are not surprising, but it is a shame. Text is a lot quicker to skim through to find passages compared to audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts or through automatic transcription, to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage: they record a research event, such as an interview or focus group, and have the text typed up word-for-word to analyse. But with this approach you risk losing all the nuance we are attuned to hear in the spoken word – emphasis, emotion, sarcasm – which can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code with text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen to any interesting or ambiguous sections and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making the text coding process in Quirkos as good as it could be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, to make sure audio connects closely with the text but can be coded and retrieved as easily as possible.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised it as the way they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory is essentially finding themes in the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of people in this sample really not know what analytical approach they were using? Of course it could be simply that they know it by another name, Emergent Coding for example, or as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs higher than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...

 

Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or 'a priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding, the researcher takes a more bottom-up approach, starting with the data and a blank sheet, noting themes as they read through the text.

 

Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an emergent approach to be completely naïve about the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.

 

I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is also another level in qualitative coding which is top-down or bottom-up: the level of coding. A low 'level' of coding might be to create a set of simple themes, such as happy or sad, or apple, banana and orange. These are sometimes called manifest level codes, and are purely descriptive. A higher level of coding might be something more like 'issues from childhood', fruit, or even 'things that can be juggled'. Here more meaning has been imposed, sometimes referred to as latent level analysis.

 

 

Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different if using a top-down or bottom-up approach to building levels of coding. In one model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute that statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning from successive readings.
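Sticking with the fruit-and-emotions example from above, the bottom-up direction can be sketched as a simple grouping of codes. This is a toy illustration only – the extract labels are hypothetical, not drawn from any real project:

```python
# Low-level, manifest codes gathered on a first pass through the text
# (hypothetical extracts, following the fruit/emotion example above)
low_level = {
    "apple":  ["extract 3", "extract 7"],
    "banana": ["extract 2"],
    "orange": ["extract 5", "extract 9"],
    "happy":  ["extract 1"],
    "sad":    ["extract 4"],
}

# On later readings, the simple codes are grouped under higher-level,
# latent themes
themes = {
    "fruit":   ["apple", "banana", "orange"],
    "emotion": ["happy", "sad"],
}

def extracts_for(theme):
    """Retrieve every extract coded anywhere under a higher-level theme."""
    return [e for code in themes[theme] for e in low_level[code]]

print(extracts_for("fruit"))
```

The point of the structure is that the low-level coding is never thrown away: regrouping the themes on a later iteration just means editing the `themes` mapping, while every original extract stays retrievable.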

 

So which approach is best? Obviously, it depends – not just on how well the topic area is understood, but also on the engagement of the particular researcher. Yet complementary methods can be useful here: the PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (in both senses) to test their assumptions. Meanwhile, a researcher who is new to the project or field could be in a good position to start from the bottom up, and see if they can find answers to the research questions from basic observations as they emerge from the text. If the two approaches independently reach the same conclusions, it is a good indication that the inferences are well supported by the text!

 


 

 

Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, attacking the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people: the researchers get to decide what questions to ask, how to interpret and present them, and even what topics are worthy of study in the first place. In participatory research, the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this makes sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: choosing which studies are the most important, deciding the focus, choosing the methods and collecting and contributing to the data.

 

But then what? There seems to be a step missing: what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often, this process is a 'black box' conducted by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did researchers come to their particular findings and conclusions from all the different issues the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Joe.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like Nvivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to try and use it, with all the training and licensing implications of this, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics, analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like, and the results can be combined, with the team knowing who did what (if desired).

 

It can also become a dynamic group exercise, where with a tablet, large touch surface or projector, everyone can be 'hands on' at once. Rather than doing analysis on flip-charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to share the findings, rather than long spreadsheets of results, you get a picture – the bubbles tell the story and the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.