Workshop exercises for participatory qualitative analysis


I am really interested in engaging research participants in the research process. While there is an increasing expectation that ‘lay’ researchers will set research questions, sit on review boards and even ask questions in qualitative studies, it can be more difficult to engage them in the analysis of the research data, and this is much rarer in the literature (see Nind 2011).


However, Quirkos was specifically designed to make qualitative text analysis engaging and easy enough that participants could learn to do it with just a little training. In a blog post from last year I described how a small group was able to learn the software and code qualitative data in a two-hour session. I am expanding on this work for a talk in the Digital Tools thread of the 2016 ICQI conference in Champaign, Illinois this month, so wanted to revisit it a little.


When I had attempted to engage participants in qualitative analysis before, it had been fairly limited in scope. The easiest way was essentially to present an already complete draft ‘report’ or overview of the analysis and findings, and use that for comment and discussion. While this allows respondents some ability to influence the final outputs and conclusions, the analysis process is still entirely led by the researcher, and they don’t really get a chance to change how data is interpreted. The power dynamics between researcher and participant are not significantly altered.


My feeling is that respondents are often the experts in what is being studied (i.e. their own experiences), and I worry that if presented with all the data, they might rightly conclude “You’ve interpreted this as being about x when it is really about y”.


Yet there are obvious problems that occur when you want respondents to engage with this level of detail. First of all, there is the matter of time: qualitative analysis is extremely time consuming, and in most projects asking someone to analyse all the data is asking for days or even months of commitment. This is not feasible for most respondents, especially if they are asked to do the work voluntarily, in parallel with the researcher’s full-time, paid job!


Most approaches in the literature choose to engage a small number of participants in the analysis of some of the data. For example, Jackson (2008) successfully uses group exercises with people from different educational backgrounds. The DEPICT model breaks down the work so that the whole dataset is covered, but each team member only has a few transcripts to code (Flicker and Nixon 2015).


However, when it came to running participatory analysis workshops for our Scottish Referendum Project, we had an additional secret weapon: Quirkos! One of the main design briefs for Quirkos was to ensure that it was simple enough to learn that it could be used by research participants with little or no formal training in research or qualitative analysis. The workshops we ran with research-naïve respondents showed that a software package could indeed be used in this way.


I was initially really worried about how the process would work practically, and how to create a small realistic task that would be a meaningful part of the analysis process. Before I started, I considered a series of tasks and scenarios that could be used in a participatory analysis workshop to get respondent input into the analysis process. I’ve included some brief details of these below, just in case they are helpful to anyone else considering participatory analysis.

 

 

Blank Sheet


The most basic, and scariest, scenario: the coding team is provided with just the raw transcript(s), with no existing topic framework or coded data. They start from scratch, creating their own coding framework and coding data. This is probably the most time-consuming and conceptually challenging approach, but the most neutral in terms of influence from the researchers. Participants are not provided with any preconceptions of what they should be exploring in the data (although they could be provided with the research questions), and are free to make their own interpretations.

 

 

Framework Creation


Here, I envisage a series of possible exercises where the focus is not on explicitly coding the data, but on considering the coding framework and possible topics of interest. Participants choose topics of significance to them, or that they feel are appearing in the data. The process resembles grounded theory: participants are given one (or several) transcripts to read, and asked what topics are significant. This works well on large sheets of paper with Post-it notes, but by creating the coding framework directly in the software, participants and researchers can easily use the developed framework for coding later. It could exist in several variants:


Emergent Coding
As above: creating a coding framework (probably from scratch, or with some example topics already provided by the researcher)
 

Grouping Exercise
A simpler task would be to present a pre-prepared list of many possible topics of interest created by the researcher, and ask participants to group them either thematically, or by order of importance. This gives respondents an easier start on the coding framework, allowing them to familiarise themselves with the process and topics. It is more restrictive, and plants directions of interest for the participants, but they would remain able to challenge, add to, or exclude topics for examination.
 

Category Prompts
Here the researcher has created a few very broad categories (for example Health, Housing, Family) and the participants are encouraged to populate the framework with more specific sub-categories. This approach is a good middle ground, where the researcher can set some broad areas of interest, but participants have a say in which directions topics should be explored in detail (say ‘Expensive food’ or ‘Lack of open space’).

After one or more of these exercises, the participants could go on to use the coding framework to code the data themselves, or the researcher can use the contributed topic guide to focus their own coding.

 

 

Coding exercises


In these three exercises, I envisage a scenario where some coding has already been completed, and the focus of the session is to look at coded transcripts (on screen or on printout) and discuss how the data has been interpreted. This could take the form of:


Researcher Challenge: Where the researcher asks the participants to justify or explain how they have coded the data
Participant Challenge: Participants examine data coded by researchers, question their rationale and suggest changes
Group Challenge: Participants and researchers code the same transcript separately, and get together to compare, contrast and discuss their results.


With all these approaches, one can apply several overall philosophies:
Individual: Where each respondent or researcher works on their own, adding separately to the total coding of the project
Collaborative: Analysis is done as part of a team, working together
Comparative: Where analysts work separately, but come together to discuss and contrast their work, creating a final dialogue from the input of the whole group.

 

Finally, the team should consider whether the aim of the project is to actually create a direct analysis outcome from these sessions, or if they are exercises which are themselves part of the qualitative data generated from the project. For our sessions, we also recorded, transcribed and analysed the discussion which took place around the coding, which itself also contributed nuanced and valuable insight into the thought processes of the participants. Of course, this leaves the problem of creating an infinite Ouroboros loop of data generation, if respondents were then invited to analyse the transcripts of their own analysis sessions!

 

Which approach to use, and how far to take the participatory process, will obviously depend on the research project and the desires of the researcher. However, my main aim here is just to get people thinking about the possibilities, and about whether engaging participants in the research process in some way will challenge the assumptions of the research team, or lead to better results and more relevant and impactful outputs.

 

Here are the slides of my ICQI 2016 talk, and the complete data (raw and coded) and summary report on the Scottish Referendum Project is here. I would welcome more discussion on this, in the forum, by e-mail (daniel@quirkos.com) or in the literature!

 

Don't forget, the new version of Quirkos is now available, for researchers and participants alike to bring their qualitative analysis to life. Download a free trial today!

 

 

Quirkos version 1.4 is here!


It’s been a long time coming, but the latest version of Quirkos is now available, and as always it’s a free update for everyone, released simultaneously on Mac, Windows and Linux with all the new goodies!


The focus of this update has been speed. You won’t see a lot of visible differences in the software, but behind the scenes we have rewritten a lot of Quirkos to make sure it copes better with large qualitative sources and projects, and is much more responsive to use. This has been a much requested improvement, and thanks to all our intrepid beta testers for ensuring it all works smoothly.


In the new version, long coded sources now load in around 1/10th of the time! Search results and hierarchy views load much quicker! Large canvas views display quicker! All this adds up to a much snappier and more responsive experience, especially when working with large projects. This takes Quirkos to a new professional level, while retaining the engaging and addictive data coding interface.


In addition we have made a few small improvements suggested by users, including:


• Search criteria can be refined or expanded with AND/OR operators
• Reports now include a summary section of your Quirks/codes
• Ability to search source names to quickly find sources
• Searches now display the total number of results
• Direct link to the full manual

 

There are also many bug fixes! Including:
• Password protected files can now be opened across Windows, Mac and Linux
• Fix for importing PDFs which created broken Word exports
• Better and faster CSV import
• Faster Quirk merge operations
• Faster keyword search in password protected files

 

However, we have had to change the .qrk file format so that password protected files can open on any operating system. This means that projects opened or created in version 1.4 cannot be opened in older versions of Quirkos (v1.3.2 and earlier).


I know how annoying this is, but there should be no reason for people to keep using older versions: we make the updates free so that everyone is using the same version. Just make sure everyone in your team updates!

 

When you first open a project file from an older version of Quirkos in 1.4, it will automatically convert it to the new file format, and save a backup copy of the old file. Most users will not notice any difference, and you can obviously keep working with your existing project files. But if you want to share your files with other Quirkos users, make sure they also have upgraded to the latest version, or they will get an error message trying to open a file from version 1.4.

 

All you need to do to get the new version is download it from our website (www.quirkos.com/get.html) and install to the same location as the old Quirkos. Get going, and let us know if you have any suggestions or feedback! You could see your requests appear in version 1.5!

 

Top 10 qualitative research blog posts


We've now got more than 70 posts on the official Quirkos blog, on lots of different aspects of qualitative research and using Quirkos in different fields. But it's now getting a bit difficult to navigate, so I wanted to do a quick recap with the 10 most popular articles, based on the number of hits over the last two years.

 

Tools for critical appraisal of qualitative research

A review of tools that can be used to assess the quality of qualitative research.

 

Transcription for qualitative research

The first in a series of posts about transcribing qualitative research, breaking open the process and costs.

 

10 tips for recording good qualitative audio

Some tips for recording interviews and focus-groups for good quality transcription

 

10 tips for semi-structured qualitative interviewing

Some advice to help researchers conduct good interviews, and what to plan for in advance

 

Sampling issues in qualitative research

Issues to consider when sampling, and later recruiting participants in qualitative studies

 

Developing an interview guide for semi-structured interviews

The importance of having a guide to facilitate in-depth qualitative interviews

 

Transcribing your own qualitative data

Last on the transcription trifecta, tips for making transcription a bit easier if you have to do it yourself

 

Participant diaries for qualitative research

Some different approaches to self-report and experience sampling in qualitative research

 

Recruitment for qualitative research

Factors to consider when trying to get participants for qualitative research

 

Engaging qualitative research with a quantitative audience

The importance of packaging and presenting qualitative research in ways that can be understood by quantitative-focused policy makers and journal editors

 

There are a lot more themes to explore on the blog, including posts on how to use CAQDAS software and on doing your qualitative analysis in Quirkos, the most colourful and intuitive way to explore your qualitative research.

 

 

Participant diaries for qualitative research


 

I’ve written a little about this before, but I really love participant diaries!


In qualitative research, you are often trying to understand the lives, experiences and motivations of other people. Through methods like interviews and focus groups, you can get a one-off insight into people’s own descriptions of themselves. If you want to measure change over a period, you need to schedule a series of meetings, each of which will be limited by what a participant will recall and share.


However, using diary methodologies, you can get a longer and much more regular insight into lived experiences, plus you also change the researcher-participant power dynamic. Interviews and focus groups can sometimes be a bit of an interrogation, with the researcher asking questions, and participants given the role of answering. With diaries, participants can have more autonomy to share what they want, as well as where and when (Meth 2003).


These techniques are also called self-report or ‘contemporaneous assessment’ methods, and there are actually a lot of different ways you can collect diary entries. There are some great reviews of different diary-based methods (e.g. Bolger et al. 2003), but let’s look at some of the different approaches.


The most obvious is to give people a little journal or exercise book to write in, and ask them to record on a regular basis any aspects of their day that are relevant to your research topic. If they are expected to make notes on the go, make it a slim, pocket-sized one. If they are going to write a more traditional diary at the end of each day, provide a nice exercise book to work in. I’ve actually found that people end up getting quite attached to their diaries, and will often ask for them back. So make sure you have some way to copy or transcribe them, and consider offering to return them once you have investigated them, or giving back a copy if you wish to keep hold of the real thing.

 

You can also do voice diaries – something I tried in Botswana. We were initially worried that literacy levels in rural areas would mean that participants would be either unable, or reluctant, to create written entries. So I offered everyone a small voice recorder, with which they could record spoken notes that we would transcribe at the end of the session. While you could give a group of people an inexpensive (~£20) Dictaphone, I actually bought a bunch of cheap no-brand MP3 players which cost only ~£5 each, had a built-in voice recorder and headphones, and could run on a single AAA battery (which was easy to find in local shops, since few respondents had electricity for recharging). The audio quality was not great, but perfectly adequate. People really liked these because they could also play music (and had a radio), and they were cheap enough to be lost or left as thank-you gifts at the end of the research.

 

There is also a large literature on ‘experience sampling’ – where participants are prompted at regular or random intervals to record what they are doing or how they are feeling at that time. Initially this work was done using pagers (Larson 1989), when participants would be ‘beeped’ at random times during the day and asked to write down what they were doing at the time. More recent studies have used smartphones to both prompt and directly collect responses (Chen et al. 2014).
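Experience-sampling schedules like this are easy to sketch in code. Below is a minimal, illustrative Python example that draws random prompt times within waking hours, with a minimum gap between ‘beeps’; the function name, the 9am–9pm window and the 60-minute gap are my own assumptions for illustration, not taken from any of the studies cited:

```python
import random
from datetime import datetime, timedelta

def sample_prompt_times(n_prompts=5, start_hour=9, end_hour=21,
                        min_gap=60, seed=None):
    """Draw n_prompts random times in a waking-hours window, keeping at
    least min_gap minutes between consecutive prompts. Uses simple
    rejection sampling, which is fine for a handful of prompts a day."""
    rng = random.Random(seed)
    window = (end_hour - start_hour) * 60  # window length in minutes
    while True:
        offsets = sorted(rng.sample(range(window), n_prompts))
        if all(b - a >= min_gap for a, b in zip(offsets, offsets[1:])):
            break
    day_start = datetime(2016, 6, 1, start_hour)  # any reference date
    return [day_start + timedelta(minutes=m) for m in offsets]

times = sample_prompt_times(seed=1)
for t in times:
    print(t.strftime("%H:%M"))
```

In a real study the schedule would of course drive an SMS gateway or smartphone notification rather than just printing times.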

 

There is also now a lot of online journal research, both researcher-solicited as part of a qualitative research project (Kaun 2015), and collected from people’s blogs and social media posts. This is especially popular in market research when looking at consumer behaviour (Patterson 2005), and in project evaluation (Cohen et al. 2006).

 

Diary methods can create detailed and reliable data. One study asking participants to record diary entries three times a day to measure stigmatised behaviours like sexual activity found an 89.7% adherence rate (Hensel et al. 2012), far higher than would be expected from traditional survey methods. There is a lot of diary-based research in the sexual and mental health literature: for more discussion of the discrepancies and reliability between diary and recall methods there is a good overview in Coxon (1999), but many studies, like Garry et al. (2002), found that diary-based methods generated more accurate responses. Note that these kinds of studies tend to be mixed-method, collecting both discrete quantitative data and open-ended qualitative comments.

 

Whatever method you choose, it’s important to set up some clear guidelines to follow. Personally, I think either a telephone conversation or a face-to-face meeting is a good idea, to give participants a chance to ask questions. If you’ve not done research diaries before, it’s a good idea to pilot them with one or two people to make sure you are briefing people clearly, and that they can write useful entries for you. The guidelines (explained, and taped to the inside of the diary) should make clear:

  • What you are interested in hearing about
  • What it will be used for
  • How often you expect people to write
  • How much they should write
  • How to get in touch with you
  • How long they should be writing entries, and how to return the diary.

 

Even if you expressly specify that journals should be written in every day for three weeks, you should be prepared for the fact that many participants won’t manage this. You’ll have some that start well but lapse, others that forget until the end and do it all in the last day before they see you, and everything in-between. You need to assume this will happen with some or all of your respondents, and consider how it is going to affect how you interpret the data and draw conclusions. It shouldn’t necessarily mean that the data is useless, just that you need to be aware of the limitations when analysing it. There will also be a huge variety in how much people write, despite your guidelines. Some will love the experience, sharing volumes of long entries; others might just write a few sentences, which might still be revealing.

 

For these reasons, diary-like methodologies are usually used in addition to other methods, such as semi-structured interviews (Meth 2003) or detailed surveys. Diaries can be used to triangulate claims made by respondents in different data sources (Schroder 2003), or to provide more richness and detail to the individual narrative. From the researcher’s point of view, the difference between having data where a respondent says they have been bullied, and having an account of a specific incident recorded that day, is significant, and gives a great amount of depth and illumination into the underlying issues.

 


 

However, you also need to carefully consider confidentiality and other ethical issues. Often participants will share a lot of personal information in diaries, and you must agree how you will deal with this and anonymise it for your research. While many respondents find keeping a qualitative diary a positive and reflexive process, it can be stressful to ask people in difficult situations to reflect on uncomfortable issues. There is also the risk that the diary could be lost, or read by other people mentioned in it, creating a potential disclosure risk to participants. Depending on what you are asking about, it might be wise to ask participants themselves to create anonymised entries, using pseudonyms for people and places as they write.

 

Last, but not least, what about your own diary? Many researchers will keep a diary, journal or ‘field notes’ during the research process (Altricher and Holly 2004), which can help provide context and reflexivity as well as a good way of recording thoughts on ideas and issues that arise during the data collection process. This is also a valuable source of qualitative data itself, and it’s often useful to include your journal in the analysis process – if not coded, then at least to remind you of your own reflections and experiences during the research journey.

 

So how can you analyse the text of your participant diaries? In Quirkos of course! Quirkos takes all the basics you need to do qualitative analysis, and puts it in a simple, and easy to use package. Try for yourself with a free trial, or find out more about the features and benefits.

 

Sharing qualitative research data from Quirkos


Once you’ve coded, explored and analysed your qualitative data, it’s time to share it with the world. For students, the first step will be supervisors, for researchers it might be peers or the wider research community, and for market research firms, it will be their clients. Regardless of who the end user of your research is, Quirkos offers a lot of different ways to get your hard earned coding out into the real world.

 

Share your project file
The best, and easiest way to share your coded data is to send your project file to someone. If they have a copy of Quirkos (even the trial) they will be able to explore the project in the same way you can, and you can work on it collaboratively. Files are compatible across Windows, Mac and Linux, and are small enough they can be e-mailed, put on a USB stick or Dropbox as needed.

 

Word export
One of people’s favourite features is the Word export, which creates a standard Word file of your data, with comments and coloured highlights showing your complete coding. This means that pretty much anyone can see your coding, since the file will open in Microsoft Office, LibreOffice/OpenOffice, Google Docs, Pages (on Mac) and many others. It’s also a great way to print out your project if you prefer to read through it on paper, while still being able to see all your coding. If you print the ‘Full Markup’ view, you will still be able to see the name (and author) of the code on a black and white printer!


There are two options available in the ‘Project’ button – either ‘Export All Sources as Word Document’ which creates one long file, or ‘Export Each Source’ which creates a separate file for each source in the project in a folder you specify.

 

Reports
This is the most conventional output in Quirkos: a customisable document which gives a summary of the project and an ordered list of coded text segments. It also includes graphical views of your coding framework, including the clustered views which show the connections between themes. When generated in Quirkos, you will get a two-column preview, with a view of how the report will look on the left, and all the options for what you want to include in the report on the right.


You can print this directly, save it as a PDF document, or even save as a webpage. This last option creates a report folder that anyone can open, explore and customise in their browser, in the same way as you are able to in the Quirkos report view. This also creates a folder which contains all the images in the report (such as the canvas and overlap views) that you can then include directly in presentations or articles.



There are many options available here, including the ability to list all quotes by source (ie everything one person said) or by theme (ie everything everyone said on one topic). You can change how these quotes are formatted (by making the text or highlight into the colour of the Quirk) and the level of detail, such as whether to include the source name, properties and percentage of coding.

 

Sub-set reports (query view)
By default, the report button will generate output of the whole project. But if you want to just get responses from a sub-set of your data, you can generate reports containing only the results of filters from the query view. So you could generate a report that only shows the responses from Men or Women, or by one of the authors in the project.

 

CSV export
Quirkos also gives you the option to export your project as CSV files – a common spreadsheet format which you can open in Excel, SPSS or equivalents. This allows you to do more quantitative analysis in statistical software, generate graphs of your coding, and conduct more detailed sub-analysis. The CSV export creates a series of files which represent the different tables in the project database, with v_highlight.csv containing your coded quotes. Other files contain the questions and answers (in a structured project), a list of all your codes, levels, and source properties (also called metadata).
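As a sketch of what you can then do with the exported files, here is a minimal Python example that tallies coded quotes per code from a v_highlight.csv-style file. The column names used here ('source', 'code', 'quote') are illustrative assumptions, not the actual Quirkos export schema, so the script writes its own toy file first:

```python
import csv
from collections import Counter

# Toy stand-in for an exported v_highlight.csv. The real column names
# in the Quirkos export may differ - these are illustrative only.
with open("v_highlight.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["source", "code", "quote"])
    writer.writerow(["Interview 1", "Health", "I worry about waiting times"])
    writer.writerow(["Interview 1", "Housing", "Rents keep going up"])
    writer.writerow(["Interview 2", "Health", "My GP is excellent"])

# Count how many coded quotes fall under each code - a quick
# quantitative summary of the qualitative coding.
with open("v_highlight.csv", newline="") as f:
    counts = Counter(row["code"] for row in csv.DictReader(f))

print(counts)  # e.g. Counter({'Health': 2, 'Housing': 1})
```

The same counts could then feed a bar chart or cross-tabulation in your statistics package of choice.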

 

Database editing
For true power users, there is also the option to perform full SQL operations on your project file. Since Quirkos saves all your project data as a standard SQLite database, it’s possible to open and edit it with a number of third-party tools such as SQL Browser to perform advanced operations. You can also open the file in the sqlite3 command-line shell and use standard SQL statements (SELECT … FROM … WHERE …) to explore and edit the database. Our full manual has more details on the database structure. Hopefully, this will also allow for better integration with other qualitative analysis software in the future.
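As a sketch of this kind of SQL exploration, here is a minimal Python example using the standard sqlite3 module. The 'highlight' table and its columns are hypothetical stand-ins (consult the Quirkos manual for the real .qrk schema), so the example builds an in-memory database rather than opening a real project file:

```python
import sqlite3

# A Quirkos .qrk project is a standard SQLite database, so any SQLite
# client can open it. Here we use an in-memory database with a
# hypothetical 'highlight' table - the real table and column names
# are documented in the Quirkos manual and may differ.
conn = sqlite3.connect(":memory:")  # for a real file: connect("project.qrk")
conn.execute("CREATE TABLE highlight (source TEXT, code TEXT, quote TEXT)")
conn.executemany(
    "INSERT INTO highlight VALUES (?, ?, ?)",
    [("Interview 1", "Health", "I worry about waiting times"),
     ("Interview 1", "Housing", "Rents keep going up"),
     ("Interview 2", "Health", "My GP is excellent")],
)

# Standard SQL (SELECT ... FROM ... WHERE ...) explores the coding:
rows = conn.execute(
    "SELECT source, quote FROM highlight WHERE code = ?", ("Health",)
).fetchall()
for source, quote in rows:
    print(source, "->", quote)

# Listing the tables works on any SQLite file, including a real .qrk:
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
```

Always work on a backup copy of a real project file before editing it this way.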

 

If you are interested in seeing how Quirkos can help with coding and presenting your qualitative data, you can download a one-month free trial and try for yourself. Good luck with your research!

 

Tools for critical appraisal of qualitative research


I've mentioned before how the general public are very quantitatively literate: we are used to dealing with news containing graphs, percentages, growth rates, and big numbers, and they are common enough that people rarely have trouble engaging with them.

 

In many fields of study this is also true for researchers and those who use evidence professionally. They become accustomed to p-values, common statistical tests, and plot charts. Lots of research is based on quantitative data, and there is a training in, and familiarity with, these methods and data presentation techniques which creates a lingua franca for researchers across disciplines and regions.

 

However, I've found in previous research that many evidence-based decision makers are not comfortable with qualitative research. There are many reasons for this, but I frequently hear people essentially say that they don't know how to appraise it. While they can look at a sample size, a recruitment technique and an r-squared value and get an idea of the limitations of a study, this is much harder for many practitioners to do with qualitative techniques they are less familiar with.

 

But this needn’t be the case: qualitative research is not rocket science, and there are fundamental common values which can be used to assess the quality of a piece of research. This week, a discussion on the appraisal of qualitative research was started on Twitter by the Mental Health group of the 'National Elf Service’ (@Mental_Elf) - an organisation devoted to collating and summarising health evidence for practitioners.

 

People contributed many great suggestions of guides and toolkits that anyone can use to examine and critique a qualitative study, even if the user is not familiar with qualitative methodologies. I frequently come across this barrier to promoting qualitative research in public sector organisations, so was halfway through putting together these resources when I realised they might be useful to others!

 

First of all, David Nunan (@dnunan79) based at the University of Oxford shared an appraisal tool developed at the Centre for Evidence-Based Medicine (@CebmOxford).

 

Lucy Terry (@LucyACTerry) offered specific guidelines for charities from New Philanthropy Capital, which give five key quality criteria: the research should be Valid, Reliable, Confirmable, Reflexive and Responsible.

 

There’s also an article by Kuper et al (2008) which offers guidance on assessing a study using qualitative evidence. As a starting point, they list 6 questions to ask:

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?
     

The International Centre for Allied Health Evidence at the University of South Australia has a list of critical appraisal tools, including ones specific to qualitative research. Of these, I quite like the checklist format of the one developed by the Critical Appraisal Skills Programme; I can imagine this going down well with health commissioners.

 

Another from the Occupational Therapy Evidence-Based Practice Research Group at McMaster University in Canada is more detailed, and is also available in multiple languages and an editable Word document.

 

Finally, Margaret Roller and Paul Lavrakas have a recent textbook (Applied Qualitative Research Design: A Total Quality Framework Approach, 2015) that covers many of these issues, and details the Total Quality Framework that can be used for designing, discussing and evaluating qualitative research. The book contains specific chapters detailing the application of the framework to different projects and methodologies. Margaret Roller also has an article on her excellent blog on weighing the value of qualitative research, which gives an example of the Total Quality Framework.

 

In short, there are a lot of options to choose from, but the take-away message is that the questions they ask are simple, short, and largely common sense. However, the process of assessing even just a few pieces of qualitative research in this way will quickly get evidence-based practitioners into the habit of asking these questions of most projects they come across, hopefully increasing their comfort level in dealing with qualitative studies.

 

The tools are also useful for students, even if they are familiar with qualitative methodologies, as it helps facilitate a critical reading that can give focus to paper discussion groups or literature reviews. Adopting one of the appraisal techniques here (or modifying one) would also be a great start to a systematic review or meta-analysis.

 

Finally, there are a few sources from the Evidence and Ethnicity in Commissioning project I was involved with that might be useful. If you have any suggestions, please let me know, either in the forum or by e-mailing daniel@quirkos.com, and I will add them to the list. Don't forget to find out more about using Quirkos for your qualitative analysis and download the free trial.

 

 

Finding, using and some cautions on secondary qualitative data

secondary data analysis

 

Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you end up having to collect your own data, looking for other sources already out there can help prevent redundancy and improve your conceptualisation of a project. Broadly you can think of two different types of secondary data: sources collected previously for specific research projects, and data 'scraped' from other sources, such as Hansard transcripts or Tweets.

 

Other data sources include policy documents, news articles, social media posts, other research articles, and repositories of qualitative data collected by other researchers, which might be interviews, focus groups, diaries, videos or other sources. Secondary analysis of qualitative data is not very common, since the data tends to be collected with a very narrow research interest, and at a depth that makes anonymisation and aggregation difficult. However, there is a huge amount of rich data that could be used again for other subjects: see this article by Heaton (2008) for a general overview, and Irwin (2013) for a discussion of some of the ethical issues.

 

The advantage of using qualitative data analysis software is that you can keep many disparate sources of evidence together in one place. If you have an article positing a particular theory, you can quickly cross-code sections of text from that article with supportive evidence, and show why your data does or does not support the literature. Examining social policy? Put the law or government guidelines into your project file, and cross-reference statements from participants that challenge or support these dictates in practice. The best research does not exist in isolation: it must engage with both the existing literature and real policy to make an impact.

 


Data from other sources has the advantage that the researcher doesn’t have to spend time and resources collecting and recruiting. However, the constant disadvantage is that the data was not specifically obtained to meet your particular research questions or needs. For example, data from Twitter might give valuable insights into people’s political views. But statements that people make do not always equate with their views (this is true for directly collected research methods as well), so someone may make a controversial statement just to get more followers, or be suppressing their true beliefs if they believe that expressing them will be unpopular. 

 

Each website has its own culture as well, which can affect what people share, and in how much detail. A paper by Hine (2012) shows that posts on the popular UK 'Mumsnet' forum reflect particular attitudes that are acceptable there, and often posters are looking for validation of their behaviour from others. Twitter and Facebook are no exception: they each have different styles and norms of acceptable posting that true internet ethnographers should understand well!

 

Even when using secondary data collected for a specific academic research project, the data might not be suitable for your needs. A series of qualitative interviews about political views may seem a perfect fit for your research, but might not have asked a key question (for example, about respondents' parents' beliefs), which renders the data unusable for your purpose. Additionally, it is usually impossible to identify respondents in secondary data sets to ask follow-on questions, since the data is anonymised. It's sometimes even difficult to see the original research questions and interview schedule, and so to find out what questions were asked and for what purpose.

 

But despite all this, it is usually a good idea to look for secondary sources. It might give you insights into the area of study that you hadn't considered, highlighting interesting issues that other research has picked up on. It might also reduce the amount of data you need to collect: if someone has done something similar in this area, you can design your data collection to target the relevant gaps, and build on what has been done before (theoretically, all research should do this to a certain degree).

 

I know it’s something I keep reiterating, but it’s really important to understand who your data represents: you need some kind of contextual or demographic data. This can be difficult to find when using data gathered from social media, where people are often only given the option to state very basic details, such as gender, location or age, and many may not disclose even these. It can also be a pain to extract comments from social media posts in such a way that the identity of the poster stays attached to their posts; however, there are third-party tools that can help with this.

 

When writing up your research, you will also want to make explicit how you found and collected this source of data. For example, if you are searching Twitter for a particular hashtag or phrase, when did you run the search? (If you run it the next day, or even the next minute, the results will be different.) How far back did you include posts? What languages? Are there comments that you excluded, especially ones that look like spam or promotional posts? Think about making it replicable: what information would someone need to get the same data as you?
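As a minimal sketch of this kind of record-keeping (the function and field names here are entirely hypothetical, not part of any particular tool or API), you could log the parameters of each search alongside the results you collect:

```python
from datetime import datetime, timezone

# Hypothetical sketch: keep a record of how and when a social media
# search was run, so the data collection can be reported and, as far
# as possible, replicated later. All field names are illustrative.
def record_search(query, platform, languages, since, until, excluded):
    return {
        "query": query,                      # the hashtag or phrase searched
        "platform": platform,
        "run_at": datetime.now(timezone.utc).isoformat(),
        "languages": languages,              # languages included in results
        "date_range": {"since": since, "until": until},
        "exclusions": excluded,              # e.g. spam or promotional posts
    }

log = record_search("#examplehashtag", "Twitter", ["en"],
                    "2016-01-01", "2016-06-01", ["spam", "promotional"])
print(log["query"], log["date_range"]["since"])
```

Saving a record like this with each batch of data answers most of the replicability questions above before a reviewer has to ask them.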

 

You should also try and be as comprehensive as possible. If you are examining newspaper articles for something that has changed over time (such as use of the phrase ‘tactical warfare’) don’t assume all your results will be online. While some projects have digitised newspaper archives from major titles, there are a lot of sources that are still print only, or reside in special databases. You can gain help and access to these from national libraries, such as the British Library.


There are growing repositories of open access data, including qualitative datasets. A good place to start is the UK Data Service, even if you are outside the UK, as it contains links to a number of international stores of qualitative data. Start here, but note that you will generally have to register, or even gain approval, to access some datasets. This shouldn’t put you off, but don’t expect to always be able to access the data immediately, and plan to prepare a case for why you should be granted access. In the USA there is a dedicated repository, the Qualitative Data Repository (QDR), hosted by Syracuse University.

 

If you have found a research article based on interesting data that is not held on a public repository, it is worth contacting the authors anyway to see if they are able to share it. Research based on government funding increasingly comes with stipulations that the data should be made freely available, but this is still a fairly new requirement, and investigators from other projects may still be willing and able to grant access. However, authors can be protective over their data, and may not have acquired consent from participants in such a way that they would be allowed to share the data with third parties. This is something to consider when you do your own work: make sure that you are able to give back to the research community and share your own data in the future.

 


Finally, a note of caution about tailored results. Google, Facebook and other online platforms do not show the same results in the same order to all people. They are customised for what they think you will be interested in seeing, based on your own search history and their assumptions about your gender, location, ethnicity and political leanings. This article explains the impact of the ‘filter bubble’, and this will affect the results that you get from social media (especially Facebook).

 

To get around this in a search, you can use a privacy-focused search engine (like DuckDuckGo), add &pws=0 to the end of a Google search URL, or use a browser in ‘Private’ or ‘Incognito’ mode. However, it’s much more difficult to get neutral results from Facebook, so bear this in mind.

 

Hopefully these tips have given you food for thought about the sources and limitations of secondary data. If you have any comments or suggestions, please share them in the forum. Don’t forget that Quirkos has simple copy-and-paste source generation that allows you to bring in secondary data from lots of different formats and internet feeds, and its visual interface makes coding and exploring them a breeze. Download a free trial from www.quirkos.com/get.html.

 

 

Developing and populating a qualitative coding framework in Quirkos

coding blog

 

In previous blog articles I’ve looked at some of the methodological considerations in developing a coding framework. This article looks at top-down and bottom-up approaches: whether you start with large overarching themes (a priori) and break them down, or begin with smaller, simpler themes and gradually draw out meanings and connections in an inductive approach. There is still a need in this series of articles to talk about the various approaches which are grouped together as grounded theory, but this will come in a future article.

 

For now, I want to leave the methodological and theoretical debates aside, and look purely at the mechanics of creating the coding framework in qualitative analysis software. While I’m going to be describing the process using Quirkos as the example software, the fundamentals will apply even if you are using Nvivo, MaxQDA, AtlasTi, Dedoose, or most of the other CAQDAS packages out there. It might help to follow this guide with the software of your choice; you can download a free trial of Quirkos right here and get going in minutes.

 

First of all, a slightly guilty confession: I personally always plan out my themes on paper first. This might sound a bit hypocritical coming from someone who designs software for a living, but I find myself being a lot more creative on paper, and there’s something about the physicality of scribbling all over a big sheet of paper that helps me think better. I do this a lot less now that Quirkos lets me physically move themes around the screen, group them by colour and topic, but for a big complicated project it’s normally where I start.

 

But the computer obviously allows you to create and manage hundreds of topics, and to rearrange and rename them (which is difficult to do on paper, even with pencil and eraser!). It will also make it easy to assign parts of your data to each of the topics, and to see all of the data associated with them. While paper notes may help you conceptually think through some of the likely topics in the study and connect them to your research questions, I would recommend moving to a QDA software package fairly early on in a project.

 

Obviously, whether you are taking an a priori or grounded approach will change whether you will be creating most of your themes before you start coding, or adding them as you go along. Either way, you will need to create your topics/categories/nodes/themes/bubbles or whatever you want to call them. In Quirkos the themes are informally called ‘Quirks’, and are represented by default as coloured bubbles. You can drag and move these anywhere around the screen, change their colours, and their size increases every time you add some text to them. It’s a neat way to get confirmation and feedback on your coding. In other software packages there will just be a number next to the list of themes that shows how many coding events belong to each topic.

 


In Quirkos, there are actually three different ways to create a bubble theme. The most common is the large (+) button at the top left of a canvas area. This creates a new topic bubble in a random place with a random colour, and automatically opens the Properties dialogue for you to edit it. Here you can change the name, for example to ‘Fish’ and put in a longer description: ‘Things that live in water and lay eggs’ so that the definition is clear to yourself and others. You can also choose the colour, from some 16 million options available. There is also the option to set a ‘level’ for this Quirk bubble, which is a way to group intersecting themes so that one topic can belong to multiple groups. For example, you could create a level called ‘Things in the sea’ that includes Fish, Dolphins and Ships, and another category called ‘Living things’ that has Fish, Dolphins and Lions. In Quirkos, you can change any of these properties at any time by right clicking on the appropriate bubble.

 

quirkos qualitative properties editor

 

Secondly, you can right click anywhere on the ‘canvas’ area that stores your topics to create a new theme bubble at that location. This is useful if you have a little cluster of topics on a similar theme, and you want to create a new related bubble near the other ones. Of course, you can move the bubbles around later, but this makes things a bit easier.

 

If you are creating topics on the fly, you can also create a new category by dragging and dropping text directly onto the same add Quirk button. This creates a new bubble that already contains the text you dragged onto the button. This time, the property dialogue doesn’t immediately pop up, so that you can keep adding more sections of data to the theme. Don’t forget to name it eventually though!

 

drag and drop qualitative topic creation

 

All software packages allow you to group your themes in some way, usually this is in a list or tree view, where sub-categories are indented below their ‘parent’ node. For example, you might have the parent category ‘Fish’ and the sub-categories ‘Pike’, ‘Salmon’ and ‘Trout’. Further, there might be sub-sub categories, so for example ‘Trout’ might have themes for ‘Brown Trout’, ‘Brook Trout’ and ‘Rainbow Trout’. This is a useful way to group and sort your themes, especially as many qualitative projects end up with dozens or even hundreds of themes.

 

In Quirkos, categories work a little differently. To make a theme a sub-category, just drag and drop that bubble onto the bubble that will be its parent, like stacking them. You will see that the sub-category goes behind the parent bubble, and when you move your mouse over the top category, the others will pop out, automatically arranging like petals from a flower. You can remove sub-categories just by dragging and pulling them out from the parent, just like picking petals from a flower! You can also create sub-sub-categories (i.e. up to three levels of depth), but no more than this. When a Quirk has subcategories clustered below it, this is indicated by a white ring inside the bubble. This method of operation makes creating clusters (and changing your mind) very easy and visual.

 

Now, to add something to the topic, you just have to select some text, and drag and drop it onto the bubble or theme. This will work in most software packages, although in some you can also right click within the selected text where you will find a list of codes to assign that section to.


Quirkos, like other software, will show coloured highlighted stripes over the text or in the margin that show in the document which sections have been added to which codes. In Quirkos, you can always see what topic the stripe represents by hovering the mouse cursor over the coloured section, and the topic name will appear in the bottom left of the screen. You can also right-click on the stripe and remove that section of text from the code at any time. Once you have done some coding, in most software packages you can double click on the topic and see everything you’ve coded at this point.

 

Hopefully this should give you confidence to let the software do what it does best: keep track of lots of different topics and what goes in them. How you actually choose which topics and methodology to use in your project is still up to you, but using software helps you keep everything together and gives you a lot of tools for exploring the data later. Don’t forget to read more about the specific features of Quirkos here and download the free trial from here.

 

Transcribing your own qualitative data

diy qualitative transcription

In a previous blog article I talked about some of the practicalities and costs involved in using a professional transcribing service to turn your beautifully recorded qualitative interviews and focus groups into text data ready for analysis. However, hiring a transcriber is expensive, and is often beyond the means of most post-graduate researchers.

 

There are also serious advantages to doing the transcription yourself: it makes for a better end result, and gets you much closer to your data. In this article I’m going to go through some practical tips that should make doing transcription a little less painful.

 

But first, a little more on the benefits of transcribing your own data. If you were there in the room with the respondent, you asked the questions, and were watching and listening to the participant. Do the transcription soon after the interview and you are likely to remember words that might be muffled in the recording, points that the respondent emphasised by shaking their head – lots of little details to capture.

 

It’s important to remember that transcription is an interpretive act (Bailey 2008): you can’t just convert an interview into a perfect text version of that data. While this might be obvious when working between different languages where translation is required, I would argue that a transcriber always makes subjective decisions about misheard words, how to record pauses and inflections, or unconsciously changes words or their order.

 

As I’ve mentioned before, you lose a lot of the nuance of an interview when moving to text, and the transcriber has to make choices about how to mitigate this: Was this hesitation, or just pausing for breath? How should I indicate that the participant banged on the table for emphasis? Capturing this non-verbal communication in a transcript can really change the interpretation of qualitative data, so I like it when this process is in the control of the researcher. For a lot more on these and other issues, there is a review of the qualitative transcription literature by Davidson (2009).


 

What do I actually type?

In a word, everything: the questions, the answers, the hesitations and mumbles, and things that were communicated, but not said verbally.

 

First, some guidelines for what the transcription should look like, bearing in mind that there is no one standard. You can use a word processor, or a spreadsheet like Excel. It can be a little more difficult to get formatting right in a spreadsheet, for example you will need to use Shift+Return to make a new paragraph within a cell, and getting it to look right on a printed page is more of a challenge. Yet since interviews and especially focus groups will usually have more than one voice to assign text to, you need some way to structure the data.

 

In a spreadsheet you can use three columns: the first for an occasional time index (so you can see where in the audio a section of text occurs), the second for the name of the speaker, and the third, widest one for the text itself. While you can use a table to do the same thing in Word, spreadsheets will auto-complete the names for you, making things a bit faster. However, for a one-on-one interview it’s easy to just use Q: / A: formatting for each speaker, and put periodic time stamps in brackets at the top of each page.
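For illustration, here is a minimal Python sketch (the file name, speaker labels and example lines are made up) that writes exactly this three-column layout to a CSV file that any spreadsheet program can open:

```python
import csv

# Illustrative transcript skeleton using the three-column layout
# described above: an occasional time index, the speaker, and the
# text itself. Time stamps are left blank where they aren't needed.
rows = [
    ["00:00:15", "Q", "Can you tell me about your experience?"],
    ["", "A", "Well... it started about two years ago [laughs]."],
    ["00:01:02", "Q", "And how did that change things for you?"],
]

with open("transcript.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Time", "Speaker", "Text"])
    writer.writerows(rows)
```

Starting from a skeleton like this keeps the formatting consistent across all your transcripts, which matters later when the files are imported into analysis software.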

 

Second, record non-verbal data in a consistent way, usually in square brackets. For example [hesitates], [laughter], [bangs fist on table], or even when [coffee is delivered]. You may choose to use italics or bold type to show when someone puts emphasis on a word, but choose one or the other and be consistent.

 

Next, consider your system for indicating pauses. Usually a short pause is represented by three dots ‘…’, while anything longer is recorded in square brackets and roughly timed: [5 second pause]. These pauses can show hesitation in the participant to answer a difficult question, and long pauses may have special meaning. There is actually a whole article on the importance of silences by Poland and Pederson (1998).

 

When you are transcribing, you also need to decide on the level of detail. Will you record every Um, Er, and stutter? In verbal speech these are surprisingly common. Most qualitative research does want this level of detail, but it is obviously more time consuming to type. You’ll often have corrections in the speech as well, commonly “I’ve… I’ll never say that ag... any more”. Do you include the first self correction? It’s clear in the audio the participant was going to say ‘again’ but changed themselves to ‘any more’ - should I record this? Decide on the level of detail early on, and be consistent.

 

Sometimes people can go completely off topic, or there will be a section in the audio where you were complaining about the traffic, ordering coffee, or a phone call interrupted things. If you decide it’s not relevant to capture, just indicate with time markings what happened in square brackets: [cup smashed on the floor, 5min to clear up].

 

Once you are done with an interview, it’s a good idea to listen back to the recording while reading through the transcript and correcting any mistakes. The first few times, you will be surprised at how often you swapped a few words or made strange typos.

 

 

So how long will it all take?

Starting out with all this can be daunting, especially if you have a large number of interviews to transcribe. A good rule of thumb is that transcribing an interview verbatim will take between 3 and 6 times longer than the audio. So for an hour of recording, it could take as little as three hours, or as much as six to type up.

 

This sounds horrifying, and it is. I’m quite a fast typist, and have done quite a bit of transcription before, but I still average between 3x and 4x the audio time. If you are slow at typing, need to pause the audio a lot, or have to put in a lot of extra descriptive detail, it can take a lot longer. The tips below should help you get towards the 3x benchmark, but it’s worth planning out your time a little before you begin.

 

If you have twenty interviews, each lasting on average one hour, you should probably plan for at least 60 hours of transcription time. That’s nearly nine working days, or the best part of two weeks at a standard 9-5 job. I don’t say this to frighten you, just to mentally acclimatise you to the task ahead!
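To make that arithmetic concrete, here is a back-of-the-envelope calculation (the figures are just the rule of thumb above, not data from any real project):

```python
# Back-of-the-envelope estimate using the 3x-6x rule of thumb above.
interviews = 20
audio_hours_each = 1.0

low_total = interviews * audio_hours_each * 3    # best case: 60 hours
high_total = interviews * audio_hours_each * 6   # worst case: 120 hours

hours_per_day = 7  # a 9-5 day, allowing an hour for lunch
best_case_days = low_total / hours_per_day       # roughly 8.6 working days
print(low_total, high_total, round(best_case_days, 1))
```

Even the best case is around two calendar weeks of solid work, and the worst case is double that, so it pays to budget the time before you start.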

 

It’s also worth noting that transcription is very intensive work. You will be frantically typing as fast as you can, and it requires extreme mental concentration to listen and type simultaneously, while also watching for errors and fixing typos. I don’t think most people could do sessions of more than two or three hours at a time without going a little crazy! So you need to plan in some breaks, or at least some different, non-typing work.

 

If this sounds insurmountable, don’t panic. Just spread out the work, especially if you can do the transcripts after each interview, instead of in one huge batch. This is generally better since you can review one interview before you do the next one, giving you a chance to change how you ask questions and cover any gaps. Transcription can also be quite engrossing (since you can’t possibly do anything else at the same time), and it’s nice to see the hours ticking off.

 

 

 

So how can you make this faster?

You need to set up your computer (or laptop) to be a professional transcribing station, where you can hear the audio, start and stop it easily, and type comfortably for a long period of time.

 

Even if you type really fast, you won’t be able to keep up with the speed that people speak, meaning you will have to frequently start and stop the audio to catch up. Most professionals will use a ‘foot-pedal’ to do this, so that they don’t have to stop typing, come out of the word processing software and pause an audio player. Even if you are playing audio from a dictaphone next to you, going away from the keyboard, stopping and starting the buttons on the dictaphone and coming back to type again quickly becomes tedious.

 

A foot-pedal lets you start and stop the audio by tapping with your foot (or toe) and often has additional buttons to rewind a little (very useful) or fast-forward through the audio. Now, these cost around £30/$40 or more, but can be a worthwhile investment. However, it’s also worth checking to see if you can borrow one from a colleague, or even if your department or library has one for hire.

 

But if you are a cheapskate like me, there are other ways to do this. Did you know that you can have two or more keyboards attached to a computer, and they will all work? An extra keyboard (with a USB connector) can cost as little as £10/$15 if you don’t already have a spare lying around, and can be plugged into a laptop as well. Put it on the floor, and you can set up one of its keys as a ‘global shortcut’ in an audio player like VLC, then tap your chosen key with your toe to start and stop. Here’s a forum post detailing how to set up a certain key so that it will start and stop the audio even if you are typing in another programme. Even if you only use one keyboard, you can set a shortcut in VLC (for example Alt+1), and every time you press that combination it will play or pause the audio, even if VLC player is hidden.

 

There’s another advantage to using VLC: it can slow down your recordings as they are played back! Once your audio is playing, click on the Playback menu item, then Speed. Change to Slower, and listen as your participants magically start talking like sleepy drunks! This helps me more than anything, because I can slow down the speech to a level that means I can type constantly without getting behind. This method does warp the speech, and having the setting too high can make it difficult to understand. However, the less you have to pause and stop the audio to catch up with your typing, the faster your transcription will go.

 

You can also do this with audio software like Audacity. Here, import your audio file, and click on Effect, and Change Tempo. Drag the slider to the left to slow down the speech (try 20% – 50%) without changing the ‘pitch’ so everyone doesn’t end up sounding like Barry White. You can then save the file with your desired speed, and the quality can be a little better than the live speed changes in VLC.

 

General tips for good typing can help too. Watch the screen as you type, not your fingers, so that you can quickly pick up on mistakes. Learn to use all your fingers to type, don’t just ‘hunt and peck’ - a quick typing tutorial might save you hours in the long run if you don’t do this already.

 

Last of all, consider your posture. I’m serious! If you are going to be hunched up and typing for days and days, bad posture is going to make you ache and get stressed. Make sure your desk and chair are the right height for you, try using a proper keyboard if working from a laptop (or at least prop up the laptop to a good angle). Make sure the lighting is good, there is no screen glare, and use a foot rest if this helps the position of your back. Scrunched up on a sofa with a laptop in your lap for 60 hours is a great way to get cramp, back-ache and RSI. Try and take a break at least every half an hour: get up and stretch, especially your hands and arms.

 

So, you have your beautiful and detailed transcripts? Now you can bring them into Quirkos to analyse them! Quirkos is ideal for students doing their first qualitative analysis project, as it makes coding and analysis of text visual, colourful and easy to learn. There’s a free trial on our website, and you can bring in data from lots of different sources to work with.

 

Sampling considerations in qualitative research

sampling crowd image by https://www.flickr.com/photos/jamescridland/613445810/in/photostream/

 

Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research, but it is a huge and complicated issue. There’s a great chapter ‘Designing and Selecting Samples’ in the book Qualitative Research Practice (Ritchie et al 2013) which goes over many of these methods in detail.

 

Your research questions and methodological approach (e.g. grounded theory) will guide you to the right sampling methods for your study – there is never a one-size-fits-all approach in qualitative research! For more detail on this, especially on the importance of culturally embedded sampling, there is a well-cited article by Luborsky and Rubinstein (1995). But it’s also worth talking to colleagues, supervisors and peers to get advice and feedback on your proposals.

 

Marshall (1996) briefly describes three different approaches to qualitative sampling: judgement/purposeful sampling, theoretical sampling and convenience sampling.

 

But before you choose any approach, you need to decide what you are trying to achieve with your sampling. Do you have a specific group of people that you need to have in your study, or should it be representative of the general population? Are you trying to discover something about a niche, or something that is generalizable to everyone? A lot of qualitative research is about a specific group of people, and Marshall notes:
“This is a more intellectual strategy than the simple demographic stratification of epidemiological studies, though age, gender and social class might be important variables. If the subjects are known to the researcher, they may be stratified according to known public attitudes or beliefs.”

 

Broadly speaking, convenience, judgement and theoretical sampling can all be seen as purposeful – deliberately selecting people of interest in some way. However, randomly selecting people from a large population is still a desirable approach in some qualitative research. Because qualitative studies tend to have a small sample size, due to the in-depth nature of engagement with each participant, this can have an impact if you want a representative sample. If you randomly select 15 people, you might by chance end up with more women than men, or a younger than desired sample. That is why qualitative studies may add a little purposeful sampling, finding people to make sure the final profile matches the desired sampling frame. For much more on this, check out the last blog article on recruitment.
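As a quick illustration of that chance imbalance (a toy simulation, not data from any real study), drawing 15 people at random from a perfectly gender-balanced population still quite often produces a noticeably skewed sample:

```python
import random

# Toy simulation of the point above: small random samples from a
# perfectly balanced population can come out skewed just by chance.
random.seed(42)  # fixed seed so the run is repeatable
population = ["woman"] * 500 + ["man"] * 500

trials = 10_000
skewed = 0
for _ in range(trials):
    sample = random.sample(population, 15)
    # count a sample as noticeably skewed if 10+ of 15 share a gender
    if sample.count("woman") >= 10 or sample.count("man") >= 10:
        skewed += 1

print(f"{skewed / trials:.0%} of random samples had 10+ of one gender")
```

Roughly three in ten small samples end up that lopsided, which is exactly why a purposeful top-up against the sampling frame can be needed.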

 

Sample size will often also depend on conceptual approach: if you are testing a prior hypothesis, you may be able to get away with a smaller sample size, while a grounded theory approach to develop new insights might need a larger group of respondents to test that the findings are applicable. Here, you are likely to take a ‘theoretical sampling’ approach (Glaser and Strauss 1967) where you specifically choose people who have experiences that would contribute to a theoretical construct. This is often iterative, in that after reviewing the data (for theoretical insights) the researcher goes out again to find other participants the model suggests might be of interest.

 

The convenience sampling approach, which Marshall mentions as being the ‘least rigorous’ technique, is where researchers target the most ‘easily accessible’ respondents. This could even be friends, family or faculty. This approach can rarely be methodologically justified, and is unlikely to provide a representative sample. However, it is endemic in many fields, especially psychology, where researchers tend to turn to easily accessible psychology students for experiments: skewing the results towards white, rich, well-educated Western students.

 

Now we turn to snowball sampling (Goodman 1961). This differs from purposeful sampling in that new respondents are suggested by existing ones. In general, it is best suited to work with marginalised or ‘hard-to-reach’ populations, where potential respondents are not often forthcoming (Sadler et al. 2010). For example, people may not be open about their drug use, political views or stigmatising health conditions, yet often form closely connected networks. Thus, by gaining the trust of one person in the group, the researcher can be recommended to others. However, it is important to note the limitations of this approach: there is a risk of systematic bias, because if the first person you recruit is unrepresentative in some way, their referrals may be too. For example, if you are studying people living with HIV/AIDS and recruit through a support group made up entirely of men, they are unlikely to suggest women for the study.

 

For these reasons there are limits to the generalisability and appropriateness of snowball sampling for most subjects of inquiry, and it should not be taken as an easy fix. Yet while many practitioners stress the limitations of snowball sampling, it can be very well suited to certain kinds of social and action research: Noy (2008) outlines some of its potential benefits for power relations and for studying social networks.

 

Finally, there is the issue of sample size and ‘saturation’: the point at which enough data has been collected to confidently answer the research questions. For a lot of qualitative research this means data that has been coded as well as collected, especially when using some variant of grounded theory. However, saturation is often a source of anxiety for researchers: see, for example, the amusingly titled article “Are We There Yet?” by Fusch and Ness (2015). Unlike quantitative studies, where a sample size can be determined from the desired effect size and confidence interval of a chosen statistical test, it is difficult to put an exact number on the right number of participant responses. This is especially true because the responses themselves are qualitative, not just numbers in a list: one response may be far richer in data than another.

 

While a general rule of thumb suggests there is no harm in collecting more data than is strictly necessary, there is always a practical limitation, especially in resource- and time-constrained postgraduate studies. It can also be more difficult to recruit than anticipated, and many projects working with very specific or hard-to-reach groups can struggle to reach their target sample size. This is not always a disaster, but it may require re-examining the research questions to see what insights and conclusions are still obtainable.

 

Generally, researchers should have a target sample size and definition of what data saturation will look like for their project before they begin sampling and recruitment. Don’t forget that qualitative case studies may only include one respondent or data point, and in some situations that can be appropriate. However, getting the sampling approach and sample size right is something that comes with experience, advice and practice.

 

As I always seem to be saying in this blog, it’s also worth considering the intended audience for your research outputs. If you want to publish in a certain journal or academic discipline, it may not be receptive to research based on qualitative methods with small or ‘non-representative’ samples. Silverman (2013, p.424) mentions this explicitly, with examples of students who had publications rejected for these reasons.

 

So as ever, plan ahead for what you want to achieve with your research project and the questions you want to answer, and work backwards to choose the appropriate methodology, methods and sample for your work. Also check the companion article on recruitment, as most of these issues need to be considered in tandem.

 

Once you have your data, Quirkos can be a great way to analyse it, whether your sample includes one respondent or dozens! There is a free trial and example data sets so you can see for yourself if it suits your way of working, and much more information in these pages. We also have a newly relaunched forum, with specific sections on qualitative methodology, if you want to ask questions or comment on anything raised in this blog series.