Participant diaries for qualitative research


I’ve written a little about this before, but I really love participant diaries!


In qualitative research, you are often trying to understand the lives, experiences and motivations of other people. Through methods like interviews and focus groups, you can get a one-off insight into people’s own descriptions of themselves. If you want to measure change over a period, you need to schedule a series of meetings, each of which will be limited by what a participant can recall and is willing to share.


However, using diary methodologies, you can get a longer and much more regular insight into lived experiences, and you also change the researcher-participant power dynamic. Interviews and focus groups can sometimes feel like a bit of an interrogation, with the researcher asking the questions and participants given the role of answering them. With diaries, participants have more autonomy to share what they want, as well as where and when (Meth 2003).


These techniques are also called self-report or ‘contemporaneous assessment’ methods, and there are actually a lot of different ways you can collect diary entries. There are some great reviews of diary-based methods (e.g. Bolger et al. 2003), but let’s look at some of the different approaches.


The most obvious is to give people a little journal or exercise book to write in, and ask them to record on a regular basis any aspects of their day that are relevant to your research topic. If they are expected to make notes on the go, make it a slim pocket-sized one. If they are going to write a more traditional diary at the end of each day, give them a nice exercise book to work in. I’ve actually found that people end up getting quite attached to their diaries, and will often ask for them back. So make sure you have some way to copy or transcribe them, and consider offering to return them once you have examined them, or giving back a copy if you wish to keep hold of the real thing.

 

You can also do voice diaries – something I tried in Botswana. We were initially worried that literacy levels in rural areas would mean that participants would be either unable or reluctant to create written entries. So I offered everyone a small voice recorder, where they could record spoken notes that we would transcribe at the end of the session. While you could give a group of people an inexpensive (~£20) Dictaphone, I actually bought a bunch of cheap no-brand MP3 players which only cost ~£5 each, had a built-in voice recorder and headphones, and could run on a single AAA battery (easy to find in local shops, since few respondents had electricity for recharging). The audio quality was not great, but perfectly adequate. People really liked these because they could also play music (and had a radio), and they were cheap enough to be lost, or left as thank-you gifts at the end of the research.

 

There is also a large literature on ‘experience sampling’, where participants are prompted at regular or random intervals to record what they are doing or how they are feeling at that moment. Initially this work was done using pagers (Larson 1989), when participants would be ‘beeped’ at random times during the day and asked to write down what they were doing at the time. More recent studies have used smartphones to both prompt and directly collect responses (Chen et al. 2014).

 

Finally, there is now a lot of online journal research, either solicited by researchers as part of a qualitative research project (Kaun 2015), or collected from people’s blogs and social media posts. This is especially popular in market research looking at consumer behaviour (Patterson 2005), and in project evaluation (Cohen et al. 2006).

 

Diary methods can create detailed and reliable data. One study that asked participants to record diary entries three times a day to measure stigmatised behaviours like sexual activity found an 89.7% adherence rate (Hensel et al. 2012), far higher than would be expected from traditional survey methods. There is a lot of diary-based research in the sexual and mental health literature: for more discussion of the discrepancies and relative reliability of diary and recall methods there is a good overview in Coxon (1999), but many studies, like Garry et al. (2002), found that diary-based methods generated more accurate responses. Note that these kinds of studies tend to be mixed-method, collecting both discrete quantitative data and open-ended qualitative comments.

 

Whatever method you choose, it’s important to set up some clear guidelines for participants to follow. Personally, I think either a telephone conversation or a face-to-face meeting is a good idea, to give participants a chance to ask questions. If you’ve not done research diaries before, it’s a good idea to pilot them with one or two people to make sure you are briefing people clearly, and that they can write useful entries for you. The guidelines (explained in person, and taped to the inside of the diary) should make clear:

  • What you are interested in hearing about
  • What it will be used for
  • How often you expect people to write
  • How much they should write
  • How to get in touch with you
  • How long they should be writing entries, and how to return the diary.

 

Even if you expressly specify that journals should be written every day for three weeks, you should be prepared for the fact that many participants won’t manage this. You’ll have some that start well but lapse, others that forget until the end and do it all on the last day before they see you, and everything in-between. You need to assume this will happen with some or all of your respondents, and consider how it will affect how you interpret the data and draw conclusions. It shouldn’t necessarily mean that the data is useless, just that you need to be aware of the limitations when analysing it. There will also be a huge variety in how much people write, despite your guidelines. Some will love the experience, sharing volumes of long entries; others might just write a few sentences, which can still be revealing.

 

For these reasons, diary-like methodologies are usually used in addition to other methods, such as semi-structured interviews (Meth 2003) or detailed surveys. Diaries can be used to triangulate claims made by respondents in different data sources (Schroder 2003), or to provide more richness and detail in the individual narrative. From the researcher’s point of view, the difference between data where a respondent says they have been bullied and an account of a specific incident recorded that day is significant, and gives far greater depth and illumination of the underlying issues.

 


 

However, you also need to carefully consider confidentiality and other ethical issues. Often participants will share a lot of personal information in diaries, and you must agree how you will deal with this and anonymise it for your research. While many respondents find keeping a qualitative diary a positive and reflexive process, it can be stressful to ask people in difficult situations to reflect on uncomfortable issues. There is also the risk that the diary could be lost, or read by other people mentioned in it, creating a potential disclosure risk to participants. Depending on what you are asking about, it might be wise to ask participants themselves to create anonymised entries, using pseudonyms for people and places as they write.

 

Last, but not least, what about your own diary? Many researchers will keep a diary, journal or ‘field notes’ during the research process (Altrichter and Holly 2004), which can provide context and reflexivity, as well as being a good way of recording thoughts on ideas and issues that arise during data collection. This is also a valuable source of qualitative data in itself, and it’s often useful to include your journal in the analysis process – if not coded, then at least to remind you of your own reflections and experiences during the research journey.

 

So how can you analyse the text of your participant diaries? In Quirkos, of course! Quirkos takes all the basics you need to do qualitative analysis, and puts them in a simple, easy-to-use package. Try for yourself with a free trial, or find out more about the features and benefits.

 

Sharing qualitative research data from Quirkos


Once you’ve coded, explored and analysed your qualitative data, it’s time to share it with the world. For students, the first audience will be supervisors; for researchers it might be peers or the wider research community; and for market research firms, it will be their clients. Regardless of who the end user of your research is, Quirkos offers a lot of different ways to get your hard-earned coding out into the real world.

 

Share your project file
The best and easiest way to share your coded data is to send your project file to someone. If they have a copy of Quirkos (even the trial) they will be able to explore the project in the same way you can, and you can work on it collaboratively. Files are compatible across Windows, Mac and Linux, and are small enough to be e-mailed, or put on a USB stick or Dropbox as needed.

 

Word export
One of people’s favourite features is the Word export, which creates a standard Word file of your data, with comments and coloured highlights showing your complete coding. This means that pretty much anyone can see your coding, since the file will open in Microsoft Office, LibreOffice/OpenOffice, Google Docs, Pages (on Mac) and many others. It’s also a great way to print out your project if you prefer to read through it on paper, while still being able to see all your coding. If you print the ‘Full Markup’ view, you will still be able to see the name (and author) of each code, even on a black-and-white printer!


There are two options available in the ‘Project’ button – either ‘Export All Sources as Word Document’ which creates one long file, or ‘Export Each Source’ which creates a separate file for each source in the project in a folder you specify.

 

Reports
This is the most conventional output in Quirkos: a customisable document which gives a summary of the project and an ordered list of coded text segments. It also includes graphical views of your coding framework, including the clustered views which show the connections between themes. When generated in Quirkos, you will get a two-column display, with a preview of how the report will look on the left, and all the options for what you want to include in the report on the right.


You can print this directly, save it as a PDF document, or even save it as a webpage. This last option creates a report folder that anyone can open, explore and customise in their browser, in the same way as you can in the Quirkos report view. The folder also contains all the images in the report (such as the canvas and overlap views), which you can then include directly in presentations or articles.



There are many options available here, including the ability to list all quotes by source (ie everything one person said) or by theme (ie everything everyone said on one topic). You can change how these quotes are formatted (by making the text or highlight into the colour of the Quirk) and the level of detail, such as whether to include the source name, properties and percentage of coding.

 

Sub-set reports (query view)
By default, the report button will generate output for the whole project. But if you want to get responses from just a sub-set of your data, you can generate reports containing only the results of filters from the query view. So you could generate a report that only shows the responses from men, or from women, or from one of the authors in the project.

 

CSV export
Quirkos also gives you the option to export your project as CSV files – a common spreadsheet format which you can open in Excel, SPSS or equivalents. This allows you to do more quantitative analysis in statistical software, generate graphs of your coding, and conduct more detailed sub-analysis. The CSV export creates a series of files which represent the different tables in the project database, with v_highlight.csv containing your coded quotes. Other files contain the questions and answers (in a structured project), a list of all your codes, levels, and source properties (also called metadata).
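
If you want to process these files programmatically, a minimal sketch in Python might look like the following. It assumes the export produced a v_highlight.csv as described above; the exact column names vary between versions, so the code inspects the header row rather than assuming them:

    # Sketch: load the exported coded quotes for further analysis.
    # Assumes v_highlight.csv is in the current folder (see above);
    # check the header row for the exact columns in your export.
    import csv

    with open("v_highlight.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        print("Columns:", reader.fieldnames)   # what the export contains
        quotes = list(reader)                  # one dict per coded segment

    print(len(quotes), "coded quotes loaded")

From here you could count coded segments per theme, or pass the quote text on to other analysis tools.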

 

Database editing
For true power users, there is also the option to perform full SQL operations on your project file. Since Quirkos saves all your project data as a standard SQLite database, it’s possible to open and edit it with a number of third-party tools, such as DB Browser for SQLite, to perform advanced operations. You can also use standard SQL statements (SELECT … FROM … WHERE) from the sqlite3 command line to explore and edit the database. Our full manual has more details on the database structure. Hopefully, this will also allow for better integration with other qualitative analysis software in the future.
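
As an illustrative sketch (not official Quirkos documentation), here is how you might open a project with Python’s built-in sqlite3 module and list its tables. The project file name is hypothetical, and it’s wise to work on a copy rather than the original:

    # Sketch: open a Quirkos project file as a plain SQLite database.
    # "myproject_copy.qrk" is a hypothetical name -- use a COPY of your
    # own project file, and check the manual for the actual table names.
    import sqlite3

    conn = sqlite3.connect("myproject_copy.qrk")
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    for (name,) in tables:
        print(name)   # verify against the manual before editing anything
    conn.close()

Listing the tables first, rather than assuming their names, is a safe way to orient yourself before running any SELECT or UPDATE statements.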

 

If you are interested in seeing how Quirkos can help with coding and presenting your qualitative data, you can download a one-month free trial and try for yourself. Good luck with your research!

 

Tools for critical appraisal of qualitative research


I've mentioned before how the general public are very quantitatively literate: we are used to dealing with news containing graphs, percentages, growth rates, and big numbers, and they are common enough that people rarely have trouble engaging with them.

 

In many fields of study this is also true for researchers and those who use evidence professionally. They become accustomed to p-values, common statistical tests, and plots and charts. Lots of research is based on quantitative data, and there is a training and familiarity in these methods and data presentation techniques which creates a lingua franca for researchers across disciplines and regions.

 

However, I've found in previous research that many evidence-based decision makers are not comfortable with qualitative research. There are many reasons for this, but I frequently hear people essentially say that they don't know how to appraise it. While they can look at a sample size, a recruitment technique and an r-squared value and get an idea of the limitations of a study, this is much harder for many practitioners to do with qualitative techniques they are less familiar with.

 

But this needn’t be the case: qualitative research is not rocket science, and there are fundamental common values which can be used to assess the quality of a piece of research. This week, a discussion on the appraisal of qualitative research was started on Twitter by the Mental Health group of the 'National Elf Service’ (@Mental_Elf) – an organisation devoted to collating and summarising health evidence for practitioners.

 

People contributed many great suggestions of guides and toolkits that anyone can use to examine and critique a qualitative study, even if the user is not familiar with qualitative methodologies. I frequently come across this barrier to promoting qualitative research in public sector organisations, so was halfway through putting together these resources when I realised they might be useful to others!

 

First of all, David Nunan (@dnunan79) based at the University of Oxford shared an appraisal tool developed at the Centre for Evidence-Based Medicine (@CebmOxford).

 

Lucy Terry (@LucyACTerry) offered specific guidelines for charities from New Philanthropy Capital, which give five key quality criteria: that the research should be Valid, Reliable, Confirmable, Reflexive and Responsible.

 

There’s also an article by Kuper et al (2008) which offers guidance on assessing a study using qualitative evidence. As a starting point, they list six questions to ask:

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?
     

The International Centre for Allied Health Evidence at the University of South Australia has a list of critical appraisal tools, including ones specific to qualitative research. From these, I quite like the checklist format of the one developed by the Critical Appraisal Skills Programme; I can imagine this going down well with health commissioners.

 

Another, from the Occupational Therapy Evidence-Based Practice Research Group at McMaster University in Canada, is more detailed, and is also available in multiple languages and as an editable Word document.

 

Finally, Margaret Roller and Paul Lavrakas have a recent textbook (Applied Qualitative Research Design: A Total Quality Framework Approach, 2015) that covers many of these issues, and details the Total Quality Framework, which can be used for designing, discussing and evaluating qualitative research. The book contains specific chapters detailing the application of the framework to different projects and methodologies. Margaret Roller also has an article on her excellent blog on weighing the value of qualitative research, which gives an example of the Total Quality Framework in use.

 

In short, there are a lot of options to choose from, but the take-away message is that the questions are simple, short, and largely common sense. However, the process of assessing even just a few pieces of qualitative research in this way will quickly get evidence-based practitioners into the habit of asking these questions of most projects they come across, hopefully increasing their comfort in dealing with qualitative studies.

 

The tools are also useful for students, even if they are familiar with qualitative methodologies, as it helps facilitate a critical reading that can give focus to paper discussion groups or literature reviews. Adopting one of the appraisal techniques here (or modifying one) would also be a great start to a systematic review or meta-analysis.

 

Finally, there are a few sources from the Evidence and Ethnicity in Commissioning project I was involved with that might be useful, but if you have any suggestions please let me know, either in the forum or by e-mailing daniel@quirkos.com, and I will add them to the list. Don't forget to find out more about using Quirkos for your qualitative analysis and download the free trial.

 

 

Finding, using and some cautions on secondary qualitative data


 

Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you end up having to collect your own data, looking for other sources already out there can help prevent redundancy and improve your conceptualisation of a project. Broadly you can think of two different types of secondary data: sources collected previously for specific research projects, and data 'scraped' from other sources, such as Hansard transcripts or Tweets.

 

Other data sources include policy documents, news articles, social media posts, other research articles, and repositories of qualitative data collected by other researchers, which might be interviews, focus groups, diaries, videos or other sources. Secondary analysis of qualitative data is not very common, since the data tends to be collected with a very narrow research interest, and at a depth that makes anonymisation and aggregation difficult. However, there is a huge amount of rich data that could be used again for other subjects: see Heaton (2008) for a general overview, and Irwin (2013) for discussion of some of the ethical issues.

 

The advantage of using qualitative data analysis software is that you can keep many disparate sources of evidence together in one place. If you have an article positing a particular theory, you can quickly cross-code supportive evidence with sections of text from that article, and show why your data does or does not support the literature. Examining social policy? Put the law or government guidelines into your project file, and cross-reference statements from participants that challenge or support these dictates in practice. The best research does not exist in isolation: it must engage with both the existing literature and real policy to make an impact.

 


Data from other sources has the advantage that the researcher doesn’t have to spend time and resources on collection and recruitment. However, the corresponding disadvantage is that the data was not specifically obtained to meet your particular research questions or needs. For example, data from Twitter might give valuable insights into people’s political views. But the statements people make do not always equate with their views (this is true for directly collected research methods as well), so someone may make a controversial statement just to get more followers, or suppress their true beliefs if they believe that expressing them will be unpopular.

 

Each website has its own culture as well, which can affect what people share, and in how much detail. A paper by Hine (2012) shows that posts on the popular UK 'Mumsnet' forum reflect particular attitudes that are acceptable there, with posters often looking for validation of their behaviour from others. Twitter and Facebook are no exception: they each have different styles and norms of acceptable posting that any true internet ethnographer should understand well!

 

Even when using secondary data collected for a specific academic research project, the data might not be suitable for your needs. A great series of qualitative interviews about political views may seem a great fit for your research, but might not have asked a key question (for example about respondents’ parents’ beliefs) which renders the data unusable for your purpose. Additionally, it is usually impossible to identify respondents in secondary data sets to ask follow-up questions, since the data is anonymised. It’s sometimes even difficult to see the original research questions and interview schedule, and so to find out what questions were asked and for what purpose.

 

But despite all this, it is usually a good idea to look for secondary sources. They might give you an insight into the area of study you hadn’t considered, highlighting interesting issues that other research has picked up on. They might also reduce the amount of data you need to collect: if someone has done something similar in this area, you can design your data collection to highlight relevant gaps, and build on what has been done before (theoretically, all research should do this to a certain degree).

 

I know it’s something I keep reiterating, but it’s really important to understand who your data represents: you need some kind of contextual or demographic data. This is sometimes difficult when using data gathered from social media, where people are often given the option to state only very basic details, such as gender, location or age, and many may not disclose even these. It can also be a pain to extract comments from social media in such a way that the identity of the poster is kept together with their posts – however, there are third-party tools that can help with this.

 

When writing up your research, you will also want to make explicit how you found and collected this source of data. For example, if you are searching Twitter for a particular hashtag or phrase, when did you run the search? If you run it the next day, or even the next minute, the results will be different. How far back did you include posts? What languages? Are there comments that you excluded – especially ones that look like spam or promotional posts? Think about making it replicable: what information would someone need to get the same data as you?
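
As a sketch of what this could look like in practice, you might keep a small structured log alongside each search – the fields below are just a suggestion, not a standard:

    # An illustrative search log (field names and values are hypothetical).
    # The aim is simply that someone else could re-run the same query
    # and understand why their results might differ from yours.
    search_log = {
        "platform": "Twitter",
        "query": "#examplehashtag",            # hypothetical search term
        "date_run": "2016-03-01 14:30 GMT",
        "date_range_of_posts": "2015-01-01 to 2016-02-29",
        "languages": ["en"],
        "exclusions": "posts that looked like spam or promotion",
    }

Whether you keep this as code, a spreadsheet or a page in your field notes matters less than recording it at the time, while the details are still fresh.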

 

You should also try and be as comprehensive as possible. If you are examining newspaper articles for something that has changed over time (such as use of the phrase ‘tactical warfare’) don’t assume all your results will be online. While some projects have digitised newspaper archives from major titles, there are a lot of sources that are still print only, or reside in special databases. You can gain help and access to these from national libraries, such as the British Library.


There are growing repositories of open access data, including qualitative datasets. A good place to start is the UK Data Service, even if you are outside the UK, as it contains links to a number of international stores of qualitative data. Start here, but note that you will generally have to register, or even gain approval, to access some datasets. This shouldn’t put you off, but don’t expect to always be able to access the data immediately, and be prepared to make a case for why you should be granted access. In the USA there is a qualitative-specific repository, the Qualitative Data Repository (QDR), hosted by Syracuse University.

 

If you have found a research article based on interesting data that is not held in a public repository, it is worth contacting the authors anyway to see if they are able to share it. Research based on government funding increasingly comes with stipulations that the data should be made freely available, but this is still a fairly new requirement, and investigators from other projects may still be willing and able to grant access. However, authors can be protective of their data, and may not have acquired consent from participants in such a way that they are allowed to share the data with third parties. This is something to consider when you do your own work: make sure that you are able to give back to the research community and share your own data in the future.

 


Finally, a note of caution about tailored results. Google, Facebook and other platforms do not show the same results in the same order to all people. Results are customised to what they think you will be interested in seeing, based on your own search history and their assumptions about your gender, location, ethnicity, and political leanings. This article explains the impact of the ‘filter bubble’, which will affect the results that you get from search engines and social media (especially Facebook).

 

To get around this in a search, you can use a privacy-focused search engine (like DuckDuckGo), add &pws=0 to the end of a Google search URL (e.g. https://www.google.com/search?q=your+search+terms&pws=0), or use a browser in ‘Private’ or ‘Incognito’ mode. However, it’s much more difficult to get neutral results from Facebook, so bear this in mind.

 

Hopefully these tips have given you some food for thought about the sources and limitations of secondary data. If you have any comments or suggestions, please share them in the forum. Don’t forget that Quirkos has simple copy and paste source generation that allows you to bring in secondary data from lots of different formats and internet feeds, and the visual interface makes coding and exploring them a breeze. Download a free trial from www.quirkos.com/get.html.

 

 

Developing and populating a qualitative coding framework in Quirkos


 

In previous blog articles I’ve looked at some of the methodological considerations in developing a coding framework, including top-down and bottom-up approaches: whether you start with large overarching (a-priori) themes and break them down, or begin with smaller, simpler themes and gradually build up meanings and connections in an inductive approach. This series also needs to cover the various approaches grouped together as grounded theory, but that will come in a future article.

 

For now, I want to leave the methodological and theoretical debates aside, and look purely at the mechanics of creating a coding framework in qualitative analysis software. While I’m going to be describing the process using Quirkos as the example software, the fundamentals will apply even if you are using Nvivo, MaxQDA, AtlasTi, Dedoose, or most of the other CAQDAS packages out there. It might help to follow this guide with the software of your choice; you can download a free trial of Quirkos right here and get going in minutes.

 

First of all, a slightly guilty confession: I personally always plan out my themes on paper first. This might sound a bit hypocritical coming from someone who designs software for a living, but I find myself being a lot more creative on paper, and there’s something about the physicality of scribbling all over a big sheet of paper that helps me think better. I do this a lot less now that Quirkos lets me physically move themes around the screen, group them by colour and topic, but for a big complicated project it’s normally where I start.

 

But the computer obviously allows you to create and manage hundreds of topics, and to rearrange and rename them (which is difficult to do on paper, even with pencil and eraser!). It also makes it easy to assign parts of your data to a topic, and see all of the data associated with it. While paper notes may help you think conceptually through some of the likely topics in the study and connect them to your research questions, I would recommend moving to a QDA software package fairly early on in your project.

 

Obviously, whether you are taking an a-priori or grounded approach will determine whether you create most of your themes before you start coding, or add to them as you go along. Either way, you will need to create your topics/categories/nodes/themes/bubbles, or whatever you want to call them. In Quirkos the themes are informally called ‘Quirks’, and are represented by default as coloured bubbles. You can drag and move these anywhere around the screen, change their colours, and their size increases every time you add some text to them – a neat way to get confirmation and feedback on your coding. In other software packages there will just be a number next to the list of themes that shows how many coding events belong to each topic.

 


In Quirkos, there are actually three different ways to create a bubble theme. The most common is the large (+) button at the top left of a canvas area. This creates a new topic bubble in a random place with a random colour, and automatically opens the Properties dialogue for you to edit it. Here you can change the name, for example to ‘Fish’ and put in a longer description: ‘Things that live in water and lay eggs’ so that the definition is clear to yourself and others. You can also choose the colour, from some 16 million options available. There is also the option to set a ‘level’ for this Quirk bubble, which is a way to group intersecting themes so that one topic can belong to multiple groups. For example, you could create a level called ‘Things in the sea’ that includes Fish, Dolphins and Ships, and another category called ‘Living things’ that has Fish, Dolphins and Lions. In Quirkos, you can change any of these properties at any time by right clicking on the appropriate bubble.

 


 

Secondly, you can right click anywhere on the ‘canvas’ area that stores your topics to create a new theme bubble at that location. This is useful if you have a little cluster of topics on a similar theme, and you want to create a new related bubble near the other ones. Of course, you can move the bubbles around later, but this makes things a bit easier.

 

If you are creating topics on the fly, you can also create a new category by dragging and dropping text directly onto the same add-Quirk button. This creates a new bubble that already contains the text you dragged onto the button. This time, the property dialogue doesn’t immediately pop up, so that you can keep adding more sections of data to the theme. Don’t forget to name it eventually, though!

 


 

All software packages allow you to group your themes in some way, usually this is in a list or tree view, where sub-categories are indented below their ‘parent’ node. For example, you might have the parent category ‘Fish’ and the sub-categories ‘Pike’, ‘Salmon’ and ‘Trout’. Further, there might be sub-sub categories, so for example ‘Trout’ might have themes for ‘Brown Trout’, ‘Brook Trout’ and ‘Rainbow Trout’. This is a useful way to group and sort your themes, especially as many qualitative projects end up with dozens or even hundreds of themes.

 

In Quirkos, categories work a little differently. To make a theme a sub-category, just drag and drop its bubble onto the bubble that will be its parent, like stacking them. You will see that the sub-category goes behind the parent bubble, and when you move your mouse over the top category, the others will pop out, automatically arranging like petals of a flower. You can remove a sub-category just by dragging it out from the parent – like picking petals from a flower! You can also create sub-sub-categories (ie up to three levels of depth), but no more than this. When a Quirk has subcategories clustered below it, this is indicated by a white ring inside the bubble. This way of working makes creating clusters (and changing your mind) very easy and visual.

 

Now, to add something to a topic, you just select some text, and drag and drop it onto the bubble or theme. This will work in most software packages, although in some you can also right-click within the selected text to find a list of codes to assign that section to.


Quirkos, like other software, will show coloured highlighted stripes over the text or in the margin that show in the document which sections have been added to which codes. In Quirkos, you can always see what topic the stripe represents by hovering the mouse cursor over the coloured section, and the topic name will appear in the bottom left of the screen. You can also right-click on the stripe and remove that section of text from the code at any time. Once you have done some coding, in most software packages you can double click on the topic and see everything you’ve coded at this point.

 

Hopefully this should give you confidence to let the software do what it does best: keep track of lots of different topics and what goes in them. How you actually choose which topics and methodology to use in your project is still up to you, but using software helps you keep everything together and gives you a lot of tools for exploring the data later. Don’t forget to read more about the specific features of Quirkos here and download the free trial from here.

 

Transcribing your own qualitative data


In a previous blog article I talked about some of the practicalities and costs involved in using a professional transcribing service to turn your beautifully recorded qualitative interviews and focus groups into text data ready for analysis. However, hiring a transcriber is expensive, and is often beyond the means of most post-graduate researchers.

 

There are also serious advantages to doing the transcription yourself: it makes for a better end result, and gets you much closer to your data. In this article I’m going to go through some practical tips that should make doing transcription a little less painful.

 

But first, a little more on the benefits of transcribing your own data. If you were there in the room with the respondent, you asked the questions, and were watching and listening to the participant. Do the transcription soon after the interview and you are likely to remember words that might be muffled in the recording, points that the respondent emphasised by shaking their head – lots of little details to capture.

 

It’s important to remember that transcription is an interpretive act (Bailey 2008): you can’t just convert an interview into a perfect text version of that data. While this might be obvious when working between different languages where translation is required, I would argue that a transcriber always makes subjective decisions about misheard words, how to record pauses and inflections, or unconsciously changes words or their order.

 

As I’ve mentioned before, you lose a lot of the nuance of an interview when moving to text, and the transcriber has to make choices about how to mitigate this: Was this hesitation, or just pausing for breath? How should I indicate that the participant banged on the table for emphasis? How this non-verbal communication is captured in a transcript can really change the interpretation of qualitative data, so I like it when this process is in the control of the researcher. For a lot more on these and other issues, there is a review of the qualitative transcription literature by Davidson (2009).


 

What do I actually type?

In a word, everything: the questions, the answers, the hesitations and mumbles, and things that were communicated, but not said verbally.

 

First, some guidelines for what the transcription should look like, bearing in mind that there is no one standard. You can use a word processor, or a spreadsheet like Excel. It can be a little more difficult to get formatting right in a spreadsheet – for example, you will need a special keystroke (Alt+Return in Excel on Windows) to make a new paragraph within a cell, and getting it to look right on a printed page is more of a challenge. Yet since interviews, and especially focus groups, will usually have more than one voice to assign text to, you need some way to structure the data.

 

In a spreadsheet you can use three columns: the first for an occasional time index (so you can see where in the audio a section of text occurs), the second for the name of the speaker, and the third, widest one for the text. While you can use a table to do the same thing in Word, spreadsheets will auto-complete the names, making things a bit faster. However, for a one-on-one interview, it’s easy to just use Q: / A: formatting for each speaker, and put periodic time stamps in brackets at the top of each page.

 

Second, record non-verbal data in a consistent way, usually in square brackets. For example [hesitates], [laughter], [bangs fist on table], or even when [coffee is delivered]. You may choose to use italics or bold type to show when someone puts emphasis on a word, but choose one or the other and be consistent.

 

Next, consider your system for indicating pauses. Usually a short pause is represented by three dots ‘…’ Anything longer is recorded in square brackets and roughly timed [5 second pause]. These pauses can show hesitation in the participant to answer a difficult question, and long pauses may have special meaning. There is actually a whole article on the importance of silences by Poland and Pederson (1998).

 

When you are transcribing, you also need to decide on the level of detail. Will you record every um, er, and stutter? In verbal speech these are surprisingly common. Most qualitative research does want this level of detail, but it is obviously more time-consuming to type. You’ll often have corrections in the speech as well, commonly something like “I’ve… I’ll never say that ag... any more”. Do you include the first self-correction? It’s clear in the audio that the participant was going to say ‘again’ but corrected themselves to ‘any more’ – should you record this? Decide on the level of detail early on, and be consistent.
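
Putting these conventions together, a short excerpt might look something like this (illustrative only – the exact style is up to you, as long as it is consistent):

    [00:12:30]
    Q: And how did you feel when you got the results?
    A: I was… I was pretty shocked, to be honest [laughs]. I didn’t er…
       I didn’t know what to say. [5 second pause] It took a while to
       sink in.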

 

Sometimes people can go completely off topic, or there will be a section in the audio where you were complaining about the traffic, ordering coffee, or a phone call interrupted things. If you decide it’s not relevant to capture, just indicate with time markings what happened in square brackets: [cup smashed on the floor, 5min to clear up].

 

Once you are done with an interview, it’s a good idea to listen back to it, reading through the transcript and correcting any mistakes. The first few times, you will be surprised at how often you swapped a few words, or made strange typos.

 

 

So how long will it all take?

Starting out with all this can be daunting, especially if you have a large number of interviews to transcribe. A good rule of thumb is that transcribing an interview verbatim will take between 3 and 6 times as long as the audio. So for an hour of recording, it could take as little as three hours, or as much as six, to type up.

 

This sounds horrifying, and it is. I’m quite a fast typist, and have done quite a bit of transcription before, but I average between 3x and 4x the audio time. If you are slow at typing, need to pause the audio a lot, or have to put in a lot of extra descriptive detail, it can take a lot longer. The tips below should help you get towards the 3x benchmark, but it’s worth planning out your time a little before you begin.

 

If you have twenty interviews, each lasting on average 1 hour, you should probably plan for at least 60 hours of transcription time (20 hours of audio at the 3x benchmark). That’s nearly nine working days – roughly two weeks – at a standard 9-5 work day. I don’t say this to frighten you, just to mentally acclimatise you to the task ahead!

 

It’s also worth noting that transcription is very intensive work. You will be frantically typing as fast as you can, and it requires extreme mental concentration to listen and type simultaneously, while also watching for errors and fixing typos. I don’t think most people can do more than a two or three hour session at a time without going a little crazy! So you need to plan in some breaks, or at least some different, non-typing work.

 

If this sounds insurmountable, don’t panic. Just spread out the work, especially if you can do the transcripts after each interview, instead of in one huge batch. This is generally better since you can review one interview before you do the next one, giving you a chance to change how you ask questions and cover any gaps. Transcription can also be quite engrossing (since you can’t possibly do anything else at the same time), and it’s nice to see the hours ticking off.

 

 

 

So how can you make this faster?

You need to set up your computer (or laptop) to be a professional transcribing station, where you can hear the audio, start and stop it easily, and type comfortably for a long period of time.

 

Even if you type really fast, you won’t be able to keep up with the speed that people speak, meaning you will have to frequently stop and start the audio to catch up. Most professionals will use a ‘foot-pedal’ to do this, so that they don’t have to stop typing, come out of the word-processing software and pause an audio player. Even if you are playing audio from a dictaphone next to you, reaching away from the keyboard to press its stop and start buttons and then coming back to type again quickly becomes tedious.

 

A foot-pedal lets you start and stop the audio by tapping with your foot (or toe) and often has additional buttons to rewind a little (very useful) or fast-forward through the audio. Now, these cost around £30/$40 or more, but can be a worthwhile investment. However, it’s also worth checking to see if you can borrow one from a colleague, or even if your department or library has one for hire.

 

But if you are a cheapskate like me, there are other ways to do this. Did you know that you can have two or more keyboards attached to a computer, and they will both work? An extra keyboard (with a USB connector) can cost as little as £10/$15 if you don’t already have a spare lying around, and can be plugged into a laptop as well. Put it on the floor, and you can set up one of its keys as a ‘global shortcut’ in an audio player like VLC. Here’s a forum detailing how to set up a certain key so that it will start and stop the audio even if you are typing in another programme. Then just tap your chosen key with your toe to start and stop! Even if you only use one keyboard, you can set a shortcut in VLC (for example Alt+1), and every time you press that combination it will play or pause the audio, even if VLC is hidden.

 

There’s another advantage to using VLC: it can slow down your recordings as they are played back! Once your audio is playing, click on the Playback menu, then Speed, and change to Slower – then listen as your participants magically start talking like sleepy drunks! This helps me more than anything, because I can slow the speech to a level where I can type constantly without falling behind. This method does warp the speech, and slowing it too much can make it difficult to understand. However, the less you have to pause and stop the audio to catch up with your typing, the faster your transcription will go.

 

You can also do this with audio software like Audacity. Here, import your audio file, then click on Effect, and Change Tempo. Drag the slider to the left to slow down the speech (try a 20%–50% reduction) without changing the ‘pitch’, so everyone doesn’t end up sounding like Barry White. You can then save the file at your desired speed, and the quality can be a little better than the live speed changes in VLC.

 

General tips for good typing can help too. Watch the screen as you type, not your fingers, so that you can quickly pick up on mistakes. Learn to use all your fingers to type, don’t just ‘hunt and peck’ - a quick typing tutorial might save you hours in the long run if you don’t do this already.

 

Last of all, consider your posture. I’m serious! If you are going to be hunched up and typing for days and days, bad posture is going to make you ache and get stressed. Make sure your desk and chair are the right height for you, try using a proper keyboard if working from a laptop (or at least prop up the laptop to a good angle). Make sure the lighting is good, there is no screen glare, and use a foot rest if this helps the position of your back. Scrunched up on a sofa with a laptop in your lap for 60 hours is a great way to get cramp, back-ache and RSI. Try and take a break at least every half an hour: get up and stretch, especially your hands and arms.

 

So, you have your beautiful and detailed transcripts? Now you can bring them into Quirkos to analyse them! Quirkos is ideal for students doing their first qualitative analysis project, as it makes coding and analysis of text visual, colourful and easy to learn. There’s a free trial on our website, and you can bring in data from lots of different sources to work with.

 

Sampling considerations in qualitative research

sampling crowd image by https://www.flickr.com/photos/jamescridland/613445810/in/photostream/

 

Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research, but it is a huge and complicated issue. There’s a great chapter ‘Designing and Selecting Samples’ in the book Qualitative Research Practice (Ritchie et al 2013) which goes over many of these methods in detail.

 

Your research questions and methodological approach (e.g. grounded theory) will guide you to the right sampling methods for your study – there is never a one-size-fits-all approach in qualitative research! For more detail on this, especially on the importance of culturally embedded sampling, there is a well-cited article by Luborsky and Rubinstein (1995). But it’s also worth talking to colleagues, supervisors and peers to get advice and feedback on your proposals.

 

Marshall (1996) briefly describes three different approaches to qualitative sampling: judgement/purposeful sampling, theoretical sampling and convenience sampling.

 

But before you choose any approach, you need to decide what you are trying to achieve with your sampling. Do you have a specific group of people that you need to have in your study, or should it be representative of the general population? Are you trying to discover something about a niche, or something that is generalisable to everyone? A lot of qualitative research is about a specific group of people, and Marshall notes:
“This is a more intellectual strategy than the simple demographic stratification of epidemiological studies, though age, gender and social class might be important variables. If the subjects are known to the researcher, they may be stratified according to known public attitudes or beliefs.”

 

Broadly speaking, convenience, judgement and theoretical sampling can all be seen as purposeful – deliberately selecting people of interest in some way. However, randomly selecting people from a large population is still a desirable approach in some qualitative research. Because qualitative studies tend to have a small sample size, due to the in-depth nature of engagement with each participant, this can have an impact if you want a representative sample: if you randomly select 15 people, you might by chance end up with more women than men, or a younger-than-desired sample. That is why qualitative studies may use a little purposeful sampling on top, finding people to make sure the final profile matches the desired sampling frame. For much more on this, check out the last blog article on recruitment.

 

Sample size will often also depend on your conceptual approach: if you are testing a prior hypothesis, you may be able to get away with a smaller sample size, while a grounded theory approach aiming to develop new insights might need a larger group of respondents to test that the findings are applicable. Here, you are likely to take a ‘theoretical sampling’ approach (Glaser and Strauss 1967), where you specifically choose people whose experiences would contribute to a theoretical construct. This is often iterative, in that after reviewing the data (for theoretical insights) the researcher goes out again to find other participants the model suggests might be of interest.

 

The convenience sampling approach, which Marshall mentions as the ‘least rigorous’ technique, is where researchers target the most ‘easily accessible’ respondents. This could even be friends, family or faculty. This approach can rarely be methodologically justified, and is unlikely to provide a representative sample. However, it is endemic in many fields, especially psychology, where researchers tend to turn to easily accessible psychology students for experiments: skewing the results towards white, rich, well-educated Western students.

 

Now we turn to snowball sampling (Goodman 1961). This is different from purposeful sampling in that new respondents are suggested by existing ones. In general, it is most suited to work with ‘marginalised or hard-to-reach’ populations, where respondents are not often forthcoming (Sadler et al 2010). For example, people may not be open about their drug use, political views or living with stigmatising conditions, yet often form closely connected networks. Thus, by gaining trust with one person in the group, others can be recommended to the researcher. However, it is important to note the limitations of this approach. There is a risk of systematic bias: if the first person you recruit is not representative in some way, their referrals may not be either. So you may be looking at people living with HIV/AIDS, and recruit through a support group that is formed entirely of men: they are unlikely to suggest women for the study.

 

For these reasons there are limits to the generalisability and appropriateness of snowball sampling for most subjects of inquiry, and it should not be taken as an easy fix. Yet while many practitioners point out the limitations of snowball sampling, it can be very well suited to certain kinds of social and action research: this article by Noy (2008) outlines some of the potential benefits for power relations and for studying social networks.

 

Finally, there is the issue of sample size and ‘saturation’. This is when enough data has been collected to confidently answer the research questions. For a lot of qualitative research this means data that has been coded as well as collected, especially if using some variant of grounded theory. However, saturation is often a source of anxiety for researchers: see for example the amusingly titled article “Are We There Yet?” by Fusch and Ness (2015). Unlike quantitative studies, where a sample size can be determined by the desired effect size and confidence interval of a chosen statistical test, it is more difficult to put an exact figure on the right number of participant responses. This is especially because the responses are themselves qualitative, not just numbers in a list: one response may be much more data-rich than another.

 

While a general rule of thumb would indicate there is no harm in collecting more data than is strictly necessary, there is always a practical limitation, especially in resource and time constrained post-graduate studies. It can also be more difficult to recruit than anticipated, and many projects working with very specific or hard-to-reach groups can struggle to find a large enough sample size. This is not always a disaster, but may require a re-examination of the research questions, to see what insights and conclusions are still obtainable.

 

Generally, researchers should have a target sample size and definition of what data saturation will look like for their project before they begin sampling and recruitment. Don’t forget that qualitative case studies may only include one respondent or data point, and in some situations that can be appropriate. However, getting the sampling approach and sample size right is something that comes with experience, advice and practice.

 

As I always seem to be saying in this blog, it’s also worth considering the intended audience for your research outputs. If you want to publish in a certain journal or academic discipline, it may not be responsive to research based on qualitative methods with small or ‘non-representative’ samples. Silverman (2013 p424) mentions this explicitly with examples of students who had publications rejected for these reasons.

 

So as ever, plan ahead for what you want to achieve with your research project and the questions you want to answer, and work backwards to choose the appropriate methodology, methods and sample for your work. Also check the companion article about recruitment; most of these issues need to be considered in tandem.

 

Once you have your data, Quirkos can be a great way to analyse it, whether your sample has one respondent or dozens! There is a free trial and example data sets so you can see for yourself if it suits your way of working, and much more information in these pages. We also have a newly relaunched forum, with specific sections on qualitative methodology, if you want to ask questions or comment on anything raised in this blog series.

 

 

Qualitative evidence for SANDS Lothians

qualitative charity research - image by cchana

Charities and third-sector organisations are often sitting on lots of very useful qualitative evidence, and I have already written a short blog post on some common sources of data that can support funding applications, evaluations and impact assessments. We wanted to do a ‘qualitative case study’: to work with one local charity to explore what qualitative evidence they already had and what they could collect, and to use Quirkos to help create some reports and impact assessments.

 

SANDS Lothians is an Edinburgh-based charity that provides long-term counselling and support for families who have experienced bereavement through the loss of a baby at or around birth. They approached us after seeing advertisements for one of our local qualitative training workshops.


Director Nicola Welsh takes up the story. “During my first six months in post, I could see there was much evidence to highlight the value of our work but was struggling to pull this together in some order which was presentable to others. Through working with Daniel and Kristin we were able to start to structure what we were looking to highlight and with their help begin to organise our information so it was available to share with others. Quirkos allowed us to pull information from service users, stats and studies to present this in a professional document. They gave us the confidence to ask our users about their experiences and encouraged us to record all the services we offered to allow others at a glance to get a feel for what we provide.”

 

First of all, we discussed what would be most useful to the organisation. Since they were in discussion with major partners about possible funding, an impact assessment would be valuable in this process.

 

They also identified concerns from their users about a specific issue – prescriptions for anti-depressants – and wanted to investigate this further. It was important to identify the audience SANDS Lothians wanted to reach with this information: in this case, GPs and other health professionals. This set the format of a possible output: a short briefing paper on the different types of support that parents experiencing bereavement could be referred to.

 

We started by doing an ‘evidence assessment’ (or ‘evidence audit’, as a previous blog post describes), looking for evidence of impact that SANDS Lothians already had. Some of this was quantitative, such as the number of phone calls received each month. As they had recently started counting these calls, it was valuable evidence of people using their support and guidance services. In the future they will be able to see trends in the data, such as an increase in demand or seasonal variation, which will help them plan better.

 

They already had national reports from NHS Scotland on Infant Mortality, and some data from the local health board. But we quickly identified a need for supportive scientific literature that would help them make a better case for extending their counselling services. One partner had expressed concerns that counselling was ineffective, but we found a number of studies that showed counselling to be beneficial for this kind of bereavement. Finding these journal articles for them helped provide legitimacy to the approach detailed in the impact assessment.

 

Another simple step was to create a list of all the different services that SANDS Lothians provides. This had not been done before, but it quickly showed how many different kinds of support were offered, and the diversity of their work. This is also powerful information for potential funders or partners, and useful to be able to present quickly.

 

Finally, we did a mini qualitative research project!

 

A post on their Facebook page asking people to share experiences of being prescribed anti-depressants after bereavement got more than 20 responses. While most of these were very short, they did give us valuable and interesting information: for example, not everyone whose GP had suggested anti-depressants saw this as negative, and some talked about how these had helped them at a difficult time.

 

SANDS Lothians already had amazing and detailed written testimonials and stories from service users, so I was able to combine the testimonials and the comments from the Facebook feed into one Quirkos project, and draw on them all as needed.

 

Using Quirkos to pull out the different responses to anti-depressants showed that there were similar numbers of positive and negative responses, and also highlighted parents’ worries we had not considered, such as the effect of medication when trying to conceive again. This is the power of a qualitative approach: by asking open questions, we got responses about issues we wouldn’t have asked about in a direct survey.

 

quirkos bubble cluster view

 

When writing up the report, Quirkos made it quick and easy to pull out supportive quotes. As I had previously gone through and coded the text, I could click on the counselling bubble, immediately see relevant comments, and copy and paste them into the report. Now SANDS Lothians also has an organised database of comments on how their counselling services helped clients, which they can draw on at any time.

 

Nicola explains how they have used the research outputs. “The impact assessment and white paper has been extremely valuable to our work. This has been shared with senior NHS Lothian staff regarding possible future partnership working.  I have also shared this information with the Scottish Government following the Bonomy recommendations. The recommendations highlight the need for clear pathways with outside charities who are able to assist bereaved parents. I was able to forward our papers to show our current support and illustrate the position Lothians are in regarding the opportunity to have excellent bereavement care following the loss of a baby. It strengthened the work we do and the testimonials give real evidence of the need for this care. 

 

“I have also given our papers out at recent talks with community midwives and charge midwives in West Lothian and Royal Infirmary Edinburgh. Cecilia has attached the papers to grant applications which again strengthens our applications and validates our work.”

 

Most importantly, SANDS Lothians now have a framework to keep collecting data: “We will continue to record all data and update our papers for 2016. Following our work with Quirkos, we will start to collate case studies which gives real evidence for our work and the experiences of parents. Our next step would be to look specifically at our counselling service and its value.”

 

“The work with Quirkos was extremely helpful. In very small charities, it is difficult to always have the skills to be an expert in all areas and find the time to train. We are extremely grateful to Daniel and Kristin who generously volunteered their time to assist us to produce this work. I would highly recommend them to any business or third sector organisation who need assistance in producing qualitative research.  We have gained confidence as a charity from our journey with Quirkos and would most definitely consider working with them again in the future.”

 

It was an incredible and emotional experience to work with Nicola and Cecilia at SANDS Lothians on this small project, and I am so grateful to them for inviting us in to help, and for sharing so much. If you want any more information about the services they offer, or need to speak to someone about losing a baby through stillbirth, miscarriage or soon after birth, all their contact details are available on their website: http://www.sands-lothians.org.uk

 

If you want any more information about Quirkos and a qualitative approach, feel free to contact us directly, or there is much more information on our website. Download a free trial, or read more about adopting a qualitative approach.

 

 

Recruitment for qualitative research


 

You’ll find a lot of information and debate about sampling issues in qualitative research: discussions over ‘random’ or ‘purposeful’ sampling, the merits and pitfalls of ubiquitous ‘snowball’ sampling, and unending questions about sample size and saturation. I’m actually going to address most of these in the next blog post, but wanted to paradoxically start by looking at recruitment. What’s the difference, and why think about recruitment strategies before sampling?

 

Well, I’d argue that the two have to be considered together, but recruitment tends to be a bit of an afterthought, and is so rarely detailed in journal articles (Arcury and Quandt 1999) that I feel it merits its own post. In fact, there is a great ONS document about sampling, but it only has one sentence of advice on respondent recruitment: “The method of respondent recruitment and its effectiveness is also an important part of the sampling strategy”. Indeed!

 

When we talk about recruitment, we are considering the way we actually go out and ask people to take part in a research study. The sample frame is how we choose which groups of people, and how many, to approach; but there are huge practical problems in implementing our chosen sampling method, and these can be tackled by writing a comprehensive recruitment strategy.

 

This might sound a bit dull, but it’s actually kind of fun – and the creation of such a strategy for your qualitative research project is a really good thought exercise, helping you plan and later acknowledge shortcomings in what actually happened. Essentially, think of this process as how you will market and advertise your research project to potential participants.

 

Sometimes there is a shifting dynamic between sampling and recruitment. Say we are doing random sampling from numbers in a phone book, a classic ‘random’ technique. The sampling process is the selection of x phone numbers to call. The recruitment is actually calling and asking someone to take part in the research. Now, obviously not everyone is going to answer the phone, or want to answer any questions. So you then have a pool of recruited people, which you might want to sample from again to make a representative sample. If you found out that everyone who answered the phone was retired and over 60, but you wanted a wider age profile, you would need to re-sample from your recruited pool. The sketch below illustrates this two-stage process.
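
 

Here is a minimal sketch of that two-stage process in Python, purely illustrative: the phone numbers, the 30% consent rate and the target of 30 participants are all invented for the example.

import random

# Stage 1 (sampling): select x phone numbers from the 'phone book'.
phone_book = [f"0131-555-{n:04d}" for n in range(5000)]
sampled = random.sample(phone_book, 200)

# Stage 2 (recruitment): not everyone answers or consents. Here we
# pretend that roughly 30% of calls yield a willing participant.
recruited = [number for number in sampled if random.random() < 0.3]

# If the recruited pool turns out to be skewed (e.g. mostly retired
# people), you may need to sample from it again for your final sample.
final_sample = random.sample(recruited, min(30, len(recruited)))
print(len(sampled), len(recruited), len(final_sample))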

 

But let’s think about this again. Why could it be that everyone who consented to take part in our study was retired? Well, we used numbers from the phone book, and called during the day. What effect might this have? Numbers in the phone book tend to belong to people who have been resident in one place for a long time (many students and young people only have mobiles), and if we call during the day, we will not get answers from most people who work. This illustrates the importance of carefully considering the recruitment strategy: although we chose a good random sampling technique, our strategy of making phone calls during the day has already scuppered our plans.

 

How about another example: recruitment through a poster advertising the study. Many qualitative studies aren’t looking for a very large number of respondents, but are targeting a very specific sample. In this example, maybe it’s people who have visited their doctor in the last 6 months. It sounds like a poster in the waiting room of the local GP surgery would work well. What are the obvious limitations here?

 


 

First of all, people who see the poster will probably have visited the GP (since they are in that location); however, it would actually only recruit people who are currently receiving treatment. People who had been in the previous 6 months but didn’t need to go back again, or who had such a horrible experience they never returned, will not see our poster and don’t have a chance to be recruited. Both of these will skew the sample of respondents in different ways.

 

In some ways this is inevitable. Whichever sampling technique and recruitment strategy we adopt, some people will not hear about the study or want to take part. However, it is important to be conscious not just of who is being sampled, but of who is left out, and the likely effect this has on our sample and consequently our findings. For example, our approach here probably means we oversample people who have chronic conditions requiring frequent treatment, and undersample people who hate their doctor. It’s not necessarily a disaster, but just as with making a reflexivity statement about our own biases, we must be forthright about the sampling limitations and consider them when analysing and writing conclusions.

 

For these reasons, it’s often desirable to have multiple and complementary recruitment strategies, so that one makes up for deficiencies in another. So a poster in the waiting room is great, but maybe we can also get a list of everyone registered at the surgery, so we can contact people not currently seeking treatment. This would be wonderful, but in the real world we might hit problems: the surgery might not be interested in the study, might not be able to release that information for confidentiality reasons, and such a process would require a huge amount of extra time.

 

That’s why I see a recruitment strategy as a practical battle plan that tries to consider the limitations and realities of engaging with the real world. You can also start considering seemingly small things that can have a huge impact on successful recruitment:


• The design of the poster
• The wording of invitation letters
• The time of day you make contact (not just by phone, but don’t e-mail first thing on a Monday morning!)
• Any incentives, and how appropriate they are
• Data protection issues
• Winning the support of ‘gatekeepers’ who control access to your sample
• Timescales
• Cost (especially if you are printing hundreds of letters or flyers)
• Time and effort required to find each respondent
• And many more…


For a more detailed discussion, there’s a great article by Newington and Metcalfe (2014) specifically on influencing factors for recruitment in qualitative research.

 

Finally, I want to reiterate the importance of trying to record who has not been recruited and why. If you are directly contacting a few dozen respondents by phone or e-mail, this is easy to keep track of: you know exactly who has declined or not responded, likely reasons why and probably some demographic details.

 

However, think about the poster example. Here, we will be lucky if 1% of the people who come through the surgery contact us to take part in the study. Think through these classic marketing stages: they have to see the poster, think it’s relevant to them, want to engage, and then reach out to contact you. There will be huge losses at each of those stages, and you don’t know who these people are or why they didn’t take part. This makes it very difficult in this kind of study to know the bias of your final sample: we can guess (busy people, those who aren’t interested in research), but we don’t know for sure.
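
 

To see how quickly those losses compound, here is a tiny illustrative calculation in Python. The footfall figure and the conversion rate at each stage are completely invented, but multiplying plausible rates together lands at roughly the 1% mentioned above.

footfall = 2000  # people passing through the surgery (invented figure)
stages = {
    "see the poster": 0.40,
    "think it's relevant": 0.25,
    "want to engage": 0.30,
    "actually make contact": 0.35,
}

remaining = float(footfall)
for stage, rate in stages.items():
    remaining *= rate
    print(f"{stage}: roughly {remaining:.0f} people left")

# 0.40 * 0.25 * 0.30 * 0.35 is about 0.0105, i.e. around 1% overall.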

 

Response rates vary greatly by method: for postal invitations 25% is really good, direct contact is much higher, and posters and flyers are usually below 10%. However, you can improve these rates with careful planning, by considering carefully who will engage and why, and by making taking part a good prospect: describe the aims of the research, compensate people for their time, and explain the proposed benefits. But you also need to take an ethical approach: don’t coerce people, or make promises you can’t keep. Check out the recruitment guidelines drawn up by the Association for Qualitative Research.
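
 

Those rough rates also let you plan backwards from a target sample size. A minimal sketch, using the ballpark figures quoted above (your own rates will of course differ):

import math

def contacts_needed(target_n: int, response_rate: float) -> int:
    """How many invitations are needed to end up with target_n participants."""
    return math.ceil(target_n / response_rate)

print(contacts_needed(20, 0.25))  # postal, at a (very good) 25%: 80 letters
print(contacts_needed(20, 0.10))  # posters/flyers at 10%: reach 200 people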

 

My personal experience tells me that most people who engage with qualitative research are lovely! They want to help if they can, and love an opportunity to talk about themselves and have their voice heard. Just be aware of what kinds of people end up being your respondents, and make sure you acknowledge the possibility of hidden voices from people who don’t engage for their own reasons.

 

Once you get to your analysis, don't forget to try Quirkos for free, and see how our easy-to-use software can make a real qualitative difference to your research project! To keep up to date with new blog articles on this, and other qualitative research topics, follow our Twitter feed: twitter.com/quirkossoftware.

 

 

Designing a semi-structured interview guide for qualitative interviews


 

Interviews are a frequently used research method in qualitative studies. You will see dozens of papers that state something like “We conducted n in-depth semi-structured interviews with key informants”. But what does this actually mean? What counts as in-depth? How structured are semi-structured interviews?

 

This post is hosted by Quirkos, simple and affordable software for qualitative analysis.
Download a 1 month free trial!

 

The term “in-depth” is defined fairly vaguely in the literature: it generally means a one-to-one interview on one general topic, which is covered in detail. Usually these qualitative interviews last about an hour, although sometimes much longer. It sounds like two people having a discussion, but there are differences in the power dynamics and the end goal: for the classic sociologist Burgess (2002) these are “conversations with a purpose”.

 

Qualitative interviews generally differ from quantitative survey-based questions in that they are looking for a more detailed and nuanced response. They also acknowledge there is no ‘one-size-fits-all’, especially when asking someone to recall a personal narrative about their experiences. Instead of a fixed “research protocol” that asks the same question of each respondent, most interviewers adopt a more flexible approach. However, there is still a need “...to ensure that the same general areas of information are collected from each interviewee; this provides more focus than the conversational approach, but still allows a degree of freedom and adaptability in getting information from the interviewee” – MacNamara (2009).

 

Turner (2010) (who coincidentally shares the same name as me) describes three different types of qualitative interview: Informal Conversation, General Interview Guide, and Standardised Open-Ended. These can be seen as a scale from least to most structured, and we are going to focus on the ‘interview guide’ approach, which takes a middle ground.

 

An interview guide is like a cheat-sheet for the interviewer – it contains a list of questions and topic areas that should be covered in the interview. However, these are not to be read verbatim and in order; in fact, they are more like an aide-mémoire. “Usually the interviewer will have a prepared set of questions but these are only used as a guide, and departures from the guidelines are not seen as a problem but are often encouraged” – Silverman (2013). That way, the interviewer can add extra questions about an unexpected but relevant area that emerges, and sections that don’t apply to the participant can be skipped.

 

So what do these look like, and how does one go about writing a suitable semi-structured interview guide? Unfortunately, it is rare for researchers to share the interview guide in journal articles, and it’s difficult to find good examples on the internet. Basically, they look like a list of short questions and follow-on prompts, grouped by topic; there will generally be about a dozen questions in total. I’ve written my fair share of interview guides for qualitative research projects over the years, either on my own or in collaboration with colleagues, so I’m happy to share some tips.

 


Questions should answer your research questions
Your research project should have one or several main research questions, and these should guide the topics covered in the interviews, so that the answers you collect can hopefully answer the research questions. However, you can’t just ask your respondents “Can the experience of male My Little Pony fans be described through the lens of Derridean deconstruction?”. You will need to break down your research into questions that have meaning for the participant and that they can engage with. The questions should be fairly informal and jargon-free (unless that person is an expert in that field of jargon), open-ended (so they can’t be easily answered with a yes or no), and non-leading (so that respondents aren’t pushed towards a certain interpretation).

 

 

Link to your proposed analytical approach
The questions on your guide should also be constructed in such a way that they will work well for your proposed method of analysis – which, again, you should already have decided. If you are doing narrative analysis, questions should encourage respondents to tell their story and history. In Interpretative Phenomenological Analysis you may want to ask in more detail about people’s interpretations of their experiences. Think how you will want to analyse, compare and write up your research, and make sure that the questioning style fits your own approach.

 

 

Specific ‘Why’ and prompt questions
It is very rare in semi-structured interviews that you will ask one question, get a response, and then move on to the next topic. Firstly, you will need to provide some structure for the participant, so they are not expected (or encouraged) to recite their whole life story. But at another level, you will usually want to probe more about specific issues or conditions. That is where the flexible approach comes in. Someone might reveal something that you are interested in, and that is relevant to the research project. So ask more! It’s often useful in the guide to list a series of prompt words that remind you of more areas of detail that might be covered. For example, the question “When did you first visit the doctor?” might be annotated with optional prompts such as “Why did you go then?”, “Were you afraid?” or “Did anyone go with you?”. Prompt words might reduce this to ‘Why THEN / afraid / with someone’.
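
 

If it helps to visualise the topic, question and prompt hierarchy, here is a purely illustrative sketch of that doctor example written out as a Python data structure. A real guide is of course just a sheet of paper, and the questions and prompt words here are only paraphrased examples from this post, not a real guide.

# One entry per topic; each question carries its optional prompt words.
interview_guide = {
    "Visiting the doctor": [
        ("When did you first visit the doctor?",
         ["why THEN", "afraid", "with someone"]),
        ("Did you feel the doctor supported you?",
         ["isolated", "who helped"]),
    ],
    # ...further topics, each with a handful of questions
}

for topic, questions in interview_guide.items():
    print(topic)
    for question, prompts in questions:
        print(f"  {question}  [{' / '.join(prompts)}]")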

 

 

Be flexible with order
Generally, an interview guide will be grouped into several topics, each with a few questions. One of the most difficult skills is how to segue from one topic or question to the next, while still seeming like a normal conversation. The best way to manage this is to make sure that you are always listening to the interviewee, and thinking at the same time about how what they are saying links to other discussion topics. If someone starts talking about how they felt isolated visiting the doctor, and one of your topics is about their experience with their doctor, you can ask ‘Did your doctor make you feel less isolated?’. You might then be asking about topic 4 when you are only on topic 1, but you now have a logical link to ask the more general written question ‘Did you feel the doctor supported you?’. The ability to flow from topic to topic as the conversation evolves (while still covering everything on the interview guide) is tricky, and requires you to:

 

 

Know your guide backwards - literally
I almost never went into an interview without a printed copy of the interview guide in front of me, but it was kind of like Dumbo’s magic feather: it made me feel safe, but I didn’t really need it. You should know everything on your interview guide off by heart, and in any sequence. Since things will crop up in unpredictable ways, you should be comfortable asking questions in different orders to help the conversational flow. Still, it’s always good to have the interview guide in front of you: it lets you tick off questions as they are asked (so you can see what hasn’t been covered), gives you space to write notes, and can also be less intimidating for the interviewee, as you can look at your notes occasionally rather than staring them in the eye all the time.

 


Try for natural conversation
Legard, Keegan and Ward (2003) note that “Although a good in-depth interview will appear naturalistic, it will bear little resemblance to an everyday conversation”. You will usually find that the most honest and rich responses come from relaxed, non-combative discussions. Make the first question easy, to ease the participant into the interview, and get them used to the question-answer format. But don’t let it feel like a tennis match, where you are always asking the questions. If they ask something of you, reply! Don’t sit in silence: nod, say ‘Yes’, or ‘Of course’ every now and then, to show you are listening and empathising like a normal human being. Yet do be careful about sharing your own potentially leading opinions, and making the discussion about yourself.

 

 

Discuss with your research team / supervisors
You should take the time to get feedback and suggestions from peers, be they other people on your research project or your PhD supervisors. This means preparing the interview guide well in advance of your first interview, leaving time for discussion and revisions. Seasoned interviewers will have tips about wording and structuring questions, and even the most experienced researcher can benefit from a second opinion. Getting it right at this stage is very important: it’s no good discovering after you’ve done all your interviews that you didn’t ask about something important.

 

 

Adapting the guide
While these are semi-structured interviews, in general you will usually want to cover the same general areas every time you do an interview, not least so that there is some point of comparison. It’s also common to do a first few interviews and realise that you are not asking about a critical area, or that some new potential insight is emerging (especially if you are taking a grounded theory approach). In qualitative research this need not be a disaster (if this flexibility is methodologically appropriate), and it is possible to revise your interview guide. However, if you do end up making significant revisions, make sure you keep both versions, and a note of which respondents were interviewed with each version of the guide.

 

 

Test the timing
Inevitably, you will not have exactly the same amount of time for each interview, and respondents will differ in how fast they talk and how often they go off-topic! Make sure you have enough questions to get the detail you need, but also have ‘lower priority’ questions you can drop if things are taking too long. Test the timing of your interview guide with a few participants, or even friends, before you settle on it, and revise as necessary. Try to get your interview guide down to one side of paper at most: it is a prompt, not an encyclopaedia!

 


Hopefully these points will help demystify qualitative interview guides, and help you craft a useful tool to shape your semi-structured interviews. I’d also caution that semi-structured interviewing is a difficult skill, and benefits greatly from practice. I have sat in with many new researchers who tend to fall back on the interview guide too much, and read it verbatim. This generally leads to closed-off responses, and missed opportunities to further explore interesting revelations. Treat your interview guide as a guide, not a gospel, and be flexible. It’s extra hard, because you have to juggle asking questions, listening, choosing the next question, keeping the research topic in your head and making sure everything is covered – but when you do it right, you’ll get rich research data that you will actually be excited to go home and analyse.

 

 

Don’t forget to check out some of the references above, as well as the myriad of excellent articles and textbooks on qualitative interviews. There’s also Quirkos itself, software to help you make the research process engaging and visual, with a free trial to download of this innovative tool. We also have a rapidly growing series of blog post articles on qualitative interviews. These now include 10 tips for qualitative interviewing, transcribing qualitative interviews and focus groups, and how to make sure you get good recordings. Our blog is updated with articles like this every week, and you can hear about it first by following our Twitter feed @quirkossoftware.