Finding and using secondary qualitative data, with some cautions

secondary data analysis

 

Many researchers instinctively plan to collect and create new data when starting a research project. However, this is not always needed, and even if you do end up collecting your own data, looking for sources that are already out there can help prevent redundancy and improve your conceptualisation of a project. Broadly, you can think of secondary data as falling into two types: sources collected previously for specific research projects, and data 'scraped' from other sources, such as Hansard transcripts or tweets.

 

Other data sources include policy documents, news articles, social media posts, other research articles, and repositories of qualitative data collected by other researchers, which might be interviews, focus groups, diaries, videos or other sources. Secondary analysis of qualitative data is not very common, since the data tends to be collected with a very narrow research interest, and at a depth that makes anonymisation and aggregation difficult. However, there is a huge amount of rich data that could be used again for other subjects: see Heaton (2008) for a general overview, and Irwin (2013) for discussion of some of the ethical issues.

 

The advantage of using qualitative data analysis software is that you can keep many disparate sources of evidence together in one place. If you have an article positing a particular theory, you can quickly cross-code supporting evidence with sections of text from that article, and show where your data does or does not support the literature. Examining social policy? Put the law or government guidelines into your project file, and cross-reference statements from participants that challenge or support these dictates in practice. The best research does not exist in isolation: it must engage with both the existing literature and real policy to make an impact.

 


Data from other sources has the advantage that the researcher doesn't have to spend time and resources on recruitment and collection. However, the corresponding disadvantage is that the data was not specifically obtained to meet your particular research questions or needs. For example, data from Twitter might give valuable insights into people's political views. But statements that people make do not always equate with their views (this is true of directly collected research methods as well): someone may make a controversial statement just to get more followers, or suppress their true beliefs if they think that expressing them will be unpopular.

 

Each website has its own culture as well, which can affect what people share, and in how much detail. A paper by Hine (2012) shows that the popular UK forum 'Mumsnet' has particular attitudes that posters consider acceptable, and that posters are often looking for validation of their behaviour from others. Twitter and Facebook are no exception: each has its own style and norms of acceptable posting that a true internet ethnographer should understand well!

 

Even when using secondary data collected for a specific academic research project, the data might not be suitable for your needs. A great series of qualitative interviews about political views may seem a perfect fit for your research, but might not have asked a key question (for example about respondents' parents' beliefs), which renders the data unusable for your purpose. Additionally, it is usually impossible to identify respondents in secondary datasets to ask follow-up questions, since the data is anonymised. It's sometimes even difficult to see the original research questions and interview schedule, and so to find out what questions were asked and for what purpose.

 

But despite all this, it is usually a good idea to look for secondary sources. They might give you an insight into the area of study you hadn't considered, highlighting interesting issues that other research has picked up on. They might also reduce the amount of data you need to collect: if someone has done something similar in this area, you can design your data collection to target the relevant gaps, and build on what has been done before (theoretically, all research should do this to a certain degree).

 

I know it's something I keep reiterating, but it's really important to understand who your data represents: you need some kind of contextual or demographic data. This is sometimes difficult to find when using data gathered from social media, where people are often only given the option to state very basic details, such as gender, location or age, and many may not disclose even that. It can also be a pain to extract comments from social media posts in such a way that the identity of the poster stays attached to their posts; however, there are third-party tools that can help with this.

 

When writing up your research, you will also want to make explicit how you found and collected this source of data. For example, if you are searching Twitter for a particular hashtag or phrase, when did you run the search? If you run it the next day, or even the next minute, the results will be different. How far back did you include posts? What languages? Did you exclude any comments, especially ones that looked like spam or promotional posts? Think about making it replicable: what information would someone need to get the same data as you?
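One low-tech way to make this replicable is simply to record the search parameters alongside the data at the moment you run the search. As a minimal, hypothetical sketch (not tied to any particular platform's API, and with invented values), it could be as simple as:

```python
import json
from datetime import datetime, timezone

# Hypothetical record of how a social media search was run, kept alongside
# the dataset so that someone else could reconstruct the same search.
search_record = {
    "platform": "Twitter",                              # where the search was run
    "query": "#examplehashtag",                         # invented search term
    "run_at": datetime.now(timezone.utc).isoformat(),   # exact time the search was run
    "posts_from": "2016-01-01",                         # how far back posts were included
    "posts_to": "2016-06-30",
    "languages": ["en"],                                # which languages were kept
    "excluded": "obvious spam and promotional posts",   # exclusion criteria
}

with open("search_record.json", "w") as f:
    json.dump(search_record, f, indent=2)
```

Even a plain note in your project diary covering these fields will do; the point is that the details are captured at the time, not reconstructed from memory later.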

 

You should also try to be as comprehensive as possible. If you are examining newspaper articles for something that has changed over time (such as use of the phrase 'tactical warfare'), don't assume all your results will be online. While some projects have digitised newspaper archives from major titles, there are a lot of sources that are still print-only, or reside in special databases. You can get help with, and access to, these from national libraries such as the British Library.


There are growing repositories of open-access data, including qualitative datasets. A good place to start is the UK Data Service, even if you are outside the UK, as it contains links to a number of international stores of qualitative data. Note that you will generally have to register, or even gain approval, to access some datasets. This shouldn't put you off, but don't expect to always be able to access the data immediately, and plan to prepare a case for why you should be granted access. In the USA there is a dedicated qualitative repository, the Qualitative Data Repository (QDR), hosted by Syracuse University.

 

If you have found a research article based on interesting data that is not held in a public repository, it is worth contacting the authors anyway to see if they are able to share it. Research based on government funding increasingly comes with stipulations that the data should be made freely available, but this is still a fairly new requirement, and investigators from other projects may still be willing and able to grant access. However, authors can be protective of their data, and may not have acquired consent from participants in a way that allows them to share the data with third parties. This is something to consider when you do your own work: make sure that you are able to give back to the research community and share your own data in the future.

 


Finally, a note of caution about tailored results. Google, Facebook and other platforms do not show the same results in the same order to all people. Results are customised to what the platform thinks you will be interested in seeing, based on your own search history and its assumptions about your gender, location, ethnicity and political leanings. This 'filter bubble' will also affect the results that you get from social media (especially Facebook).

 

To get around this in a search, you can use a privacy-focused search engine (like DuckDuckGo), add &pws=0 to the end of a Google search, or use a browser in 'Private' or 'Incognito' mode. However, it's much more difficult to get neutral results from Facebook, so bear this in mind.
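For example, a Google search with the personalisation parameter mentioned above switched off, and the equivalent DuckDuckGo search, look something like this (the query itself is just an illustration):

```
https://www.google.com/search?q=secondary+qualitative+data&pws=0
https://duckduckgo.com/?q=secondary+qualitative+data
```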

 

Hopefully these tips have given you food for thought about the sources and limitations of secondary data. If you have any comments or suggestions, please share them in the forum. Don't forget that Quirkos has simple copy-and-paste source creation that lets you bring in secondary data from lots of different formats and internet feeds, and the visual interface makes coding and exploring it a breeze. Download a free trial from www.quirkos.com/get.html.

 

 

Developing and populating a qualitative coding framework in Quirkos

coding blog

 

In previous blog articles I've looked at some of the methodological considerations in developing a coding framework, including top-down and bottom-up approaches: whether you start with large, overarching (a priori) themes and break them down, or begin with smaller, simpler themes and gradually build up meanings and connections in an inductive approach. At some point this series also needs to cover the various approaches grouped together as grounded theory, but that will come in a future article.

 

For now, I want to leave the methodological and theoretical debates aside, and look purely at the mechanics of creating a coding framework in qualitative analysis software. While I'm going to describe the process using Quirkos as the example software, the fundamentals will apply even if you are using NVivo, MAXQDA, ATLAS.ti, Dedoose, or most of the other CAQDAS packages out there. It might help to follow this guide along in the software of your choice; you can download a free trial of Quirkos right here and get going in minutes.

 

First of all, a slightly guilty confession: I personally always plan out my themes on paper first. This might sound a bit hypocritical coming from someone who designs software for a living, but I find myself being a lot more creative on paper, and there's something about the physicality of scribbling all over a big sheet of paper that helps me think better. I do this a lot less now that Quirkos lets me move themes around the screen and group them by colour and topic, but for a big, complicated project it's normally where I start.

 

But the computer obviously allows you to create and manage hundreds of topics, and to rearrange and rename them (which is difficult to do on paper, even with pencil and eraser!). It also makes it easy to assign parts of your data to a topic, and to see all of the data associated with it. While paper notes can help you think through some of the likely topics in the study and connect them to your research questions, I would recommend moving to a QDA software package fairly early on in a project.

 

Obviously, whether you are taking an a priori or grounded approach will determine whether you create most of your themes before you start coding, or add to them as you go along. Either way, you will need to create your topics/categories/nodes/themes/bubbles, or whatever you want to call them. In Quirkos the themes are informally called 'Quirks', and are represented by default as coloured bubbles. You can drag these anywhere around the screen, change their colours, and their size increases every time you add some text to them. It's a neat way to get confirmation and feedback on your coding. In other software packages there will just be a number next to the list of themes showing how many coding events belong to each topic.

 


In Quirkos, there are actually three different ways to create a theme bubble. The most common is the large (+) button at the top left of the canvas area. This creates a new topic bubble in a random place with a random colour, and automatically opens the Properties dialogue for you to edit it. Here you can change the name, for example to 'Fish', and put in a longer description, 'Things that live in water and lay eggs', so that the definition is clear to yourself and others. You can also choose the colour, from some 16 million options available. There is also the option to set a 'level' for the Quirk bubble, which is a way to group intersecting themes so that one topic can belong to multiple groups. For example, you could create a level called 'Things in the sea' that includes Fish, Dolphins and Ships, and another level called 'Living things' that has Fish, Dolphins and Lions. In Quirkos, you can change any of these properties at any time by right-clicking on the appropriate bubble.

 

quirkos qualitative properties editor

 

Secondly, you can right-click anywhere on the 'canvas' area that holds your topics to create a new theme bubble at that location. This is useful if you have a little cluster of topics on a similar theme, and you want to create a new related bubble near the others. Of course, you can move the bubbles around later, but this makes things a bit easier.

 

If you are creating topics on the fly, you can also create a new category by dragging and dropping text directly onto the same add Quirk (+) button. This creates a new bubble that already contains the text you dragged onto the button. This time, the Properties dialogue doesn't immediately pop up, so that you can keep adding more sections of data to the theme. Don't forget to name it eventually though!

 

drag and drop qualitative topic creation

 

All software packages allow you to group your themes in some way, usually in a list or tree view where sub-categories are indented below their 'parent' node. For example, you might have the parent category 'Fish' and the sub-categories 'Pike', 'Salmon' and 'Trout'. Further, there might be sub-sub-categories: 'Trout' might have themes for 'Brown Trout', 'Brook Trout' and 'Rainbow Trout'. This is a useful way to group and sort your themes, especially as many qualitative projects end up with dozens or even hundreds of them.
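Laid out as a tree, that example hierarchy would typically be displayed something like this:

```
Fish
├── Pike
├── Salmon
└── Trout
    ├── Brown Trout
    ├── Brook Trout
    └── Rainbow Trout
```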

 

In Quirkos, categories work a little differently. To make a theme a sub-category, just drag and drop that bubble onto the bubble that will be its parent, like stacking them. You will see that the sub-category goes behind the parent bubble, and when you move your mouse over the top category, the others pop out, automatically arranging themselves like petals on a flower. You can remove a sub-category just by dragging it back out from the parent, like picking petals from a flower! You can also create sub-sub-categories (i.e. up to three levels of depth), but no more than this. When a Quirk has sub-categories clustered below it, this is indicated by a white ring inside the bubble. This way of working makes creating clusters (and changing your mind) very easy and visual.

 

Now, to add something to a topic, you just have to select some text, and drag and drop it onto the bubble or theme. This will work in most software packages, although in some you can also right-click within the selected text to see a list of codes to assign that section to.


Quirkos, like other software, will show coloured highlight stripes over the text or in the margin, showing which sections of the document have been added to which codes. In Quirkos, you can always see which topic a stripe represents by hovering the mouse cursor over the coloured section, and the topic name will appear in the bottom left of the screen. You can also right-click on the stripe and remove that section of text from the code at any time. Once you have done some coding, in most software packages you can double-click on a topic and see everything you've coded to it so far.

 

Hopefully this should give you confidence to let the software do what it does best: keep track of lots of different topics and what goes in them. How you actually choose which topics and methodology to use in your project is still up to you, but using software helps you keep everything together and gives you a lot of tools for exploring the data later. Don’t forget to read more about the specific features of Quirkos here and download the free trial from here.

 

Transcribing your own qualitative data

diy qualitative transcription

In a previous blog article I talked about some of the practicalities and costs involved in using a professional transcription service to turn your beautifully recorded qualitative interviews and focus groups into text data ready for analysis. However, hiring a transcriber is expensive, and often beyond the means of post-graduate researchers.

 

There are also real advantages to doing the transcription yourself: it makes for a better end result and gets you much closer to your data. In this article I'm going to go through some practical tips that should make doing transcription a little less painful.

 

But first, a little more on the benefits of transcribing your own data. You were there in the room with the respondent: you asked the questions, and were watching and listening to the participant. Do the transcription soon after the interview and you are likely to remember words that might be muffled in the recording, or points that the respondent emphasised by shaking their head – lots of little details to capture.

 

It's important to remember that transcription is an interpretive act (Bailey 2008): you can't just convert an interview into a perfect text version of that data. While this might be obvious when working between different languages where translation is required, I would argue that a transcriber always makes subjective decisions about misheard words, how to record pauses and inflections, or unconsciously changes words or their order.

 

As I've mentioned before, you lose a lot of the nuance of an interview when moving to text, and the transcriber has to make choices about how to mitigate this: Was this hesitation or just pausing for breath? How should I indicate that the participant banged on the table for emphasis? Capturing this non-verbal communication in a transcript can really change the interpretation of qualitative data, so I like it when this process is in the hands of the researcher. For a lot more on these and other issues there is a review of the qualitative transcription literature by Davidson (2009).


 

What do I actually type?

In a word, everything: the questions, the answers, the hesitations and mumbles, and things that were communicated, but not said verbally.

 

First, some guidelines for what the transcription should look like, bearing in mind that there is no one standard. You can use a word processor, or a spreadsheet like Excel. It can be a little more difficult to get formatting right in a spreadsheet: for example, you will need to use Shift+Return to make a new paragraph within a cell, and getting it to look right on a printed page is more of a challenge. Yet since interviews, and especially focus groups, will usually have more than one voice to assign text to, you need some way to structure the data.

 

In a spreadsheet you can use three columns: the first for an occasional time index (so you can see where in the audio a section of text occurs), the second for the name of the speaker, and the third, widest one for the text. While you can use a table to do the same thing in Word, spreadsheets will auto-complete the names, making things a bit faster. However, for a one-on-one interview, it's easy to just use Q: / A: formatting for each speaker, and put periodic time stamps in brackets at the top of each page.
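As an illustration of that three-column layout (the content here is invented):

```
[00:00:15] | Interviewer   | So, can you tell me a little about how you first got involved?
[00:00:22] | Respondent 1  | Well, it was probably about five years ago now, through a friend.
[00:02:40] | Respondent 2  | I remember that first meeting too, it was chaos!
```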

 

Second, record non-verbal data in a consistent way, usually in square brackets. For example [hesitates], [laughter], [bangs fist on table], or even when [coffee is delivered]. You may choose to use italics or bold type to show when someone puts emphasis on a word, but choose one or the other and be consistent.

 

Next, consider your system for indicating pauses. Usually a short pause is represented by three dots ('…'), while anything longer is recorded in square brackets and roughly timed: [5 second pause]. These pauses can show hesitation in the participant when answering a difficult question, and long pauses may have special meaning. There is a whole article on the importance of silences by Poland and Pederson (1998).

 

When you are transcribing, you also need to decide on the level of detail. Will you record every um, er, and stutter? In verbal speech these are surprisingly common. Most qualitative research does want this level of detail, but it is obviously more time consuming to type. You'll often have self-corrections in the speech as well, for example "I've… I'll never say that ag... any more". Do you include the false start? It's clear in the audio that the participant was going to say 'again' but corrected themselves to 'any more' – should you record this? Decide on the level of detail early on, and be consistent.

 

Sometimes people can go completely off topic, or there will be a section in the audio where you were complaining about the traffic, ordering coffee, or a phone call interrupted things. If you decide it’s not relevant to capture, just indicate with time markings what happened in square brackets: [cup smashed on the floor, 5min to clear up].
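Pulling these conventions together, a short (entirely invented) excerpt from a one-to-one interview might look something like this:

```
[00:14:30]
Q: And how did you feel when you first heard about the changes?
A: Honestly, I was... I don't really know. [5 second pause] Angry, I suppose.
   I was... I am still angry about it. [bangs fist on table] Sorry. [laughs]
Q: No, take your time.
[phone call interrupts, recording paused for 2 minutes]
```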

 

Once you are done with an interview, it's a good idea to listen back to the recording while reading through the transcript and correcting any mistakes. The first few times you will be surprised at how often you have swapped a few words around, or introduced strange typos.

 

 

So how long will it all take?

Starting out with all this can be daunting, especially if you have a large number of interviews to transcribe. A good rule of thumb is that transcribing an interview verbatim will take between 3 and 6 times as long as the audio. So for an hour of recording, it could take as little as three hours, or as much as six, to type up.

 

This sounds horrifying, and it is. I'm quite a fast typist, and have done a fair amount of transcription before, but I still average between 3x and 4x the audio time. If you are slow at typing, need to pause the audio a lot, or have to put in a lot of extra descriptive detail, it can take a lot longer. The tips below should help you get towards the 3x benchmark, but it's worth planning out your time a little before you begin.

 

If you have twenty interviews each lasting on average an hour, you should probably plan for at least 60 hours of transcription time. That is nearly nine working days, or roughly two weeks, at a standard 9-to-5 pace. I don't say this to frighten you, just to mentally acclimatise you to the task ahead!
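If it helps to plan your own workload, here is a rough back-of-the-envelope sketch of that calculation, assuming the 3x–6x rule of thumb above and about seven hours of actual typing per working day (both assumptions, not fixed rules):

```python
# Rough estimate of transcription workload using the 3x-6x rule of thumb.
num_interviews = 20          # e.g. twenty interviews
audio_hours_each = 1.0       # each about an hour long
low_multiplier, high_multiplier = 3, 6   # verbatim transcription takes 3-6x the audio length
typing_hours_per_day = 7     # realistic hours of focused typing per day (assumption)

total_audio = num_interviews * audio_hours_each
low_hours = total_audio * low_multiplier
high_hours = total_audio * high_multiplier

print(f"Transcription time: {low_hours:.0f}-{high_hours:.0f} hours")
print(f"Roughly {low_hours / typing_hours_per_day:.0f}-"
      f"{high_hours / typing_hours_per_day:.0f} working days")
# With these numbers: 60-120 hours, or roughly 9-17 working days.
```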

 

It's also worth noting that transcription is very intensive work. You will be frantically typing as fast as you can, and it takes real mental concentration to listen and type simultaneously, while also watching for errors and fixing typos. I don't think most people can manage more than two- or three-hour sessions at a time without going a little crazy! So you need to plan in some breaks, or at least some different, non-typing work.

 

If this sounds insurmountable, don’t panic. Just spread out the work, especially if you can do the transcripts after each interview, instead of in one huge batch. This is generally better since you can review one interview before you do the next one, giving you a chance to change how you ask questions and cover any gaps. Transcription can also be quite engrossing (since you can’t possibly do anything else at the same time), and it’s nice to see the hours ticking off.

 

 

 

So how can you make this faster?

You need to set up your computer (or laptop) to be a professional transcribing station, where you can hear the audio, start and stop it easily, and type comfortably for a long period of time.

 

Even if you type really fast, you won't be able to keep up with the speed that people speak, meaning you will have to frequently start and stop the audio to catch up. Most professionals will use a 'foot pedal' to do this, so that they don't have to stop typing, switch out of the word processor and pause an audio player. Even if you are playing audio from a dictaphone next to you, moving away from the keyboard, pressing the buttons on the dictaphone and coming back to type again quickly becomes tedious.

 

A foot-pedal lets you start and stop the audio by tapping with your foot (or toe) and often has additional buttons to rewind a little (very useful) or fast-forward through the audio. Now, these cost around £30/$40 or more, but can be a worthwhile investment. However, it’s also worth checking to see if you can borrow one from a colleague, or even if your department or library has one for hire.

 

But if you are a cheapskate like me, there are other ways to do this. Did you know that you can have two or more keyboards attached to a computer, and they will all work? An extra keyboard (with a USB connector) can cost as little as £10/$15 if you don't already have a spare lying around, and can be plugged into a laptop as well. Put it on the floor, and set up one of its keys as a 'global shortcut' in an audio player like VLC, so that tapping it with your toe starts and stops the audio even while you are typing in another programme. Even if you only use one keyboard, you can set a global shortcut in VLC (for example Alt+1), and every time you press that combination it will play or pause the audio, even when VLC is hidden.

 

There's another advantage to using VLC: it can slow down your recordings as they are played back! Once your audio is playing, click on the Playback menu, then Speed, and choose Slower, and listen as your participants magically start talking like sleepy drunks! This helps me more than anything, because I can slow the speech down to a level where I can type constantly without falling behind. The method does warp the speech, and slowing it down too far can make it difficult to understand, but the less you have to pause and stop the audio to catch up with your typing, the faster your transcription will go.

 

You can also do this with audio software like Audacity. Import your audio file, then click on Effect, then Change Tempo. Drag the slider to the left to slow down the speech (try somewhere between 20% and 50% slower); this changes the tempo without changing the pitch, so everyone doesn't end up sounding like Barry White. You can then export the file at your desired speed, and the quality can be a little better than the live speed change in VLC.

 

General tips for good typing help too. Watch the screen as you type, not your fingers, so that you can quickly pick up on mistakes. Learn to use all your fingers to type rather than 'hunt and peck'; if you don't already touch-type, a quick typing tutorial might save you hours in the long run.

 

Last of all, consider your posture. I'm serious! If you are going to be hunched over and typing for days and days, bad posture is going to make you ache and get stressed. Make sure your desk and chair are the right height for you, and try using a proper keyboard if working from a laptop (or at least prop the laptop up at a good angle). Make sure the lighting is good, there is no screen glare, and use a foot rest if this helps the position of your back. Scrunched up on a sofa with a laptop in your lap for 60 hours is a great way to get cramp, back-ache and RSI. Try to take a break at least every half hour: get up and stretch, especially your hands and arms.

 

So, you have your beautiful and detailed transcripts? Now you can bring them into Quirkos to analyse them! Quirkos is ideal for students doing their first qualitative analysis project, as it makes coding and analysis of text visual, colourful and easy to learn. There’s a free trial on our website, and you can bring in data from lots of different sources to work with.

 

Sampling considerations in qualitative research

sampling crowd image by https://www.flickr.com/photos/jamescridland/613445810/in/photostream/

 

Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research, but it is a huge and complicated issue. There’s a great chapter ‘Designing and Selecting Samples’ in the book Qualitative Research Practice (Ritchie et al 2013) which goes over many of these methods in detail.

 

Your research questions and methodological approach (e.g. grounded theory) will guide you to the right sampling methods for your study – there is never a one-size-fits-all approach in qualitative research! For more detail on this, especially on the importance of culturally embedded sampling, there is a well-cited article by Luborsky and Rubinstein (1995). But it's also worth talking to colleagues, supervisors and peers to get advice and feedback on your proposals.

 

Marshall (1996) briefly describes three different approaches to qualitative sampling: judgement/purposeful sampling, theoretical sampling and convenience sampling.

 

But before you choose any approach, you need to decide what you are trying to achieve with your sampling. Do you have a specific group of people that you need to have in your study, or should it be representative of the general population? Are you trying to discover something about a niche, or something that is generalisable to everyone? A lot of qualitative research is about a specific group of people, and Marshall notes:
“This is a more intellectual strategy than the simple demographic stratification of epidemiological studies, though age, gender and social class might be important variables. If the subjects are known to the researcher, they may be stratified according to known public attitudes or beliefs.”

 

Broadly speaking, convenience, judgement and theoretical sampling can all be seen as purposeful – deliberately selecting people of interest in some way. However, randomly selecting people from a large population is still a desirable approach in some qualitative research. Because qualitative studies tend to have small sample sizes, due to the in-depth nature of engagement with each participant, this can have an impact if you want a representative sample: if you randomly select 15 people, you might by chance end up with more women than men, or a younger sample than intended. That is why qualitative studies may add a little purposeful sampling, finding people to make sure the final profile matches the desired sampling frame. For much more on this, check out the previous blog post on recruitment.

 

Sample size will often also depend on conceptual approach: if you are testing a prior hypothesis, you may be able to get away with a smaller sample size, while a grounded theory approach to develop new insights might need a larger group of respondents to test that the findings are applicable. Here, you are likely to take a ‘theoretical sampling’ approach (Glaser and Strauss 1967) where you specifically choose people who have experiences that would contribute to a theoretical construct. This is often iterative, in that after reviewing the data (for theoretical insights) the researcher goes out again to find other participants the model suggests might be of interest.

 

The convenience sampling approach, which Marshall mentions as being the 'least rigorous technique', is where researchers target the most 'easily accessible' respondents. This could even be friends, family or faculty. This approach can rarely be methodologically justified, and is unlikely to provide a representative sample. However, it is endemic in many fields, especially psychology, where researchers tend to turn to easily accessible psychology students for experiments: skewing the results towards white, rich, well-educated Western students.

 

Now we turn to snowball sampling (Goodman 1961). This differs from purposeful sampling in that new respondents are suggested by existing ones. In general, it is most suited to work with marginalised or 'hard-to-reach' populations, where respondents are not often forthcoming (Sadler et al 2010). For example, people may not be open about their drug use, political views or living with stigmatising conditions, yet such groups often form closely connected networks. Thus, by gaining the trust of one person in the group, the researcher can be recommended to others. However, it is important to note the limitations of this approach. There is a risk of systematic bias: if the first person you recruit is not representative in some way, their referrals may not be either. You may be looking at people living with HIV/AIDS, and recruit through a support group that is formed entirely of men: they are unlikely to suggest women for the study.

 

For these reasons there are limits to the generalisability and appropriateness of snowball sampling for most subjects of inquiry, and it should not be taken as an easy fix. Yet while many practitioners point out the limitations of snowball sampling, it can be very well suited to certain kinds of social and action research: Noy (2008) outlines some of the potential benefits for power relations and for studying social networks.

 

Finally, there is the issue of sample size and 'saturation'. Saturation is the point at which enough data has been collected to confidently answer the research questions. For a lot of qualitative research this means data that has been coded as well as collected, especially if using some variant of grounded theory. However, saturation is often a source of anxiety for researchers: see for example the amusingly titled article “Are We There Yet?” by Fusch and Ness (2015). Unlike quantitative studies, where a sample size can be determined from the desired effect size and confidence interval of a chosen statistical test, it is much harder to put an exact number on the right number of participant responses. This is especially because responses are themselves qualitative, not just numbers in a list: one response may be far more data-rich than another.

 

While a general rule of thumb would suggest there is no harm in collecting more data than is strictly necessary, there is always a practical limit, especially in resource- and time-constrained post-graduate studies. It can also be more difficult to recruit than anticipated, and many projects working with very specific or hard-to-reach groups struggle to find a large enough sample. This is not always a disaster, but it may require a re-examination of the research questions, to see what insights and conclusions are still obtainable.

 

Generally, researchers should have a target sample size and definition of what data saturation will look like for their project before they begin sampling and recruitment. Don’t forget that qualitative case studies may only include one respondent or data point, and in some situations that can be appropriate. However, getting the sampling approach and sample size right is something that comes with experience, advice and practice.

 

As I always seem to be saying in this blog, it's also worth considering the intended audience for your research outputs. If you want to publish in a certain journal or academic discipline, it may not be receptive to research based on qualitative methods with small or 'non-representative' samples. Silverman (2013, p. 424) mentions this explicitly, with examples of students who had publications rejected for these reasons.

 

So as ever, plan ahead for what you want to achieve with your research project and the questions you want to answer, and work backwards to choose the appropriate methodology, methods and sample for your work. Also check the companion article about recruitment; most of these issues need to be considered in tandem.

 

Once you have your data, Quirkos can be a great way to analyse it, whether your sample has one respondent or dozens! There is a free trial and example datasets so you can see for yourself if it suits your way of working, and much more information in these pages. We also have a newly relaunched forum, with specific sections on qualitative methodology, if you want to ask questions or comment on anything raised in this blog series.

 

 

Qualitative evidence for SANDS Lothians

qualitative charity research - image by cchana

Charities and third sector organisations are often sitting on lots of very useful qualitative evidence, and I have already written a short blog post on some common sources of data that can support funding applications, evaluations and impact assessments. We wanted to do a 'qualitative case study': to work with one local charity to explore what qualitative evidence they already had and what they could collect, and to use Quirkos to help create some reports and impact assessments.

 

SANDS Lothians is an Edinburgh-based charity that provides long-term counselling and support for families who have experienced bereavement through the loss of a baby around the time of birth. They approached us after seeing advertisements for one of our local qualitative training workshops.


Director Nicola Welsh takes up the story. “During my first six months in post, I could see there was much evidence to highlight the value of our work but was struggling to pull this together in some order which was presentable to others. Through working with Daniel and Kristin we were able to start to structure what we were looking to highlight and with their help begin to organise our information so it was available to share with others. Quirkos allowed us to pull information from service users, stats and studies to present this in a professional document. They gave us the confidence to ask our users about their experiences and encouraged us to record all the services we offered to allow others at a glance to get a feel for what we provide.”

 

First of all, we discussed what would be most useful to the organisation. Since they were in discussion with major partners about possible funding, an impact assessment would be valuable in this process.

 

They also identified concerns from their users about a specific issue, prescriptions for anti-depressants, and wanted to investigate this further. It was important to identify the audience that SANDS Lothians wanted to reach with this information, in this case, GPs and other health professionals. This set the format of a possible output: a short briefing paper on different types of support that parents experiencing bereavement could be referred to.

 

We started by doing an 'evidence assessment' (or 'evidence audit', as a previous blog post calls it), looking for evidence of impact that SANDS Lothians already had. Some of this was quantitative, such as the number of phone calls received each month. As they had recently started counting these calls, this was valuable evidence of people using their support and guidance services. In the future they will be able to see trends in the data, such as an increase in demand or seasonal variation, which will help them plan better.

 

They already had national reports from NHS Scotland on Infant Mortality, and some data from the local health board. But we quickly identified a need for supportive scientific literature that would help them make a better case for extending their counselling services. One partner had expressed concerns that counselling was ineffective, but we found a number of studies that showed counselling to be beneficial for this kind of bereavement. Finding these journal articles for them helped provide legitimacy to the approach detailed in the impact assessment.

 

In fact, a simple step was to create a list of all the different services that SANDS Lothians provides. This had not been done before, but quickly showed how many different kinds of support were offered, and the diversity of their work. This is also powerful information for potential funders or partners, and useful to be able to present quickly.

 

Finally, we did a mini qualitative research project!

 

A post on their Facebook page asking people to share experiences of being prescribed antidepressants after bereavement got more than 20 responses. While most of these were very short, they did give us valuable and interesting information: for example, not everyone who had been offered anti-depressants by their GP saw this as negative, and some talked about how these had helped them at a difficult time.

 

SANDS Lothians already had amazing and detailed written testimonials and stories from service users, so I was able to combine the responses from testimonials and comments from the Facebook feed into one Quirkos project, and draw across them all as needed.

 

Using Quirkos to pull out the different responses to anti-depressants showed that there were similar numbers of positive and negative responses, and also highlighted parents' worries we had not considered, such as the effect of medication when trying to conceive again. This is the power of a qualitative approach: by asking open questions, we got responses about issues we wouldn't have thought to ask about in a direct survey.

 

quirkos bubble cluster view

 

When writing up the report, Quirkos made it quick and easy to pull out supportive quotes. As I had previously gone through and coded the text, I could click on the counselling bubble, immediately see relevant comments, and copy and paste them into the report. Now SANDS Lothians also has an organised database of comments on how their counselling services helped clients, which they can draw on at any time.

 

Nicola explains how they have used the research outputs. “The impact assessment and white paper has been extremely valuable to our work. This has been shared with senior NHS Lothian staff regarding possible future partnership working.  I have also shared this information with the Scottish Government following the Bonomy recommendations. The recommendations highlight the need for clear pathways with outside charities who are able to assist bereaved parents. I was able to forward our papers to show our current support and illustrate the position Lothians are in regarding the opportunity to have excellent bereavement care following the loss of a baby. It strengthened the work we do and the testimonials give real evidence of the need for this care. 

 

I have also given our papers out at recent talks with community midwives and charge midwives in West Lothian and Royal Infirmary Edinburgh. Cecilia has attached the papers to grant applications which again strengthens our applications and validates our work.”

 

Most importantly, SANDS Lothians now have a framework to keep collecting data, “We will continue to record all data and update our papers for 2016.  Following our work with Quirkos, we will start to collate case studies which gives real evidence for our work and the experiences of parents.  Our next step would be to look specifically at our counselling service and its value.” 

 

“The work with Quirkos was extremely helpful. In very small charities, it is difficult to always have the skills to be an expert in all areas and find the time to train. We are extremely grateful to Daniel and Kristin who generously volunteered their time to assist us to produce this work. I would highly recommend them to any business or third sector organisation who need assistance in producing qualitative research.  We have gained confidence as a charity from our journey with Quirkos and would most definitely consider working with them again in the future.”

 

It was an incredible and emotional experience to work with Nicola and Cecilia at SANDS Lothians on this small project, and I am so grateful to them for inviting us in to help, and for sharing so much. If you want any more information about the services they offer, or need to speak to someone about losing a baby through stillbirth, miscarriage or soon after birth, all their contact details are available on their website: http://www.sands-lothians.org.uk .

 

If you want any more information about Quirkos and a qualitative approach, feel free to contact us directly, or there is much more information on our website. Download a free trial, or read more about adopting a qualitative approach.