Reaching saturation point in qualitative research


 

A common question from newcomers to qualitative research is: what’s the right sample size? How many people do I need in my project to get a good answer to my research questions? For research based on quantitative data there is usually a definitive answer: you can decide ahead of time what sample size is needed to gain a significant result for a particular test or method.

 

This post is hosted by Quirkos, simple and affordable qualitative analysis software. Download a one-month free trial today!

 

In qualitative research there is no neat measure of significance, so settling on a good sample size is more difficult. The literature often talks about reaching ‘saturation point’ - a term borrowed from the physical sciences to describe the moment during analysis when the same themes keep recurring and no new insights are given by additional sources of data. A sponge is saturated when it can absorb no more water, but in research it’s not always the case that too much is a bad thing. Saturation in qualitative research is a difficult concept to define (Bowen, 2008), but it has come to be associated with the point in a qualitative research project when there is enough data to ensure the research questions can be answered.

 

However, as with all aspects of qualitative research, the depth of the data is often more important than the numbers (Burmeister & Aitken, 2012). A small number of rich interviews or sources, especially as part of an ethnography, can carry the weight of dozens of shorter interviews. As Fusch and Ness (2015) put it:

 

“The easiest way to differentiate between rich and thick data is to think of rich as quality and thick as quantity. Thick data is a lot of data; rich data is many-layered, intricate, detailed, nuanced, and more. One can have a lot of thick data that is not rich; conversely, one can have rich data but not a lot of it. The trick, if you will, is to have both.”

 

So the quantity of the data is only one part of the story. The researcher needs to engage with it at an early stage to ensure “all data [has] equal consideration in the analytic coding procedures. Frequency of occurrence of any specific incident should be ignored. Saturation involves eliciting all forms of types of occurrences, valuing variation over quantity” (Morse, 1995). When the amount of variation in the data is levelling off, and new perspectives and explanations are no longer emerging, you may be approaching saturation. The other consideration is coverage of the research question: Brod et al. (2009), for example, recommend constructing a ‘saturation grid’, listing the major topics or research questions against interviews or other sources, to ensure all bases have been covered.
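A saturation grid of this kind can even be mocked up in a few lines. In this hypothetical sketch (the topic names, source names and the two-source threshold are all invented for illustration, not taken from Brod et al.), an under-covered topic flags where more data collection is needed:

```python
# A hypothetical 'saturation grid': rows are topics, columns are sources.
# True means at least one coded excerpt for that topic in that source.
grid = {
    "barriers to access": {"int1": True,  "int2": True, "int3": True},
    "coping strategies":  {"int1": True,  "int2": True, "int3": True},
    "family support":     {"int1": False, "int2": True, "int3": False},
}

# Treat a topic as 'covered' once it appears in at least two sources;
# the threshold of two is an arbitrary choice for this sketch.
uncovered = [topic for topic, hits in grid.items() if sum(hits.values()) < 2]
print(uncovered)  # → ['family support']
```

The point is not the code but the discipline: keeping an explicit topics-by-sources record makes the claim “all bases are covered” checkable rather than impressionistic.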

 

But despite this, is it still possible to put rough numbers on how many sources a qualitative research project requires? Many papers have attempted to do this and, as you might expect, the results vary greatly. Mason (2010) looked at the number of respondents in PhD theses based on qualitative research, and found an average of 30 sources, but with a low of 1, a high of 95 and a standard deviation of 18.5! It is interesting to look at their data tables, as they show succinctly the differences in sample size expected for different methodological approaches, such as case study, ethnography, narrative enquiry, or semi-structured interviews.

 


While 30 in-depth interviews may seem high (especially for what is practical in a PhD study), others work with far fewer: a retrospective examination of a qualitative project by Guest et al. (2006) found that even though they conducted 60 interviews, they had reached saturation after 12, with most of the themes emerging after just 6. On the other hand, students whose supervisors have more of a mixed-methods or quantitative background will often struggle to justify the low number of participants suggested for methods of qualitative enquiry.

 


The important thing to note is that it is nearly impossible for researchers to know when they have reached saturation point unless they are analysing the data as it is collected. This exposes one of the key ties between the saturation concept and grounded theory: it requires an iterative approach to data collection and analysis. Instead of setting a fixed number of interviews or focus groups to conduct at the start of the project, the investigator should go through continuous cycles of collection and analysis until nothing new is being revealed.

 


This can be a difficult notion to work with, especially when ethics committees or institutional review boards, limited time, or limited funds place a practical upper limit on the quantity of data collection. Indeed, Morse et al. (2014) found that in most of the dissertations they examined, the sample size was chosen for practical reasons, not because a claim of saturation was made.

 


You should also be aware that many take issue with the routine invocation of saturation. O’Reilly and Parker (2013) note that since the concept comes out of grounded theory, it is not always appropriate to apply it to other research projects, and that the term has become overused in the literature. Nor is it, by itself, a good indicator of the quality of qualitative research.

 


For more on these issues, I would recommend any of the articles referenced above, as well as discussion with supervisors, peers and colleagues. There is also more on sampling considerations in qualitative research in our previous blog post.

 

 

Finally, don’t forget that Quirkos can help you take an iterative approach to analysis and data collection, allowing you to quickly analyse your qualitative data as you go through your project and helping you visualise your path to saturation (if you choose this approach!). Download a free trial for yourself, and take a closer look at the rest of the features the software offers.

 

Sampling considerations in qualitative research

Sampling crowd image: https://www.flickr.com/photos/jamescridland/613445810/in/photostream/

 

Two weeks ago I talked about the importance of developing a recruitment strategy when designing a research project. This week we will do a brief overview of sampling for qualitative research, but it is a huge and complicated issue. There’s a great chapter ‘Designing and Selecting Samples’ in the book Qualitative Research Practice (Ritchie et al 2013) which goes over many of these methods in detail.

 

Your research questions and methodological approach (e.g. grounded theory) will guide you to the right sampling methods for your study – there is never a one-size-fits-all approach in qualitative research! For more detail on this, especially on the importance of culturally embedded sampling, there is a well-cited article by Luborsky and Rubinstein (1995). But it’s also worth talking to colleagues, supervisors and peers to get advice and feedback on your proposals.

 

Marshall (1996) briefly describes three different approaches to qualitative sampling: judgement/purposeful sampling, theoretical sampling and convenience sampling.

 

But before you choose any approach, you need to decide what you are trying to achieve with your sampling. Do you have a specific group of people that you need to have in your study, or should it be representative of the general population? Are you trying to discover something about a niche, or something that is generalizable to everyone? A lot of qualitative research is about a specific group of people, and Marshall notes:
“This is a more intellectual strategy than the simple demographic stratification of epidemiological studies, though age, gender and social class might be important variables. If the subjects are known to the researcher, they may be stratified according to known public attitudes or beliefs.”

 

Broadly speaking, convenience, judgement and theoretical sampling can all be seen as purposeful – deliberately selecting people who are of interest in some way. However, randomly selecting people from a large population is still a desirable approach in some qualitative research. Because qualitative studies tend to have small sample sizes, due to the in-depth nature of engagement with each participant, this can be a problem if you want a representative sample. If you randomly select 15 people, you might by chance end up with more women than men, or a younger-than-desired sample. That is why qualitative studies may mix in a little purposeful sampling, finding people to make sure the final profile matches the desired sampling frame. For much more on this, check out the last blog post on recruitment.
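The scale of that chance-imbalance risk is easy to put a number on with a little arithmetic. As a rough illustration (the 50/50 gender split in the population is an assumption made purely for this sketch):

```python
from math import comb

n = 15  # randomly sampled people from a population assumed to be 50% women
# Probability of drawing at least 10 of one gender (a 10-5 split or worse):
# sum the binomial tail for one gender, then double it for either direction.
skew = 2 * sum(comb(n, k) for k in range(10, n + 1)) / 2**n
print(f"{skew:.0%}")  # → 30%
```

So even with a perfectly balanced population, roughly three in ten random samples of 15 will have a 10-5 split or worse, which is why a purposeful top-up against the sampling frame is so often needed.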

 

Sample size will often also depend on conceptual approach: if you are testing a prior hypothesis, you may be able to get away with a smaller sample, while a grounded-theory approach aiming to develop new insights might need a larger group of respondents to test that the findings are applicable. Here you are likely to take a ‘theoretical sampling’ approach (Glaser and Strauss, 1967), specifically choosing people whose experiences would contribute to a theoretical construct. This is often iterative: after reviewing the data for theoretical insights, the researcher goes out again to find other participants the model suggests might be of interest.

 

The convenience sampling approach, which Marshall calls the ‘least rigorous technique’, is where researchers target the most ‘easily accessible’ respondents. This could even mean friends, family or faculty. The approach can rarely be methodologically justified and is unlikely to provide a representative sample. However, it is endemic in many fields, especially psychology, where researchers tend to turn to easily accessible psychology students for experiments: skewing the results towards white, rich, well-educated Western students.

 

Now we turn to snowball sampling (Goodman, 1961). This differs from purposeful sampling in that new respondents are suggested by existing ones. It is generally best suited to work with ‘marginalised or hard-to-reach’ populations, where respondents are not often forthcoming (Sadler et al., 2010). For example, people may not be open about their drug use, political views or stigmatising conditions, yet often form closely connected networks. By gaining the trust of one person in the group, others can be recommended to the researcher. However, it is important to note the limitations of this approach: there is a risk of systematic bias, because if the first person you recruit is unrepresentative in some way, their referrals may be too. You may be studying people living with HIV/AIDS and recruit through a support group formed entirely of men: they are unlikely to suggest women for the study.

 

For these reasons there are limits to the generalisability and appropriateness of snowball sampling for most subjects of inquiry, and it should not be taken as an easy fix. Yet while many practitioners dwell on its limitations, it can be very well suited to certain kinds of social and action research: this article by Noy (2008) outlines some of its potential benefits for power relations and for studying social networks.

 

Finally, there is the issue of sample size and ‘saturation’: the point at which enough data has been collected to confidently answer the research questions. For a lot of qualitative research this means data that has been coded as well as collected, especially if you are using some variant of grounded theory. Saturation is often a source of anxiety for researchers: see for example the amusingly titled article “Are We There Yet?” by Fusch and Ness (2015). Unlike quantitative studies, where a sample size can be determined from the desired effect size and confidence level of a chosen statistical test, it is more difficult to put an exact figure on the right number of participants. This is especially because responses are themselves qualitative, not just numbers in a list: one response may be far more data-rich than another.
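For contrast, here is a minimal sketch of the kind of a-priori calculation a quantitative study can make, using the standard normal-approximation formula for comparing two group means (the ‘medium’ effect size of 0.5 is just an illustrative choice; real studies would justify it from prior work):

```python
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate sample size per group for a two-sample comparison of
    means, via the normal approximation: n = 2 * ((z_a + z_b) / d)^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # ~1.96 for a two-sided 5% test
    z_b = z.inv_cdf(power)          # ~0.84 for 80% power
    return 2 * ((z_a + z_b) / effect_size) ** 2

print(round(n_per_group(0.5)))  # → 63 per group for a 'medium' effect
```

No comparable formula exists for qualitative work, because the ‘information content’ of each interview is not a fixed quantity: that is precisely why saturation has to be judged during analysis rather than computed up front.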

 

While a general rule of thumb suggests there is no harm in collecting more data than is strictly necessary, there is always a practical limit, especially in resource- and time-constrained postgraduate studies. It can also be more difficult to recruit than anticipated, and projects working with very specific or hard-to-reach groups can struggle to find a large enough sample. This is not always a disaster, but it may require a re-examination of the research questions to see what insights and conclusions are still obtainable.

 

Generally, researchers should have a target sample size and a definition of what data saturation will look like for their project before they begin sampling and recruitment. Don’t forget that qualitative case studies may include only one respondent or data point, and in some situations that can be appropriate. However, getting the sampling approach and sample size right is something that comes with experience, advice and practice.

 

As I always seem to be saying in this blog, it’s also worth considering the intended audience for your research outputs. If you want to publish in a certain journal or academic discipline, it may not be receptive to research based on qualitative methods with small or ‘non-representative’ samples. Silverman (2013, p. 424) mentions this explicitly, with examples of students who had publications rejected for these reasons.

 

So, as ever, plan ahead for what you want to achieve with your research project and the questions you want to answer, then work backwards to choose the appropriate methodology, methods and sample for your work. Also check the companion article about recruitment, as most of these issues need to be considered in tandem.

 

Once you have your data, Quirkos can be a great way to analyse it, whether your sample has one respondent or dozens! There is a free trial and example data sets to see for yourself if it suits your way of working, and much more information in these pages. We also have a newly relaunched forum, with specific sections on qualitative methodology, if you want to ask questions or comment on anything raised in this blog series.

 

 

Recording good audio for qualitative interviews and focus groups

 

Last week’s blog post looked at the transcription process, and what’s involved in getting qualitative interview or focus-group data transcribed. This week, we are going to step back, and share a few tips from researchers into what makes for good quality audio that will be easy to hear and transcribe.

 

1. Phones aren’t good enough
While many smartphones can record audio in a ‘voice memo’ mode, you will quickly find the quality is poor. Consider how tiny the microphone in a phone is (the size of a pin head), and that it is designed only to pick up your voice when right next to your face. A proper Dictaphone or voice recorder is pretty much essential to pick up the voices of the interviewer and respondent(s) clearly.

 

2. Choosing a Dictaphone
If you do need to buy one, even a cheap £20 ($30) voice recorder will be a vast improvement over a phone. Most researchers won’t need one with a lot of memory: just 2GB of storage will usually record for more than 30 hours at the highest quality setting. There is usually little benefit in spending a lot more money, unless your ethics review board requires one that securely encrypts your data as it records - these can cost closer to £250 ($400). However, you can often borrow one from your library or department.


A recorder should always be digital. There is no real advantage to a tape one: they are expensive, have less capacity, drain batteries faster, are larger, and are much more prone to losing your data through erased, overwritten or mangled tape. This is one area where the newer technology wins hands down! The format they record in doesn’t really matter, as long as your computer and transcriber can play it back. MP3 is the most compatible; note that some of the older Olympus recorders use their own DSS format, which is a pain to convert or play back on a computer. Digital recorders will have various settings for recording quality: you will usually want the high or highest setting for clear audio. Test before you do a full interview!
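The storage figure above is easy to sanity-check. As a rough sketch, assuming a typical ‘high quality’ MP3 bitrate of 128 kbit/s (both the bitrate and the decimal gigabyte are assumptions; actual rates vary by recorder and setting):

```python
# Rough recording-time estimate: capacity divided by bitrate.
capacity_bits = 2 * 10**9 * 8   # 2 GB expressed in bits
bitrate = 128_000               # assumed 128 kbit/s MP3 stream
seconds = capacity_bits / bitrate
print(round(seconds / 3600, 1))  # → 34.7 hours
```

So the ‘more than 30 hours’ claim holds comfortably at a high-quality compressed setting; uncompressed PCM modes would fill the same card far faster.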

 

3. Carry spare batteries!
I’ve definitely been caught out here before: make sure you have a fresh (or recharged) pair of batteries in the Dictaphone, and a spare set in your bag! Before you start, check you have enough recording time left on the device, and every few minutes during the interview have a quick look to make sure the recording is still running.

 

4. Choose a quiet location if possible
While cafés can be convenient, relaxed and neutral places to meet respondents one-on-one, they tend to be noisy. You will pick up background music, other conversations, clattering plates and especially noisy coffee machines, all of which make the audio difficult to transcribe. A quiet office works much better, but if you do need to meet at a café, do a bit of reconnaissance first: choose one that is quiet, avoid lunchtime and other busy periods, sit away from the kitchen and coffee grinders, and ask the staff to turn off any music.

 

5. Position the Dictaphone
Usually you will want the Dictaphone pointing towards the respondent, since they will be doing most of the talking. But don’t put it directly on the table, especially if you are having tea or coffee: you will pick up a loud THUD every time someone puts down a mug or taps the table. Just placing the recorder on a napkin or coaster will help isolate the sound.

 

6. Prevent stage fright!
Some people get nervous as soon as the recording starts, and the conversation dries up. To prevent this, you can cover the scary red recording light with a bit of tape or Blu-Tack. It can also help to start the recorder half-way through the casual introductions, so there isn’t a sudden ‘We’re live!’ moment. You don’t need to transcribe all the initial banter, but it helps the conversation shift seamlessly into the research questions. Also, try to ignore the Dictaphone as much as possible, so that you both forget about it and have a natural discussion.

 

7. Watch your confirmation noises!
Speaking of natural conversation, it is rare for an interviewer not to make ‘confirmation sounds’ like ‘Yes’, ‘Uh-huh’, ‘Mmm’ and so on. Yet these are a pain for qualitative transcription (most people will want to keep the researcher’s comments, especially for discourse analysis) and they break up the flow of the transcript. Obviously, just staring silently at your participant while they talk can be disconcerting to say the least! It takes a little practice, but you can encourage the flow of the conversation with periodic eye contact, nodding and positive body language alone. If someone makes a request for confirmation, such as ‘So of course that’s what I did, right?’, rather than verbally responding you can nod, turn your palms up and shrug, or roll your eyes. This shows you are listening and engaging with the conversation without constantly interrupting the flow of the narrative.

 

8. Use a boundary mic for group discussions
For focus groups or table discussions, use a ‘boundary’ microphone so that all the voices are picked up: ideally a stereo one, which gives some sense of direction to help identify who said what during transcription. Again, these don’t need to be expensive: I’ve used a cheap £20 ($30) button-battery-powered one with great results. High-end equipment can cost a lot, so again look for opportunities to borrow.

 

9. Get the group to introduce themselves
For qualitative group sessions, you will almost always want to be able to assign contributions to individual participants. If you are doing the transcription and know the people very well, this can be easy. However, it is surprisingly difficult to differentiate a group of voices you don’t know from a recording alone. To help identify the voices, start the recording by getting everyone to go round the table and introduce themselves with a few sentences of context (not just their name).

 

10. Back up immediately!
Got your recording? Great! Now back it up! As long as it exists only on your Dictaphone, it can be lost, stolen or dropped in a puddle, losing your data forever. As soon as you can, get it onto a computer or laptop and copy it to another location. Make sure your data storage procedure matches your data protection and ethics requirements, and try not to carry your interview recordings around for longer than you need to.

 

11. Finally, listen and engage!
Try not to worry about the technical aspects during the interview: shift into researcher and facilitator mode. Take notes if you feel comfortable doing so: even though you are recording, some brief notes can make a good summary and help concentration. Tick off your research and interview questions as you go, and write a few notes about how the interview went and the key points immediately afterwards.

 

If you need more advice, you can also read our top 10 tips for qualitative interviews, to make sure things go smoothly on the day. Hopefully following these steps will help you get great audio recordings for your research project, making transcription and listening to your data easier.

 

 

Once you’ve got it transcribed, you’ll find that Quirkos is the most intuitive and visual software for qualitative analysis of text data. You can download a free trial for a month, and see an overview of the features here…