Constant comparative method in qualitative analysis
It has roots in classical Grounded Theory, but the constant comparison method isn't restricted to Grounded Theory, and is a frequently applied approach to analysing and exploring qualitative data.
It's essentially a common-sense approach for examining qualitative data - to understand your data, or part of it, you need to compare it with something else! This might be an interview with another participant (comparing between interviews), between groups of respondents, or even between parts of data assigned to codes or themes. The idea is that comparison can show differences (and similarities) across the data, and the comparisons help you understand the story of why these differences arise.
I'll explain this in the video below, but keep reading for the theoretical background and more detail.
For Tesch (1990), comparison is the most significant way that researchers create and refine categories and analytic themes:
"Comparing and contrasting is used for practically all intellectual tasks during analysis: forming categories, establishing the boundaries of the categories, assigning the segments to categories, summarizing the content of each category, finding negative evidence, etc. The goal is to discern conceptual similarities, to refine the discriminative power of categories, and to discover patterns." (Tesch 1990)
However, it's also used as a methodology: an approach to analysing data that benefits from ongoing sampling and recruitment of new participants to provide points of comparison and explore certain themes in greater depth. But let's go back to the beginning and see how the terminology came about.
The constant comparative method is actually a critical part of Glaser and Strauss' (1967) treatise on Grounded Theory, but it predates that work in an article attributed to Glaser alone (Glaser 1965). That article proposed a way to bridge the differences between a basic comprehensive thematic coding approach and theory generation during analysis. Glaser suggests that going through and methodically creating codes for everything hinders the generation of new hypotheses, yet without coding the analyst "merely inspects his [sic] data for new properties of his theoretical categories and writes memos on these properties". He proposes a hybrid model in which the analyst essentially re-examines a code each time something is added to it, looking for commonalities and differences. That way, theory is constantly being created, or at least refined, in a more systematic and thorough way.
"Systematizing the second approach [pure grounded theory with no coding] by this method does not supplant the skills and sensitivities required in inspection. Rather the constant comparative method is designed to aid analysts with these abilities in generating a theory which is integrated, consistent, plausible, close to the data" (Glaser 1965)
Now, while the terms 'systematizing', 'consistent' and even the suggestion of coding will be abhorrent to certain practitioners who find them too positivistic, for me the key phrase is the last one: 'close to the data'. For researchers who apply a very pure grounded theory approach, with no coding, no notes, sometimes not even transcripts, there is a distinct possibility that the hypotheses generated connect only with a very abstracted and unevenly absorbed reading of the data. Constant comparison encourages the researcher to stay deeply entwined with the data, and the words of the participants, without relying on their own remembered interpretations. Yet this 'systematizing' approach is not intended to produce a consistent and systematic interpretation:
"the constant comparative method is not designed (as methods of quantitative analysis are) to guarantee that two analysts working independently with the same data will achieve the same result" (Glaser 1965)
Glaser (1965) notes that the focus of constant comparison should be the generation of many, possibly initially conflicting hypotheses on a general issue. If the intention is to create one precise theory, and test it through the data, Glaser recommends analytic induction - a separate approach with a separate aim (which we will leave for a separate article).
Glaser (1965) suggests four stages for constant comparison (which really cover the whole of the analysis process):
1) Comparing incidents applicable to each category
2) Integrating categories and their properties
3) Delimiting the theory
4) Writing the theory
The process above should sound fairly familiar as a qualitative iterative approach, with each stage building on the last. But an important part of the process is that word: constant. We often talk about qualitative coding being a cyclical, iterative process - it's rare that you can just read through and code the data once. Approaches like Grounded Theory and Thematic Analysis suggest phases that build on each other (open, then axial coding, for example), and when applying constant comparison, it's important that comparison is a constant and frequently applied part of the process, not just a phase to be done at the end.
It's tempting to limit comparison to natural 'break points' in the analysis, like at the end of coding a source, or a group of interviews, but really the comparison needs to be an integrated part of the process. Every few sentences there might be a statement that is challenging to the emerging theory or definition of a code, and that should invite a process of reflection and comparison with other parts of the data. Quirkos is designed to make that comparison quick, and keep you close to the data as you review it.
However, you should be able to see that when applied well, constant comparison can add a lot of extra time to an already slow process. Don't get disheartened though: this careful reading and cross-examination is what makes qualitative research powerful, challenging to the status quo and the researcher's own assumptions, and capable of creating change. But also note that
"Comparison can often be based on memory. Usually there is no need to refer to the actual note on every previous incident for each comparison." Glaser and Strauss (1967)
So it's clearly implied that the researcher should become close and familiar with the data through the process, and this familiarity is important for the skill with which they will create codes and later themes.
Because constant comparison is so often used to inform further and ongoing recruitment, it is really a methodology and not just an analytical technique, since it informs the whole research design and sampling process. Constant comparison should suggest new people with new experiences that need to be recruited to explore uncertainties, contradictions and refine codes and hypotheses. Therefore, analysis should begin early in the data collection process, and be continual, without pre-defined ideas about sample size.
This links back to the issue of saturation in qualitative research: when adding new participants doesn't seem to uncover new findings or theory. Saturation itself has become a contested issue, with some claiming that it is too positivistic and problematic (see Low 2019, for example). However, I feel that the concept Glaser introduces as theoretical saturation is not necessarily the same as sample saturation, and that alternatives like 'information power' (Malterud et al. 2016) may be just different ways of describing the same issues.
Also, note that the constant comparative technique needn't be limited to just Classical Grounded Theory (CGT). Elements and concepts can be applied in thematic analysis, discourse analysis, and even approaches like IPA in the later stages. Fram (2013) discusses this with examples from different approaches, but I'm not sure about some of her interpretations and critique of Classical Grounded Theory. As ever in these blog posts, my interpretation (and others') is only a part of the understanding. I would highly recommend anyone to read the original paper on constant comparative analysis (Glaser 1965) - it's an easy read, with clear examples, and an almost prescient anticipation of the theoretical and practical issues the modern literature still frets about today!
Also, make sure you don't confuse this with Qualitative Comparative Analysis (QCA; Ragin 1998), which mostly focuses on classifying whole cases, not within-case qualitative analysis.
So! There are many things you should compare, within and between codes and themes, within sources and between sources, across and between groups of respondents (for example by role or demographics like gender or age) and in your own notes and memos. But with all these comparisons, note that it is a constant, continual process, with the aim to develop and write theory, not reduce the analysis to quantitative measures of difference.
Especially when using software tools (CAQDAS or QDAS), it can be easy to fall back on numerical counts of the number of 'incidents', codes or themes occurring in a source or group. However, this is not the right focus for a proper qualitative approach: the comparison should be of the qualitative data itself - reading and regrouping to create and challenge a constant stream of theory.
Quirkos is designed not to show quantitative summaries of the data by default, and has a specific query comparison mode to show the text side-by-side. Comparison is designed to be quick so that it can be used constantly, unlike other software that requires you to set up a complicated query you need to run and re-run. But it still allows you to set the parameters to compare codes, groups, individuals, or even between coders working on the same project. Quirkos Cloud also has real-time collaboration, so working as a team and constantly comparing your work is greatly simplified.
It has a free trial with no restrictions on the features, so that you can see if it will work for your qualitative approach, whether you end up using a form of constant comparative method or not! You can also watch some tutorials to see how it works with a variety of methods.
Boeije, H. 2002. A Purposeful Approach to the Constant Comparative Method in the Analysis of Qualitative Interviews. Quality & Quantity 36, pp. 391–409. https://doi.org/10.1023/A:1020909529486
Fram, S. M. 2013. The Constant Comparative Analysis Method Outside of Grounded Theory. TQR 18(1). https://files.eric.ed.gov/fulltext/EJ1004995.pdf
Glaser, B. G. 1965. The Constant Comparative Method of Qualitative Analysis. Social Problems 12(4), pp. 436-445.
Glaser, B. G. & Strauss, A. L. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. New York: Aldine De Gruyter.
Tesch, R. 1990. Qualitative Research: Analysis Types and Software Tools. New York: Falmer.