Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or 'a priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding the researcher takes a more bottom-up approach, starting with the data and a blank sheet, noting themes as they read through the text.


Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an emergent approach to be completely naïve about the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.


I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is another dimension of qualitative coding that can also be top-down or bottom-up: the level of coding. A low 'level' of coding might be a set of simple, purely descriptive themes, such as 'happy' or 'sad', or 'apple', 'banana' and 'orange'. These are sometimes called manifest-level codes. A higher level of coding might be something more like 'issues from childhood', 'fruit', or even 'things that can be juggled'. Here more meaning has been imposed on the data, which is sometimes referred to as latent-level analysis.


Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different depending on whether a top-down or bottom-up approach is used to build the levels of coding. In the top-down model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute each statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning over successive readings.
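To make the bottom-up procedure concrete, here is a minimal sketch in Python of how manifest-level codes might be grouped under latent-level themes in a later pass. The codes, themes, and the `roll_up` helper are all hypothetical, invented purely for illustration; real qualitative software handles this far more flexibly.

```python
# A minimal sketch of bottom-up (inductive) code grouping.
# All codes, segments and theme mappings below are hypothetical.
from collections import Counter

# Pass 1: simple manifest-level codes applied to text segments.
coded_segments = [
    ("I love a banana with breakfast", ["banana"]),
    ("An apple a day...", ["apple"]),
    ("Oranges remind me of holidays", ["orange", "happy"]),
    ("I felt sad after the diagnosis", ["sad"]),
]

# Pass 2 (a later reading): group manifest codes under
# higher-level, latent themes inferred from the patterns.
theme_map = {
    "apple": "fruit",
    "banana": "fruit",
    "orange": "fruit",
    "happy": "emotion",
    "sad": "emotion",
}

def roll_up(segments, mapping):
    """Count how often each higher-level theme appears across segments."""
    counts = Counter()
    for _text, codes in segments:
        for code in codes:
            counts[mapping[code]] += 1
    return counts

print(roll_up(coded_segments, theme_map))
# Counter({'fruit': 3, 'emotion': 2})
```

A top-down pass would run in the opposite direction: start from `theme_map`'s broad themes and look for segments that support or refute them.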


So which approach is best? Obviously, it depends: not just on how well the topic area is understood, but also on the experience and engagement of the particular researcher. Complementary methods can be useful here: the PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (both in choosing a framework and in building levels of coding) to test their assumptions. Meanwhile, a researcher who is new to the project or field is in a good position to start from the bottom up, seeing if they can answer the research questions from basic observations as they emerge from the text. If the two analyses independently converge on the same themes and conclusions, it is a good indication that the inferences are well supported by the text!


Why qualitative research?

There are lies, damn lies, and statistics

It’s easy to knock statistics for being misleading, or even misused to support spurious findings. In fact, there seems to be a growing backlash against the automatic way that significance tests in scientific papers are assumed to be the basis for proving findings (an article neatly rebutted here in the aptly named post “Give p a chance!”). However, I think most of the time statistics are actually undervalued: they are extremely good at conveying succinct summaries of large numbers of things. That’s not to say there isn’t room for more public literacy about statistics, a charge that can be levelled at many academic researchers too.

But there is a clear limit to how far statistics can take us, especially when dealing with complex and messy social issues. These are often the result of intricately entangled factors, decided by fickle and seemingly irrational human beings. Statistics can give you an overview of what is happening, but they can’t tell you why. To really understand the behaviour and decisions of an individual, or a group of actors, we need in-depth knowledge: one data point in a distribution isn’t going to give us enough explanatory power.

Sometimes, to understand a public health issue like obesity, we need to know about everything from the supermarket psychology that promotes unhealthy food, to how childhood depression can be linked with obesity. When done well, qualitative research allows us to look across societal and personal factors, integrating individuals’ stories into a social narrative that can explain important issues.

To do this, we can observe the behaviour of people in a supermarket, or interview people about their lives. But one of the key features of some qualitative research is that we don’t always know what we are looking for. If we go into a supermarket with the explicit idea that watching shoppers will prove that two-for-one offers are causing obesity, we might miss other issues: the shelf placement of junk food, or the high cost of fresh vegetables. In the same way, if we interview someone with set questions about childhood depression, we might miss factors like the time needed for food preparation, or cuts to welfare benefits.

This open-ended, inductive analytical approach, sometimes called ‘semi-structured’, is one of the most difficult but most powerful methods of qualitative research. Collecting data first, and then using grounded theory in the analytic phase to discover underlying themes from which hypotheses can be built, sometimes seems like backward thinking. But when you don’t know what the right questions are, it’s difficult to find the right answers.

More on all this soon…