We live in a world of deep qualitative data.
It’s often said that we are quite quantitatively literate. We are exposed to numbers and statistics constantly: in news reports, at work, when driving, in fitness apps, and so on. So we are actually pretty good at understanding things like percentages and fractions, and at making sense of them quickly. That’s one reason people like to see graphs and numerical summaries of data in reports and presentations: they are a near-universal language that people can quickly understand.
But I believe we are also really good at qualitative understanding.
In a 2009 study, Bohn and Short estimated that “The average American consumes 100,500 words of information in a single day”, comprising conversations, TV shows, news, written articles, books and more. That is a staggering amount of qualitative data to be exposed to: basically a whole PhD thesis every single day!
Obviously, we don’t digest and process all of this; people are extremely good at filtering this data: ignoring adverts, skimming websites to find the articles we are interested in (and skimming those too), and of course summarising the gist of conversations in a few words and feelings. That’s why I argue that nearly all of us are qualitative experts, summarising and making connections with qualitative life all the time.
And those connections are the most important thing, and the skill that socially astute humans have honed so well. We can pick up on unspoken qualitative nuances when someone tells us something, and understand the context of a news article from who the author is and what is being reported. Words we hear such as ‘economy’ and ‘cancer’ and ‘earthquake’ are imbued with meaning for us, connecting to other things such as ‘my job’ and ‘fear’ and ‘buildings’.
This neural network of meaning is a key part of our qualitative understanding of the world, and whether or not we want to challenge these associations through some kind of Derridean deconstruction of the link between language and meaning, they form a key part of our daily prejudices and of how we understand the world in which we live.
For me, a key problem with qualitative analysis is that it struggles to preserve or record these connections and lived associations. I touched on this issue of reductionism in the last blog post on structuring unstructured qualitative data, but it can be considered a major weakness of qualitative analysis software. Essentially, one strips these connected meanings from the data and reduces them to a binary category, or at best represents them on a scale.
Incidentally, this debate about scaling and quantifying qualitative data has been going on for at least 70 years: Guttman, in a 1944 article, already noted that there had been ‘considerable discussion concerning the utility of such orderings’. What frustrates me at the moment is that while some qualitative analysis software can help scale this data, or even present it on a two- or three-dimensional scale by applying attributes such as weighting, it is still a crude approximation of the complex neural connections of meaning that deep qualitative data possesses.
In my experiments getting people with no formal qualitative or research experience to try qualitative analysis with Quirkos, I am always impressed by how quickly they take to it, and how soon they can start to code and assign meaning to qualitative text from articles or interviews. It’s something we do all the time, and most people don’t seem to have a problem categorising qualitative themes. However, many soon find the activity restrictive (just as trained researchers do) and worry about how well a basic category can represent some of the more complex meanings in the data.
Perhaps one day there will be practical computers and software that ape the neural networks that make us all such good qualitative beings, and can automatically understand qualitative connections. But until then, the best way of analysing data seems to be to tap into any one of these freely available neural networks (i.e. a person) and use their lived experience in a qualitative world in partnership with a simple software tool to summarise complex data for others to digest.
After all, whatever reports and articles we create will have to compete with the other 100,000 words our readers are consuming that day!