Quantitative vs. qualitative research



So this much is obvious: quantitative research uses numbers and statistics to draw conclusions about large populations. You count something that is countable, and process results across the sample.

 

Qualitative methods are harder to pin down, but in general they revolve around collecting data from people about an experience. This could be how they used a service or how they felt about something, and could be verbal or written. It is generally speech or talk, albeit with various levels of meaning inferred above and below the surface (whether participants are being truthful, or whether what they say has deeper or hidden meaning). Rather than applying a statistical test to the data, a qualitative researcher must read or listen to the data and interpret what is being discussed, often hoping to discover patterns or contradictions.

 

Interpretation happens in both approaches: quantitative results are still examined in context (often compared with other numbers and data), and given a measure of significance such as a p-value or r-squared to assess whether the results, or part of them, are meaningful.

 

Finally, a quantitative approach is generally seen as belonging to a positivist paradigm, while qualitative methods fit better with constructionist or pragmatic paradigms (Savin-Baden and Major 2013). However, both are essentially attempting to model and sample something about the world in which we live so that we can simplify and understand it. And it's not a case of one being better than the other: just as a hammer can't be used to turn screws, or a screwdriver to hammer in nails, the different methods have different uses. Researchers should always make sure that the question comes first, and that it is used to choose the methodology.

 

But you should also ask: is there a quantitative way to measure what your question is asking? Sometimes there is, if it's something as simple as numbers of people, or a quantifiable aspect like salary. There are also quantitative measures of things like anxiety or pain that can be used as proxies to make inferences across a large population. However, for a detailed understanding of these issues and how they affect people, such metrics can be crude, and don't get to the detail of the lived experience.

 

However, choosing the right approach also depends on how much is known about the research question and topic area. If you don't know what the problems are in a field, you don't know what questions to ask, or how to record the answers.

 

I would argue that even in the physical sciences, qualitative research comes first, and sets questions to answer with quantitative methods. Quantitative research projects usually grow from qualitative observations of the physical world, such as 'I can see that ice seems to melt when it gets warm. At what temperature does ice melt?', or from qualitative exploration of the existing literature to find things in other research that are surprising or unexplained.

 

In the classic high-school science experiment above, you would quantitatively measure the melting point of ice by taking a sample. You don't try to melt all the ice in the world: you take one piece, and assume that other ice behaves in the same way. In both quantitative and qualitative research, sampling correctly is important. Just as taking only one small piece of impure ice will give you skewed results, so will sampling only one small part of a human population.

 

In quantitative research, because you are usually only sampling for one question at a time (i.e. temperature), it's best to have a large sample size. Especially when dealing with naturally variable, unrestricted quantities (such as a person's height), the data will tend to form a bell curve, with a large majority of the answers in the middle and a small number of outliers at either end. If we were sampling ice to melt, we might find that most ice melts at around the same temperature, but that very pure or dirty ice differs slightly. We would take the answer to be the statistical average: the mean, found by adding up all the results and dividing by the sample size.
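As a toy sketch of that averaging step, here is the calculation in Python. The melting-point readings are invented, with one 'impure' outlier thrown in:

```python
from statistics import mean, stdev

# Invented readings from ten samples of ice, in degrees C; one impure
# sample melts noticeably below the true value of 0
melting_points = [0.0, 0.1, -0.1, 0.0, 0.2, 0.0, -0.2, 0.1, 0.0, -0.5]

# The mean: add up all the results and divide by the sample size
average = sum(melting_points) / len(melting_points)

# statistics.mean does the same calculation, and stdev gives a sense
# of how spread out the readings are around that average
print(round(mean(melting_points), 2), round(stdev(melting_points), 3))
```

Notice that a single impure sample pulls the average below zero, which is exactly why a larger, well-chosen sample gives a more trustworthy answer.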

 

You could argue that the same is true for qualitative research. If you are asking people about their favourite ice cream, you'll get a better answer by asking a large number of people, right? Well, this might not always be true. Firstly, just as with the ice melting experiment, sampling every piece of ice in the world will not add much more accuracy, but it will be a lot more work. And with qualitative research, you are generally asking a much more complicated question of each person sampled, so the work grows rapidly as your sample size increases.

 


As your qualitative data grows, Quirkos can help you manage and make sense of it...

 

Remember, it's rare that qualitative research aims to give one definitive answer: it's more exploratory, and interested in the outlier cases just as much as the common ones. So in answer to our qualitative research question 'What is your favourite ice cream?', people may talk about gelato, sorbet or iced coffee. Are these really ice cream? One could argue that technically they are not, but if people consider them to be ice cream, and we want to know what to sell for dessert at our restaurant, this becomes relevant. As a result of qualitative research, we usually learn to ask better questions: 'What is your favourite frozen dessert?' might be a better one.

 

Now our qualitative research has helped us create a good piece of quantitative research. We can do a survey with a large sample size, and ask the question 'What is your favourite frozen dessert?' and give a list of options which are the most common answers from our qualitative research.

 

However, there can still be flaws with this approach. When answering a survey, people don't always say what they mean, and you lose the context of their answers. In surveys there is a primacy effect: people are lazy, and much more likely to tick the first answer in a list. In this case, the richness of our qualitative answers is lost. We don't know what context people are talking about (while walking along a beach, in a restaurant, or at home?), we lose the direct contact with the respondent that lets us tell if they are lying or being sarcastic, and we can't ask follow-up questions.

 

That's why qualitative research can still be useful as part of, or following, quantitative research: for discovering 'why', and understanding the results in the richness of lived experience. Often research projects will have a qualitative component, taking a subset of the larger quantitative study and gaining in-depth qualitative insight from them.

 

There's no shame in using a mixed methods approach if it is the most appropriate for the subject area. While there is often criticism of studies that 'tack on' a small qualitative component, and don't properly integrate or triangulate the two types of results, this is an implementation rather than a paradigm problem. But remember, it's not a case of one approach vs another: there are no inherently good or bad approaches. Methods should be appropriate to each task and question, and should be servants to the researcher, not ruling them (Silverman 2013).

 

Quirkos is about as close to a pure qualitative software package as you can find. It's quick to learn, visual, and keeps you close to the data. Our focus is on doing qualitative coding and analysis well, not on attempting statistical analysis of qualitative data. We believe that for most qualitative researchers that's the right methodological approach. However, there is capacity to allow some mixed-method analysis, so that you can filter results by demographic or other data.

 

The best way to see if Quirkos works for you is to give it a go! Download our one month free trial of the full version with no restrictions, and see if Quirkos works for your research paradigm.

 

 

The importance of the new qualitative data exchange standard


 

Last week, a group of software developers from ATLAS.ti, f4analyse, Nvivo (QSR), Transana, QDA Miner (Provalis) and Quirkos were in Montreal for the third international meeting on the creation of a common file format for exchanging qualitative data projects. The initiative is also supported by Dedoose and MAXQDA, which means that all the major qualitative data analysis software (QDAS) providers have agreed to support a standard that will allow researchers to bring data across any existing QDAS platform.

 

This work has been almost two years in the making already, and the first part of the standard was announced last week – a 'codebook' exchange file, which lets users share their coding framework, i.e. the list of codes/nodes/themes/Quirks that you use in your project. This is already pretty useful if you have developed a long or standardised coding framework for analysis, and want to use it in another project in a different qualitative analysis software package.
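To give a feel for what a codebook exchange file might contain, here is a hypothetical sketch in Python. The element and attribute names below are invented for illustration only – they are not the actual QDA-XML schema, which is defined by the initiative itself:

```python
import xml.etree.ElementTree as ET

# Hypothetical structure, for illustration only: the real QDA-XML
# codebook schema defines its own element and attribute names.
codebook = ET.Element("CodeBook")
codes = ET.SubElement(codebook, "Codes")
for name, colour in [("Anxiety", "#CC0000"),
                     ("Coping strategies", "#0066CC"),
                     ("Family support", "#009933")]:
    # Each code carries a human-readable name and a display colour
    ET.SubElement(codes, "Code", name=name, color=colour)

xml_text = ET.tostring(codebook, encoding="unicode")
print(xml_text)
```

The point is simply that a plain, documented XML file like this can be read and written by any package, or even by a researcher's own script, rather than being locked inside a proprietary binary project file.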

 

However, this is really the tip of the iceberg. It is hoped that by early next year, the full standard will be complete and released, allowing much more complete projects (including text and multimedia sources and coding) to be exchanged between whatever software packages you like. The official page, qdasoftware.org (currently redirecting to http://web.ato.uqam.ca/developpements/formats_echange/QDAS-XML), lists more details of the aims and format of the exchange initiative, but it's necessarily technical. I'd like here to briefly discuss why I think this is the most important piece of news in the last 20 years for qualitative research.

 

Analysing and coding qualitative data is extremely time consuming, even when using software to help. It can also be mentally and emotionally draining, and the idea of having to redo this work is hard for most researchers to swallow: it would be like trying to rewrite a novel from scratch – for many large qualitative projects, it is probably a similar amount of work.

 

And until now, there were very few options for moving a project from one piece of software to another. Imagine if, after writing your novel in Word, you couldn't share it with the public, or even your editor, because they were using a different software package. While some QDAS packages allow limited import and export of certain features from certain other packages, this can be tortuous, and is usually impossible. For example, MAXQDA currently seems to be able to import projects from NVivo 8 or 9 (but not the more recent versions 10, 11 or 12), and only by installing MS SQL Server 2008, and only on Windows. You can't save your work back again, and every time there is a new version of the software, this procedure has to change (or, as in this example, gets stuck at an older version), and your data might get trapped.

 

If you move to a different university that subscribes to a different tool from the one where you worked or studied before, you can't access your data. You can't work with someone who has a licence for a different qualitative software package, because you probably can't share your data projects. In the past this has limited cross-institutional research projects I've been part of. And if you've done most of your work in one package, but want to use one cool feature in another, you are out of luck.

 

Qualitative analysis software is expensive, and the university departments which buy it usually only let you have one package at a time. And woe betide you if someone high up decides they aren't buying, say, ATLAS.ti anymore, and you all have to use MAXQDA: all your previous work is probably inaccessible, or can only be restored through painstaking procedures of recollecting and redoing everything you had done in your former software.

 

And even if you finished that previous project, the richness of qualitative data means that there are often many different things that could be read from the same set of sources. For example, a project that interviewed people about job prospects and training might also have interesting data about people’s self-esteem and identity through their career. The current situation where data is trapped in a single, proprietary format really limits potential for revisiting analysis again in the future.

 

So that's the internal problem for qualitative researchers. But the impact on wider society is far greater.

 

In theory, when writing a research article for publication, the editor or reviewers can ask to examine any of the data for the project, checking for bias or errors in statistical interpretation. But for qualitative research this is made much more difficult by the large number of different formats the data might be in. I feel this has led to some of the accusations of bias and lack of replicability in qualitative research. It's really hard to see someone's analysis process, even for someone reviewing your article for publication – and that scrutiny is the fundamental basis of trust in scientific publishing.

 

This links into problems with data archiving. Making an anonymised version of your data publicly available is increasingly a requirement of publicly funded research. Some of this is possible, since the raw data will likely be transcripts in a common text format. But the working out – the coding and details of your analysis and conclusions – may be in, for example, a .nvp (NVivo project) file or similar. And if you don't have that exact version of the software, or work on a Mac, you can't open that file. Again, the rapid changing of these file formats does not create much future-proofing: ten years from now there may be no software that can open your old project.

 

This means that data archives of qualitative data are currently of limited use, since they don’t have coded data, or it is shared in a proprietary format that most people can’t open. There is no free ‘reader’ app for most of these proprietary project files.

 


So why has this happened, and taken so long to fix?

 

Firstly, there are commercial arguments – it seems to make business sense to lock users to a particular software package, as this makes them less likely to switch to a rival. I'm not sure how big a consideration this actually is, but it's a common practice across many industries. Personally, I am always surprised by the fantastic level of camaraderie between the 'rival' software developers in the meetings about creating the exchange format – we are all there for the users (many are qualitative researchers themselves).

 

Secondly, it is very hard to develop these open standards, and this was not the first attempt: see, for example, the UK DExT format. There have been several such proposals and specifications published previously, but none of them attracted support from more than one developer. Getting that cross-developer support is obviously crucial to adoption; otherwise you just add new complexity and uncertainty to the field:


xkcd: 'Standards' (https://xkcd.com/927/)

 

And this is why I think this QDA-XML exchange format is going to succeed. A great and independent committee, led by Jeanine Evers from KWALON and Erasmus University Rotterdam, has managed to get signed commitments of support from all the major qualitative software developers, and nearly all of them have been working on the standard for the last two years.

 

There is likely to be good support since decisions made about the format have been negotiated (often at great length) between all the contributing members. Participants in the meetings have a good idea of what their software can and can’t do, and the best way to implement it. It has been an often painful process of compromise for this first version, as many software packages have unique features.

 

So that is the one caveat – this format will not be 100% comprehensive. A particularly pretty output graph you crafted in one software package can't be shown in another in the same way, and certain ways of working that are unique to one package will be lost in translation.

 

But I think that for most users the format will allow them to transfer and preserve 90% of their work, and certainly all the basics: codes and coding, sources and metadata, groupings and categories, notes and memos. These things won't look exactly the same in all packages (for example, Quirkos supports 16 million colours for codes, while some packages don't support colours at all). However, the important parts of your data and analysis will come through, allowing for greater flexibility and more opportunities for sharing, archiving and secondary analysis. To me, this opens the door to a fundamentally better understanding of the world.

 

An open, liberally licenced (MIT) standard means that anyone can support it, so it is not limited to the current developers: it is very much a forward-looking initiative. While I suspect it will be some time in 2019 before full support appears in releases of your favourite qualitative analysis software (CAQDAS), I think the promise of an open standard is nearer to being delivered than ever before, and that it will fundamentally change the world of qualitative research for the better.