Most of the examples the blog has covered so far have been about using Quirkos for research, especially with interview and participant text sources. However, Quirkos can take any text source you can open on your computer, including text PDFs (but not scanned PDFs where each page is effectively a photograph). So why not use Quirkos like a reference manager, to sort and analyse a large cohort of articles and research? The advantage is that you can not only keep track of references, but also cross-reference their content: analysing common themes across all the articles.

There are two ways to manage this. First, you can set the standard information for each source/article you import, such as author, year, journal, etc. If you format these as you wish them to appear in the reference (by putting the commas and dots in the value), and order them with the Properties and Value editor, you can create reports that churn out the references in whichever notation you need, such as Harvard or APA. But you can also add any extra values you like at an article level, so you could rank articles out of 10, have a comment property, or categorise them by methodology. This way, you can quickly see only text from articles rated as 8/10 or above, or everything with a sample size between 50 and 100: whatever information you categorise.
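To make the idea concrete, here is a minimal sketch (not Quirkos's actual interface or API, just a conceptual illustration) of what article-level properties look like as data, and how the two filters mentioned above would work. All names and values are hypothetical:

```python
# Hypothetical article records with the kinds of properties described above:
# standard bibliographic fields plus custom ones (rating, sample size).
articles = [
    {"author": "Smith, J.", "year": 2014, "rating": 9, "sample_size": 60},
    {"author": "Jones, A.", "year": 2015, "rating": 7, "sample_size": 120},
    {"author": "Lee, K.",   "year": 2016, "rating": 8, "sample_size": 75},
]

# Only articles rated 8/10 or above.
top_rated = [a for a in articles if a["rating"] >= 8]

# Only articles with a sample size between 50 and 100.
mid_sample = [a for a in articles if 50 <= a["sample_size"] <= 100]

print([a["author"] for a in top_rated])
print([a["author"] for a in mid_sample])
```

In Quirkos these filters are applied through the interface rather than code, but the principle is the same: each property you record becomes another axis you can slice your reading list along.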

Secondly, you can categorise text in the articles using Quirk bubbles. As you read through the articles, code sections in any way that is of interest to you: highlight sections on the methodology, bits you aren't convinced by, or other references you want to check out. Highlight findings and conclusion sections (or just the interesting parts of them), and with the properties you can quickly look at all the findings from papers using a particular approach, and compare and contrast them. It's obviously quite a bit of work to code all your articles, but since you would have to read through all the papers anyway, making your notes digital and searchable in this way makes it much quicker and more flexible when pulling it all together.

With qualitative synthesis you can combine multiple pieces of research, and see whether there are common themes, or contradictions. Say you have found three articles on parenting, but they are each from a different minority ethnic community. Code them in Quirkos, and in a click you can see all the problems people are having with schools across all groups, or whether one community describes more serious issues than another.

Evidence synthesis and systematic reviews like this are often, and quite rightly, mandated by funders and departments before commissioning a new piece of research, to make sure that the research questions add meaningfully to the existing canon. However, it's also worth noting that, especially with qualitative synthesis drawn from published articles, relying only on the quotations left in the final paper introduces a publication bias: most of the data set is hidden from secondary researchers. Imagine you are looking at schooling and parenting, but are taking data from an article on the difficulties of parenting: it's possible that the researchers did not include quotations on the good aspects of school, as these were outside the article's focus. If possible, it's always worth getting the full data set, but this can often throw up data protection and ethical issues. There's no simple answer to these problems, except to make sure readers are aware of your sources, and to anticipate the likely limitations of your approach. Often with qualitative research, it feels like reflexivity and disclaimers go hand in hand!

Tags: qualitative, synthesis, systematic, reviews, evidence