Quirkos for Linux!


We are excited to announce official Quirkos support for Linux! This is something we have been working on for some time, encouraged by strong user demand for this Free and Open Source (FOSS) platform. Quirkos on Linux is identical to the Windows and Mac versions, with the same graphical interface, feature set and file format, so there are no issues working across platforms.


Currently we are only offering a script-based installer, which can be downloaded from the main download page. In the future we may offer packaged deb or rpm downloads, but for the moment there are two practical reasons this is not feasible. First, it is much easier for us to provide one installer that should work on all distributions, regardless of which package manager is used. Secondly, Quirkos is built using the latest version of Qt (5.5), which is not yet available in most stable distributions. This would either lead to dependency hell, or to users having to install the Qt 5.5 libraries manually (which take up a lot of space, and are themselves distributed as a script-based installer). However, we will revisit this in the future if there is sufficient demand.

 

Most dependencies can be satisfied by installing Qt5 from your distribution's repositories, although most KDE desktops will already have many of the required packages.
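
On Debian- or Ubuntu-based systems, for example, something along these lines should pull in the main Qt5 libraries (a sketch only – package names vary between distributions and releases):

  sudo apt-get install qt5-default   # core Qt5 libraries on Debian/Ubuntu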

 

Once downloaded, you must make the installer file executable. There are two ways to do this: either run “chmod +x quirkos-1.3-linux-installer.run” from a shell in the directory containing the installer, or, as a GUI-based alternative in GNOME, right-click on the file in the Nautilus file browser, open Properties, select the Permissions tab, and tick the 'Allow executing file as program' box.


Once you've done this, either double-click on the file, or run it in the bash terminal with “./quirkos-1.3-linux-installer.run”. Of course, if you want to install to a system-wide folder (such as /opt) you should run the installer with root permissions. By default Quirkos will install in the user's home folder, although this can be changed during the install process. An uninstaller is also created, but all files are contained in the root Quirkos folder, so deleting that folder will remove everything from your system. After installing, a shortcut will be created on the desktop (on Ubuntu systems) which can be used to run Quirkos, and dragging the icon to the Unity side-bar will keep the launcher in an accessible place. Otherwise, run the Quirkos.sh file in the Quirkos folder to start the application.
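
Put together, a typical terminal install looks something like this (a sketch only: the ~/Downloads location and the ~/Quirkos default install folder are assumptions, and the filename will vary with the version):

  cd ~/Downloads                             # directory containing the downloaded installer
  chmod +x quirkos-1.3-linux-installer.run   # make the installer executable
  ./quirkos-1.3-linux-installer.run          # run the installer (prefix with sudo for a system-wide install)
  ~/Quirkos/Quirkos.sh                       # start Quirkos from the default install location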


If you are looking for FOSS software for qualitative research, try RQDA, an extension for the versatile R statistical package (itself an open source alternative to SPSS). There is also Weft QDA, although this doesn't seem to have been updated since 2006. It's worth noting that both have fairly obtuse interfaces, and are not well suited to beginners!


We have tested Quirkos on numerous different systems, but obviously we can't check every configuration. So if you have any problems or issues, PLEASE let us know: this is new ground for us, and indeed the first 'mainstream' qualitative analysis software to be offered for Linux. In fact, tell us if it all works fine as well – the more we hear people are using Quirkos on Linux, the better!

 

 

Quirkos 1.3 is released!


We are proud to announce a significant update for Quirkos that adds new features, improves performance, and provides a fresh new look. Major changes include:

  • PDF import
  • Greater ability to work with Levels to group and explore themes
  • Improved performance when working with large projects
  • New report generation and styling
  • Ability to copy and paste quotes directly from search and hierarchy views
  • Improved CSV export
  • New tree-hierarchy view for Quirks
  • Numerous bug fixes
  • Cleaner visual look

 

We’ve made a few tweaks to the way Quirkos looks, tidying up dialogue boxes and improving the general style and visibility, but maintaining the same layout, so there is nothing out of place for experienced users.

 


There is once again no change to Quirkos project files, so all versions of Quirkos can talk to each other with no issues, and there is no need to do anything to your files – just keep working with your qualitative data. The update is free for all paid users, and simple to install: just download the latest version, install it to the same directory as the last release, and the new version will replace the old. There is no need to update the licence code, and we recommend that all users move to the new version as soon as they can to take advantage of the improvements!

 


Lots of people have requested PDF support, so that users can add journal articles and PDF reports into Quirkos, and we are happy to say this is now enabled. Please note that at the moment PDF support is limited to text only – some PDF files, especially those scanned in from older journals, are not actually stored as text, but as a scanned image of text. Quirkos can’t read the text from these PDFs, and you will usually need to use OCR (optical character recognition) software to convert them (included in professional editions of Adobe Acrobat, for example).
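
One free route, as a sketch (assuming the open source ocrmypdf tool is installed, and with example filenames), is to add a recognised text layer to a scanned PDF from the command line before importing it:

  ocrmypdf scanned-article.pdf searchable-article.pdf   # runs OCR and embeds a selectable text layer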

 


We have always supported ‘Levels’ in Quirkos: a way to group Quirks that works across hierarchical groupings and parent-child relationships. Many people wanted to work with categories in this way, so we have improved the ways you can work with levels. They are now a refinable category in search results and queries, allowing you to generate a report containing data refined by level, and giving you a whole extra dimension for grouping your qualitative themes.

 


Reports have been completely revamped to improve how you share qualitative data, with better images and a simpler layout. There are now many more options for showing the properties belonging to each quote, streamlined and grouped section headings, better display of hierarchical groupings, and a much more polished, professional look. As always, our reports can be shared as PDF or interactive HTML, or customised using basic CSS and JavaScript.

 


Although the canvas view with distinctive topic bubbles is a distinguishing feature in Quirkos, we know some people prefer to work with a more traditional tree hierarchy view. We’ve taken on board a lot of feedback, and reworked the ‘luggage label’ view to a tree structure, so that it works better with large numbers of nodes. The hierarchy of grouped codes in this view has also been made clearer.

 


There are also numerous bug fixes and performance improvements, fixing some issues with activation, improving the speed when working with large sources, and some dialogue improvements to the properties editor on OS X.

 

We are also excited to launch our first release for Linux! Just as on the other platforms, the functionality, interface and project files are identical, so you can work across platforms with ease. There will be a separate blog post about Quirkos on Linux tomorrow.

 


We are really excited about the improvements in the new version, so download it today, and let us know if you have any other suggestions or feedback. Nearly all of the features we have added have come from suggestions made by users, so keep giving us your feedback, and we will try and add your dream features to the next version...

 

 

Bing Pulse and data collection for market research


Judging by the buzz and article sharing last week, there was a lot of interest in, and worry about, Microsoft launching their own market research platform. Branded as part of ‘Bing’, their offering, called ‘Pulse’, has actually been around for a while, and is still geared around collecting feedback from live events, especially political discussions.


I can see why this move might have a lot of companies worried: the market research arena is crowded with start-ups and established firms offering platforms, or ‘communities’, for collecting participant data. There’s LiveMinds, Aha! and VisionLive, and a quick search will bring up dozens of competitors. So an entry into the market from an organisation with deep pockets and brand awareness like Microsoft may well have many watching to see how this develops. However, based on my own limited time with Pulse, I don’t think there is much to worry about yet.


First of all, Pulse is currently entirely focused on one niche: feedback on live events. There are no tools for anything like advert or creative validation, no proper survey tools, and no interactive online focus groups. The MO is very much quantitatively focused, with very little option to capture qualitative feedback at this time. Secondly, it seems to have a lot of limitations, and in this beta state, almost no documentation.


I quickly got stuck trying to create a real-time voting question, with a mandatory box for ‘response theme’ that was greyed out, but wouldn’t let me continue without being completed. The ‘help’ tools just link to a generic Bing help website, which doesn’t contain any content about Pulse. The layout is a little confusing, getting you stuck in a strange loop between the ‘Live Dashboard’ and ‘Pulse Options’, and it’s also slow: get used to seeing the little flapping loading logo after every action.

 

As for integration, the only option at the moment seems to be the API, which only has four available calls. There doesn’t seem to be any way to get results (especially those not covered by those API calls) out of the platform: I can’t see any CSV export or the like. Also, considering the powerful analytic options available through the Azure platform, it’s disappointing not to see any easy integration there. In short, far from being a quick DIY solution, you will need someone to program yet another API into your platform to do anything more than look at a few graphs on the Pulse site.

 

I want to stress that this was hardly a detailed review and test of the capabilities of the platform; my opinions are based on just an hour or so of playing with it. However, it is nice to be able to try it out with just a registration: personally, I don’t like products where the demo is locked away and difficult to try out. It’s a competitive market, and I feel more inclined to trust software that the developers aren’t shy of showing off!

 

Now, I understand that most market research providers are not so much worried about the current feature set of Pulse as about what this entry into the field means for the future, especially for a product that Microsoft is content to offer for free at this time. But I would echo some of the comments made in the Greenbook article by Leonard Murphy: it usually doesn’t make sense for market research firms to do their own quantitative data collection. The future, he says, is integrating with data collection tools and adding value in terms of insight, custom development and consultation.


And that is the crux with all these market research platforms: they are primarily data collection tools, with limited analytics. Pulse doesn’t seem to have anything on this front at the moment, but with too many of these solutions, the insight stops with a couple of graphs or statistics. I feel there is still a need to integrate with another tool, or draw on extensive market research analytic experience, to make anything from the data once it has been collected. It may be that most clients don’t expect or require any kind of rigour in the breakdown of project results, especially when it comes to qualitative data. I have yet to see anything that looks to me like a true end-to-end platform for market research, but am willing to be proved wrong!

 

At the moment, there are some great and flexible tools for collecting customer data online, be it quantitative or qualitative. But these are ubiquitous, and very cheap to run – we host an online survey platform for our customers for free, just as a convenience. Yet getting to answers and insight from that data usually requires an additional analytical step, especially for qualitative research. As I’ve said before, the most difficult step is understanding the data and how you integrate analytics into your workflow. Increasingly, the data collection platform you choose, and how much you pay for it, will not be the issue.

 

 

What can CAQDAS do for you? The Five-Level QDA


I briefly mentioned in my last blog post an interesting new article by Silver and Woolf (2015) on teaching QDA (Qualitative Data Analysis) and CAQDAS (Computer Assisted Qualitative Data AnalysiS). It’s a great article, not only because it draws on more than 20 years of combined pedagogical experience, but also because it suggests a new way to guide students through using software for qualitative analysis.

 

The basis of the strategy is the ‘Five-Level QDA’ approach, which essentially aims to get students to stop and think about how they want to do their qualitative analysis before they dive head-first into learning a particular CAQDAS package. Users are guided through a five-step tool that I would paraphrase as:

 

  1. Stating the analysis/research objectives
  2. Devising an analytic plan
  3. Identifying matches between the plan and available tools
  4. Selecting which operations to do in which tools
  5. Creating a workflow of tools and software to meet all the aims above


For more detail, it’s worth checking out the full article which includes example worksheets, and there is also a textbook due out covering the approach in more depth. It’s also interesting to see how they describe the development of their pedagogical approach in the last decade or so.

 

The Five-Level method is designed to be delivered remotely as well as in workshops, but starts out as a software-agnostic approach, drawing on the experience of the trainers to choose the best approach for each researcher and research project. Based around Analytic Planning Worksheets, its main aim seems to be to get people to step back and think about their needs before learning the software.

 

This is often badly needed. Firstly, many people (especially new researchers) don’t make a detailed analytical plan when designing their project or research questions. In qualitative research, this is not always a disaster: often one source of investigation ends up being much richer than anticipated, and the variety of methods used means that data doesn’t always look as we thought it would when we started (for better or worse).

 

However, there are also some very practical limitations which lead to people learning a CAQDAS package before they start their research journey. Often training courses are only offered once a semester (or year), so you need to take advantage of them when you can. While ideally there would be an iterative process between learning the capabilities and refining the analytical strategy, in the timescale of one project this is not always feasible. Often people have learnt so much from their first qualitative project that their analytical approach is completely different the next time.

 

The other issue is what CAQDAS is actually available: often a department or school will have a licence for just one (or maybe two) packages, and understandably, the training offered will often focus on those. There are also practical limitations when working as part of a team (especially with people in a different institution) where not everyone has access to the same software. This affects the approach a lot, because it’s important to choose a workflow that everyone can participate in. I’ve been involved in projects where we ended up using Excel for analysis, because everyone had access to, and familiarity with, spreadsheet software.

 

I think that this is the kind of consideration that the Silver and Woolf worksheets are trying to tease out, and their examples illustrate how, by step 5, people have chosen a number of tools that they can use together to answer their research questions.

 

As a final point, I think that RTP (Research Training Programme) courses offered for post-grad students sometimes leave a lot to be desired on this front. Even those that are specifically on qualitative methodologies (where available) tend in my experience to have little more than a slide on the whole analysis process, and sometimes just a bullet point on software! I spend a lot of time talking to people about CAQDAS, and I am always surprised at how few people have even heard of NVivo, let alone the dozen alternative packages that I could name off the top of my head. Yet each has its own strengths and weaknesses: Transana is great for video, the Provalis products for stats geeks, MaxQDA a friendly all-rounder, Dedoose for working remotely, and Quirkos obviously for beginners.

 

But it’s a chicken-and-egg problem – to know which software is best, you need to know what you can do with it. That is why it can help so much to draw on the experience of CAQDAS trainers: not just to go on a course and learn one package, but to go with an open mind and a research question, and let them suggest the best combination for each approach. In short, ask not what your CAQDAS can do for you, ask what you want to do with your CAQDAS!

 

Update (14/8/15):

 

Christian Schmieder has written a response to this blog post and the Five-Level QDA, describing how it links into his curriculum for qualitative question generation using CAQDAS.

 

 

The CAQDAS jigsaw: integrating with workflows

 

I’m increasingly seeing qualitative research software as the middle piece of a jigsaw puzzle that has three stages: collection, coding/exploring, and communication. These steps are not always clear cut, and generally there should be a fluid link between them. But the process of enacting these steps is often quite distinct, and the more I think about the ‘typical’ workflow for qualitative analysis, the more I see these stages and, most critically, a need to be flexible and allow people different ways of working.

 

At any stage it’s important to choose the best tools (and approach) for the job. For qualitative analysis, people have so many different approaches and needs, that it’s impossible to impose a ‘one-size-fits-all’ approach. Some people might be working with multimedia sources, have anything from 3 to 300 sources, and be using various different methodological and conceptual approaches. On top of all this are the more mundane, but important practical limitations, such as time, familiarity with computers, and what software packages their institution makes available to them.

 

But my contention is that the best way to facilitate a workflow is not to be a jack-of-all-trades, but a master of one. CAQDAS (Computer Assisted Qualitative Data AnalysiS) software should focus on what it does best, aiding the analysis process, and recognise that it has to co-exist with many other software packages.

 

For the first stage, collection and transcription of data, I would generally not recommend any CAQDAS package. Interviews are best recorded on a dictaphone, and transcribing them is best done in proper word-processing software. While it’s technically possible to type directly into nearly all CAQDAS tools (including Quirkos), why would you? Nearly everyone has access to Word or LibreOffice, which give excellent spell-checking tools for typos, and much more control over saving and formatting each source. Even if you are working with multimedia data, you are probably going to trim audio recordings in Audacity (or Pro Tools), and resize and colour-correct pictures in Photoshop.

 

So I think that qualitative analysis software needs to recognise this, and take in data from as many different sources as possible, and not try and tie people to one format or platform. It’s great to have tight integration with something like Evernote or SurveyMonkey, but both of those cost extra, and aren’t always the right approach for people, so it’s better to be input-agnostic.

 

But once you’ve got data in, it’s stage 2 where qualitative software shines. CAQDAS software is dedicated to the coding and sorting of qualitative data, and has tools and interfaces specifically designed to make this part of the process easier and quicker. However, that’s not how everyone wants to work. Some people are working in teams where not everyone has access to CAQDAS, and others prefer to use Word and Excel to sort and code data. That should be OK too, because for most people the comfortable and familiar way is the easiest path, and what it’s easy to forget as a software developer is that people want to focus on the data and findings, not the tools and process.

 

So CAQDAS should ideally be able to bring in data coded in other ways, for people that prefer to just do the visualisation and exploration in qualitative software. But CAQDAS should also be able to export coded data at this stage, so that people can play with the data in other ways. Some people want to do statistical analysis, so it should connect with SPSS or R. And it should also be able to work with spreadsheet software, because so many people are familiar with it, and it can be used to make very specific graphs.

 

Again, it’s possible to do all of this in most CAQDAS software, but I’ve yet to see any package that gives the statistical possibilities and rigour that R does, and while graphs seem to get prettier with every new version, I still prefer the greater customisation and export options you get in Excel.

 

The final stage is sharing and communicating, and once again this should be flexible. Some people will have to get their findings across in a presentation, so need to generate images for this. Many will be writing up longer reports, so export options for getting quotes into word-processing software are essential again. At this stage you will (hopefully) be engaging with an ever-widening audience, so outputs need to be completely software agnostic so everyone can read them.

 

When you start seeing all the different tools that people use in the course of their research project, this concept of CAQDAS being a middle piece of the puzzle becomes really clear, and allowing people flexibility is really important. Developing CAQDAS software is a challenge, because everyone has slightly different needs. But the solution usually seems to be more ways in, and more ways out. That way people can rely on the software as little or as much as they like, and always find an easy way to integrate with all the tools in their workflow.

 

I was inspired to write this by reading a recent article on the Five-level QDA approach, written by Christine Silver and Nick Woolf. They outline a really strong ‘Analytic Planning Worksheet’ that is designed to get people to stop and break down their analytical tasks before they start coding, so that they can identify the best tools and process for each stage. This helps researchers create a customisable workflow for their projects, which they can use with trainers to identify which software is best for each step.

 

Next week, I’m going to write a blog post more specifically about the Five-level QDA, and pedagogical issues that the article raises about learning qualitative research software. Watch this space!

 

 

Using Quirkos for fun and (extremely nerdy) projects

This week, something completely different! A guest blog from our own Kristin Schroeder!

 

Most of our blog is a serious and (hopefully) useful exploration of current topics in qualitative research and how to use Quirkos to help you with your research. However we thought it might be fun to share something a little different.


I first encountered qualitative research in a serious manner when I joined Quirkos in January this year, and to get up to speed I tried coding a few texts to understand the software. One of the texts I used was a chapter from The Lord of the Rings because, I thought, with something I already knew like the back of my hand I could concentrate on the mechanics of coding without being distracted too much by the content.


I chose ‘The Council of Elrond’ – one of the longest chapters in the book, and one often derided as little more than an extended information dump. Essentially lots and lots of characters (some of whom only appear in this one scene in the whole book) sit around and tell each other about stuff that happened much earlier. It’s probably not Tolkien’s finest writing, and, I suppose, most modern editors would demand that all that verbal exposition either be cut or converted into actual action chapters.


I have always loved the Council chapter, however, as to me it’s part of the fascinating backdrop of the Lord of the Rings. As Tolkien himself puts it in one of his Letters:


“Part of the attraction of the L.R. is, I think, due to the glimpses of a large history in the background: an attraction like that of viewing far off an unvisited island, or seeing the towers of a distant city gleaming in a sunlit mist.”


Of course, if you are a Tolkien fan(atic) you can go off and explore these unvisited islands and distant cities in the Silmarillion and the Histories of Middle Earth, and then bore your friends by smugly explaining all the fascinating First and Second Age references, and just why Elrond won’t accept an Oath from the Fellowship. (Yes, I am guilty of that…)


Looking at the chapter using Quirkos I expected to see bubbles growing around the exchange of news, around power and wisdom, and maybe to get some interesting overlap views on Frodo or Aragorn. However, the topic that surprised me most in this chapter in particular was Friendship.


I coded the topic ‘Friendship’ 29 times – as often as ‘Relaying News’ and ‘History’, and more often even than collective mentions of Elves (27), Humans (19) or the Black Riders (24).


The overlap view of ‘Friendship’ was especially unexpected:

 

The topics ‘Gandalf’ and ‘Friendship’ overlap 22 times, which is not totally surprising since Gandalf does most of the talking throughout the chapter, and he is the only character who already knows everyone else in the Council. But the second most frequent overlap is with Elrond: he intersects with Friendship eight times, more often than Frodo, who only gets five overlaps with Friendship!


Like most of the Elves in Lord of the Rings, Elrond is rather aloof and even in his own council acts as a remote facilitator for the other characters. Yet, the cluster view on Friendship led me to reconsider his relationship not only with Gandalf (when Gandalf recites the Ring inscription in the Black Speech, he strongly presumes on Elrond’s friendship, and Elrond forgives him because of that friendship) but also with Bilbo.


Re-reading Elrond’s exchanges with Bilbo during the Council, I was struck by the gentle teasing apparent in the hobbit’s reminders of his need for lunch, and Elrond’s requests that Bilbo tell his story without too many embellishments – and that it need not be in verse. The friendship between Bilbo and Elrond also rather explains how Bilbo had the guts to compose and perform a song about Elrond’s father Eärendil in the previous chapter, something even Aragorn, Elrond’s foster son, described as a daring act.


Perhaps none of this is terribly surprising. Within the unfolding story of the Lord of the Rings, Bilbo has been living in Elrond’s house for 17 years - time enough even for busy Elflords to get to know their house guests. And for readers who grew up with the tale of The Hobbit, Bilbo’s centrality may also not be much of a surprise. For me, however, looking at the chapter using Quirkos opened up a rather pleasing new dimension and led me to reconsider a couple of beloved characters in a new light.

 

 

Participatory Qualitative Analysis


Engaging participants in the research process can be a valuable and insightful endeavour, leading to researchers addressing the right issues and asking the right questions. Many funding boards in the UK (especially in health) make engaging with members of the public, or the targets of the research, a requirement in publicly funded research.

 

While there are similar obligations to provide dissemination and research outputs that are targeted at ‘lay’ members of the public, the engagement process usually ends in the planning stage. It is rare for researchers to have participants, or even major organisational stakeholders, become part of the analysis process, and use their interpretations to translate the data into meaningful findings.

 

I believe that, with surprisingly little training, anyone can do qualitative analysis, and engage in tasks like coding and topic discovery in qualitative data sets.

 

I’ve written about this before, but earlier this year we actually had a chance to try this out with Quirkos. It was one of the main reasons we wanted to design new qualitative analysis software: existing solutions were too difficult to learn for non-expert researchers (and quite a lot of experienced ones too).

 

So when we did our research project on the Scottish Referendum, we invited all of the participants to come along to a series of workshops and try analysing the data themselves. Out of 12, only 3 actually came along, and none of them had any experience of doing qualitative research before.

 

And they were great at it!

 

In a two hour session, respondents were given a quick overview of how to do coding in Quirkos (in just 15 minutes), and a basic framework of codes they could use to analyse the text. They were free to use these topics, or create their own as they wished – all 3 participants chose to add codes to the existing framework.

 

They were each given transcripts from someone else’s anonymised interview: as these were group sessions, we didn’t want people to be identified while coding their own transcript. Each was a 30-minute interview, around 5000 words in length. In the two-hour session, all participants had coded one interview completely, and done most (or all) of a second. One participant was so engrossed in the process that he had to be sent home before he missed his dinner, but he took a copy of Quirkos and the data home to keep working on his own computer.

 

The graph below shows how quickly the participants learnt to code. The y axis shows the number of seconds between each ‘coding event’ – every time someone coded a new piece of text – with events numbered sequentially along the x axis. The time taken to code starts off high, with questions and missteps meaning each event takes a minute or more. However, the time between events quickly decreases, and on average respondents added a code every 20 seconds. This is after any gaps longer than 3 minutes have been removed – these are assumed to be breaks for tea or debate! Each user made at least 140 tags, assigning text to one or more categories.
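
If you want to reproduce this kind of timing analysis, the calculation is simple. Here is a minimal sketch in the shell, assuming a hypothetical timestamps.txt file holding one Unix timestamp (in seconds) per coding event, in order:

  # print the number of coding events and the mean gap between them,
  # ignoring any pauses longer than 3 minutes (assumed tea breaks)
  awk 'NR > 1 { gap = $1 - prev; if (gap <= 180) { sum += gap; n++ } }
       { prev = $1 }
       END { if (n > 0) print NR " events, mean gap " sum/n " seconds" }' timestamps.txt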

 

 

So participants can be used as cheap labour to speed up or triangulate the coding process? Well, they can offer more than this. The topics they chose to add to the framework (‘love of Scotland’, ‘anti-English feelings’, ‘Scottish Difference’) highlighted their own interpretations of the data, showing their own opinions and variations. It also prompted discussion between coders about the views of the people in the dataset, and how they had interpreted the data:


“Suspicion, oh yeah, that’s negative trust. Love of Scotland, oh! I put anti-English feelings which is the opposite! Ours are like inverse pictures of each other’s!”

 

Yes: obviously we recorded and transcribed the discussions and reflections, and analysed them in Quirkos! And these revealed that people expressed familiar issues with reflexivity, reliability and process that could have come from experienced qualitative researchers:


“My view on what the categories mean or what the person is saying might change before the end, so I could have actually read the whole thing through before doing the comments”


“I started adding in categories, and then thinking, ooh, if I’d added that in earlier I could actually have tied it up to such-and-such comment”


“I thought that bit revealed a lot about her political beliefs, and I could feel my emotions entering into my judgement”


“I also didn’t want to leave any comment unclassified, but we could do, couldn’t we? That to me is about the mechanics of using the computer, ticky box thing.”

 

This is probably the most useful part of the project to a researcher: the input of participants can be used as stimulus for additional discussion and data collection, or to challenge the way researchers do their own coding. I found myself being challenged about how I had assigned codes to controversial topics, and researchers could use a more formal triangulation process to compare coding between researchers and participants, thus verifying themes, or identifying and challenging significant differences.

 

Obviously, this is a tiny experimental project, and the experience of 3 well-educated, middle-class Scots should not be interpreted as meaning that anyone can (or would want to) do this kind of analysis. But I believe we should try this kind of approach whenever it is appropriate. For most social research, the experts are the people who are always in the field – the participants who are living these lives every day.

 

You can download the full report, as well as the transcripts and coded data as a Quirkos file from http://www.quirkos.com/workshops/referendum/

 

 

Engaging qualitative research with a quantitative audience


The last two blog posts were based on a talk I was invited to give at ‘Mind the Gap’, a conference organised by MDH RSA at the University of Sheffield. You can find the slides here, but they are not very text heavy, so don’t read well without audio!

 

The two talks which preceded me, by Professors Glynis Cousin and John Sandars, echoed quite a few of the themes. Professor Cousin spoke persuasively about reductionism in qualitative research, in her talk on the ‘Science of the Singular’ and the significance that can be drawn from a single case study. She argued that by necessity all research is reductive, and even ‘fictive’, but that doesn’t restrict what we can interpret from it.

 

Professor Cousin described how both Goffman (1961) and Ken Kesey (1962) did extensive ethnographies of mental asylums at about the same time, but one wrote a classic academic text, and the other the ‘fictive’ novel One Flew Over the Cuckoo’s Nest. One could argue that both were very influential, but the different approaches to ‘writing up’ appeal to different audiences.

 

That notion of writing for your audience was evident in Professor Sandars’ talk, and his concern for communication methods that have the most impact. Drawing from a variety of mixed-method research projects in education, he talked about choosing a methodology that has to balance the approach the researcher desires in their heart with what the audience will accept. It is little use choosing an action-research approach if the target audience (or journal editors) find it inappropriate in some way.

 

This sparked some debate about how well qualitative methods are accepted in mainstream journals, and whether there is a preference towards publishing research based on quantitative methods. Some felt that authors are obliged to take a defensive stance when describing qualitative methods, further eating into the limited word counts that already cut so much detail from qualitative dissemination. The final speaker, Dr Kiera Barlett, also touched on this issue when discussing publication strategies for mixed-method projects. Should you have separate qualitative and quantitative papers for respective journals, or try and have publications that draw from all aspects of the study? Obviously this will depend on the field, findings and methods chosen, but it again raised a difficult issue.

 

Is it still the case that quantitative findings have more impact than qualitative ones? Do journals, funders and decision makers still prefer what are seen as more traditional, statistically based methodologies? From my own anecdotal position I would have to agree with most of these, although to be fair I have seen little evidence of funding bodies (at least in the UK, in social sciences and health) being strongly biased against qualitative methods of inquiry.

 

However, during the discussion at the conference it was noted that the preference for ‘traditional’ methods is not just restricted to journal reviewers but extends to the culture of disciplines at large. This is often for good reason, and not restricted to a qualitative/quantitative divide: particular techniques and statistical tests tend to dominate, partly because they are well known. This has a great advantage: if you use a common indicator or test, people probably have a better understanding of the approach and limitations, so can interpret the results better, and compare them with other studies. With a novel approach, one could argue that readers also need to go and read all the references in the methodology section (which they may or may not bother to do), and that comparisons and research synthesis are made more difficult.

 

As for journal articles, participants pointed out that many online and open-access journals have removed word limits (or effectively done so by allowing hyperlinked appendices), making it easier to publish long, text-based selections of qualitative data. However, this doesn’t necessarily increase palatability, and that’s why I want to get back to the issue of considering the audience for research findings, and choosing an appropriate medium.

 

It may be easy to say that if research is predominantly a quantitative world, quantifying, summarising, and statistically analysing qualitative data is the way to go. But this is not just abhorrent to the heart of a qualitative researcher, it is also deceptive – imposing a quantitative fiction on a qualitative story. Perhaps the challenge is to think of approaches outside the written journal article. If we can submit a graphic novel as a PhD, or explain our research as a dance, we can reach new audiences, and engage in new ways with existing ones.

 

Producing graphs, pie charts, and even the bubble views in Quirkos are all ways of summarising, quantifying and potentially trivialising qualitative data. But if this allows us to reach a wider audience used to quantitative methods, it may have real utility, at least in providing that first engagement that makes a reader want to look in more detail. In my opinion, the worst research is that which stays unread on the shelf.

 

 

Our hyper-connected qualitative world


We live in a world of deep qualitative data.

 

It’s often proposed that we are very quantitatively literate. We are exposed to numbers and statistics frequently: in news reports, at work, when driving, with fitness apps and so on. So we are actually pretty good at understanding things like percentages and fractions, and making sense of them quickly. That’s a good reason why people like to see graphs and numerical summaries of data in reports and presentations: it’s a near universal language that people can quickly understand.

 

But I believe we are also really good at qualitative understanding.

 

Bohn and Short, in a 2009 study, estimated that “The average American consumes 100,500 words of information in a single day”, made up of conversations, TV shows, news, written articles, books… It sounds like a staggering amount of qualitative data to be exposed to: basically a whole PhD thesis every single day!

 

Obviously, we don’t digest and process all of this: people are extremely good at filtering this data, ignoring adverts, skim-reading websites to get to the articles we are interested in (and skim-reading those too), and, of course, summarising the gist of conversations with a few words and feelings. That’s why I argue that we are nearly all qualitative experts, summarising and making connections with qualitative life all the time.


And those connections are the most important thing, something socially astute humans do so well. We can pick up on unspoken qualitative nuances when someone tells us something, and understand the context of a news article based on the author and what is being reported. Words we hear such as ‘economy’ and ‘cancer’ and ‘earthquake’ are imbued with meaning for us, connecting to other things such as ‘my job’ and ‘fear’ and ‘buildings’.

 

This neural network of meaning is a key part of our qualitative understanding of the world, and whether or not we want to challenge these associations between language and meaning with some kind of Derridean deconstruction, they form a key part of our daily prejudices and understanding of the world in which we live.

 

For me, a key problem with qualitative analysis is that it struggles to preserve or record these connections and lived associations. I touched on this issue of reductionism in the last blog post on structuring unstructured qualitative data, but it can be considered a major weakness of qualitative analysis software: essentially, one removes these connected meanings from the data, and reduces them to a binary category or, at best, represents them on a scale.

 

Incidentally, this debate about scaling and quantifying qualitative data has been going on for at least 70 years, since Guttman, who in his 1944 article notes that there has been ‘considerable discussion concerning the utility of such orderings’. What frustrates me at the moment is that while some qualitative analysis software can help with scaling this data, or even presenting it in a 2- or 3-dimensional scale by applying attributes such as weighting, it is still a crude approximation of the complex neural connections of meaning that deep qualitative data possesses.

 

In my experiments getting people with no formal qualitative or research experience to try qualitative analysis with Quirkos, I am always impressed at how quickly people take to it, and can start to code and assign meaning to qualitative text from articles or interviews. It’s something we do all the time, and most people don’t seem to have a problem categorising qualitative themes. However, many people soon find the activity restrictive (just like trained researchers do) and worry about how well a basic category can represent some of the more complex meanings in the data.

 

Perhaps one day there will be practical computers and software that ape the neural networks that make us all such good qualitative beings, and can automatically understand qualitative connections. But until then, the best way of analysing data seems to be to tap into any one of these freely available neural networks (i.e. a person) and use their lived experience in a qualitative world in partnership with a simple software tool to summarise complex data for others to digest.

 

After all, whatever reports and articles we create will have to compete with the other 100,000 words our readers are consuming that day!

 

 

Structuring unstructured data

 

The terms ‘unstructured data’ and ‘qualitative data’ are often used interchangeably, but unstructured data is becoming more commonly associated with data mining and big data approaches to text analytics. Here the comparison is drawn between databases, where we have defined fields and known values, and the loosely structured (especially to a computer) world of language, discussion and comment. A qualitative researcher lives in a realm of unstructured data: the person they might be interviewing doesn’t have a happy/sad sign above their head, so the researcher (or friend) must listen to and interpret their interactions and speech to make a categorisation based on the available evidence.


At their core, all qualitative analysis software systems are based around defining and coding: selecting a piece of text, and assigning it to a category (or categories). However, it is easy to see this process as ‘reductionist’: essentially removing a piece of data from its context, and defining it as a one-dimensional attribute. This text is about freedom. This text is about liberty. Regardless of the analytical insight of the researcher in deciding what the relevant themes should be, and then filtering a sentence into a category, the final product appears to be a series of lists of sections of text.


This process leads to difficult questions. Is this approach still qualitative? Without the nuanced connections between complicated topics and lived experiences, can we still call something that has been reduced to a binary yes/no association qualitative? Does this remove or abstract researchers from the data? Isn’t this a way of quantifying qualitative data?


While such debates are similarly multifaceted, I would usually argue that this process of structuring qualitative data does begin to categorise and quantify it, and it does remove researchers from their data. But I also think that for most analytical tasks, this is OK, if not essential! Lee and Fielding (1996) say that “coding, like linking in hypertext, is a form of data reduction, and for many qualitative researchers is an important strategy which they would use irrespective of the availability of software”. When a researcher turns a life into a 1-year ethnography, or a 1-hour interview, that is a form of data reduction. So is turning an audio recording into a text transcript, and so is skim-reading and highlighting printed versions of that text.


It’s important to keep an eye on the end game for most researchers: producing a well-evidenced, accurate summary of a complex issue. Most research, whether a formula to predict the world or a journal article describing it, is a communication exercise that (purely by the laws of entropy, if not practicality) must be briefer than the sum of its parts. Yet we should also be much more aware that we are doing this, and together with our personal reflexivity think about our methodological reflexivity, acknowledging what is being lost or given prominence in our chosen process.


Our brains are extremely good at comprehending the complex web of qualitative connections that makes up everyday life, and even for experienced researchers our intuitive insight into these processes often seems to go beyond any attempt to rationalise it. A structuralist approach to qualitative data can not only act as an aide-mémoire, but also help us demonstrate our process to others, and challenge our own assumptions.


In general I would agree with Kelle (1997) that “the danger of methodological biases and distortion arising from the use of certain software packages is overemphasized in current discussions”. It’s not the tool, it’s how you use it!