Quirkos is just weeks away!

It's been a long time since I've had time to write a blog article, as there are so many things to put in place before Quirkos launches in the next few weeks. But one-by-one everything is coming together. Feedback has helped us tweak the interface, testing across all the platforms is going well, the manuals and support resources are developing and the infrastructure is in place to let us deliver downloads and licences to our first customers!


We will be announcing the pricing structure next week, but there will always be a one-month free trial, so everyone can try Quirkos and see if it’s right for them. We are also really excited that there will be a formal launch workshop in London in December, hosted by the University of Surrey CAQDAS Networking Project. Quirkos will be available to purchase beforehand, but this will be the first proper Quirkos event, and there will be cake to celebrate!


We will also have our first international event in October, when Quirkos will be on show at the Qualitative Health Research conference in Victoria, Canada. It’s run by the fantastic International Institute for Qualitative Methodology at the University of Alberta, and is now in its 20th year. In the next few months, we will also announce a series of UK workshops on using Quirkos for qualitative research, in major cities and universities. There will also be some exciting announcements about new people joining the Quirkos team, and more stories from people who have been using Quirkos in their work. In short, it’s going to be a busy few months!

Knowing your customers


As consumers, it feels like we are bombarded more than ever with opportunities to provide feedback on products and services. While shopping on-line, or even when browsing BBC News, we are asked to complete a short questionnaire; after dealing with a telephone bank there’s the option to complete a quick survey; and at airport security you can rate your experience by hitting a button with either a smiley face or a frowny face.

 

But despite being told that ‘your feedback really matters to us’, what happens to it? It’s often difficult to see any change resulting from your feedback, and even when you give direct feedback, too often the changes you suggest are not made. More than this, it’s sometimes hard to see how we can expect people to understand our needs and problems through such brief and forced categorisation. If you rate a telephone sales agent from 1 to 10 on helpfulness, friendliness and professionalism, you are forced to somehow shoehorn your feedback on every other aspect of the experience into these areas. You don’t know if ‘length of wait’ will be a category, and you couldn’t care less how friendly the agent was if they couldn’t fix your problem.

 

I wonder about this when hearing stories in the news about the continuing sales decline at Tesco, a huge organisation that clearly spends millions on understanding its customers. For all the hoopla around loyalty schemes, ‘convenience’ stores and price-matching (only with certain competitors), is management at some level blind to the rise of the budget supermarkets? Customers clearly aren’t, and it’s difficult to tell whether the big supermarkets have their heads in the sand, or just assume their customers are stupid.

 

I can’t pretend to know why the likes of Lidl and Aldi have become so popular. It could just be price, or perhaps a more pleasant, streamlined shopping experience, without having to choose between value, regular and luxury versions of everything. A focus group could tell you: if you had a wide range of users you could ask them questions, or better still, let people raise issues themselves without being pigeonholed. And this seems to be more and more difficult in the grocery market, thanks to a huge demographic shift. Watch Robert Peston’s excellent series on consumer culture and you can see that in the 60s, to know retail shoppers was to know housewives: if you could get their spending, you got it all. But today everyone shops, and with a lingering recession, job pressures and a mobile labour market, we shop whenever we get a chance. A one-stop grocery shop has to try to attract housewives, househusbands, students, bachelors, hipsters and skinflints alike, either spreading itself too thin, or becoming bewildering to all.

 

So perhaps the one-size-fits-all model is not the way of the future. On-line grocery sales have begun to slow at just 5.1% of the market: either there is going to be an innovation here, or the market has already found its niche. But many smaller grocery stores, tailored to a specific target audience, are on the rise. Health food shops, which in the 90s sold only vitamins and gluten-free flour, now stock ‘healthy’ cornflakes and fresh organic produce, so you can get everything in one shop. Marks and Spencer’s ubiquitous stores now cater for a new ready-meal elite: grabbing dinner for one or two on the way back from the office. And Farmfoods and Iceland have the low end of the market: frozen food, so busy families can buy a week’s worth of inexpensive, easily prepared budget meals.

 

So these big chains do well by knowing their audience. And it’s no different for smaller, independent businesses. Quirkos is proposing that even small firms can afford to do their own direct market research, in a level of detail that gives a much better feel for their customers than crude statistics and smiley and frowny faces. Rather than relying on very traditional market research, a business can take a more local and individual approach. Instead of focus groups behind one-way mirrors, or questionnaires with low engagement rates, why not invite a group of customers to a wine evening, and record a discussion about new products? If recorded properly by diligent staff, collated and analysed, even informal feedback to cashiers can start to build a picture of what products or experiences are missing.

 

Then Quirkos would step in, providing software that is easy to get started with, so a manager can pull together all these sources of feedback, read them, and sort them into themes they can use to make the changes customers are looking for. Market research is already a huge industry in the UK, but can’t we go further and democratise it? Small scale for small businesses, quick to learn, and priced for everyone?

Using Quirkos for Systematic Reviews and Evidence Synthesis

Most of the examples the blog has covered so far have been about using Quirkos for research, especially with interview and participant text sources. However, Quirkos can take any text source you can open on your computer, including text PDFs (but not scanned PDFs, where each page is effectively a photograph). So why not use Quirkos like a reference manager, to sort and analyse a large cohort of articles and research? The advantage is that you can not only keep track of references, but also cross-reference the content: analysing common themes across all the articles.


There are two ways to manage this. First, you can set the standard information for each source/article that is imported, such as author, year, journal, etc. If you format these as you want them to appear in the reference (by putting the commas and dots in the value), and order them with the Properties and Value editor, you can create reports that churn out the references in whichever notation you need, such as Harvard or APA. But you can also add any extra values you like at an article level, so you could rank articles out of 10, keep a comment property, or categorise them by methodology. This way, you can quickly see only the text from articles rated 8/10 or above, or everything with a sample size between 50 and 100: whatever information you categorise.
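As a rough illustration of the idea, here is a conceptual sketch in Python (this is not how Quirkos works internally, and all the names and values are invented): because the punctuation lives inside the property values, producing a reference is just a matter of joining the values in order, and the extra article-level values give you the filters.

    # Conceptual sketch only: article metadata stored as ordered
    # property/value pairs, with punctuation kept inside each value.
    articles = [
        {"author": "Smith, J. ", "year": "(2013) ", "title": "Parenting and schools. ",
         "journal": "Journal of Family Studies, ", "pages": "12(3), pp. 45-67.",
         "rating": 9, "sample_size": 75},
        {"author": "Jones, A. ", "year": "(2011) ", "title": "Community voices. ",
         "journal": "Qualitative Health, ", "pages": "8(1), pp. 1-20.",
         "rating": 6, "sample_size": 30},
    ]

    REFERENCE_ORDER = ["author", "year", "title", "journal", "pages"]

    def reference(article):
        # Joining the ordered property values yields the citation string.
        return "".join(article[key] for key in REFERENCE_ORDER)

    # Report only highly rated articles with a mid-sized sample,
    # mirroring the kind of filter described above.
    for article in articles:
        if article["rating"] >= 8 and 50 <= article["sample_size"] <= 100:
            print(reference(article))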


Secondly, you can categorise text in the article using Quirk bubbles. As you read through the articles, code sections in any way that is of interest to you: highlight sections on the methodology, bits you aren’t convinced about, or other references you want to check out. Highlight findings and conclusion sections (or just the interesting parts of them), and with the properties you can quickly look at all the findings from papers using a particular approach, and compare and contrast them. It’s obviously quite a bit of work to code all your articles, but since you would have to read through all the papers anyway, making your notes digital and searchable in this way makes it much quicker and more flexible to pull everything together.


With qualitative synthesis you can combine multiple pieces of research, and see if there are common themes, or contradictions. Say you have found three articles on parenting, but they are all from different minority ethnic communities. Code them in Quirkos, and in a click you can see all the problems people are having with schools across all groups, or if one community describes more serious issues than another.


Evidence synthesis and systematic reviews like this are often, and quite rightly, mandated by funders and departments before commissioning a new piece of research, to make sure that the research questions add meaningfully to the existing canon. However, it’s also worth noting that, especially with qualitative synthesis taken from published articles, relying only on the quotations left in the final paper can introduce a publication bias: most of the data set is hidden from secondary researchers. Imagine you are looking at schooling and parenting, but are taking data from an article on the difficulties of parenting: it’s possible that the researchers did not include quotations on the good aspects of school, as these were outside the article’s focus. If possible it’s always worth getting the full data set, but this can often throw up data protection and ethical issues. There’s no simple answer to these problems, except to make sure readers are aware of your sources, and to anticipate the likely limitations of your approach. Often with qualitative research, it feels like reflexivity and disclaimers go hand in hand!

Getting a foot in the door with qualitative research

A quick look at the British Library thesis catalogue suggests that around 800 theses are completed every year in the UK using qualitative methods*. This suggests that around 8% of the roughly 10,000 British PhDs completed annually use qualitative methods. There are likely to be many more masters theses, and even undergraduate dissertations, that use some qualitative methods, so the student demand for qualitative training is considerable.

 

While PhD Research Training Programmes will usually include good coverage of different qualitative methods and ethical issues, using software for qualitative analysis is often not covered. In my experience it is left either to summer schools and annual one-off internal training sessions, or to external one- or two-day courses at organisations like the University of Surrey CAQDAS programme. Most PhD students (especially in the UK) are under considerable time and financial pressure, so accessing this training is often difficult. Again, it's sometimes difficult to get a foot in the door with qualitative analysis software.

 

Yet there are some good opportunities for qualitative researchers, even outside academia. Obviously market research is a huge employer, and can provide very varied work, changing with every project. Increasingly, the public sector, at both the local and national level, is hiring researchers with qualitative experience, especially in organisations like the NHS, where patient satisfaction is becoming an increasingly important metric.

 

Quirkos has been designed with my own experiences in mind, to provide an easy way to get started with qualitative analysis. In fact, I've jokingly referred to it before as a 'gateway' product: easy to start with, and hopefully leading to a good experience and a desire to progress to more advanced ways of working! We are also going to offer PhD students a discounted 'PhD Pack', which will include a student licence, on-line training, and two academic licences for their supervisors, so that the whole team can see the progress and comment on ongoing analysis.

 

Researching the numbers of students in the UK, I was stunned to find that the number of full-time PhD students has nearly doubled, from 9,990 in 1997 to 18,075 in 2010 – the last year for which statistics are available. Clearly the number of academic positions, although it has grown over that period, has not increased at the same rate, so the number of available academic jobs has not kept up with supply. Of course, a PhD can lead to many more opportunities, but it is clear that there is great competition for post-doctoral posts. This has been noted by many other commentators, and matches my own experience. Many of my post-doc friends and colleagues are ridiculously intelligent and capable people, but are still in jobs that chronically undervalue their abilities. Between ourselves, we often joke that academic recruitment has become a game of 'dead man's boots': waiting for a senior academic to retire, starting a chain of departmental promotions that creates a new junior position. These posts are also only attainable after several temporary post-doc positions: it is a long process to get your foot in the door, and you often find yourself competing with good friends.

 

It seems to me that many university departments are now scaling back the number of PhD and Masters students they accept, acknowledging the pressure that large student numbers put on supervisors, despite the large amounts of income they bring to the department (especially from Masters programmes). However, if widespread, this change is not yet visible in the latest HEFCE data, which dates back to 2010-11 and shows higher numbers of starters and an increase in (projected) completion rates. Yet there is a huge and growing pool of very bright critical thinkers on the market, and even if academic opportunities are limited, there are a good number of other doors to get a foothold in.

 

* To get these figures, I used only the search terms qualitative AND either interview or “focus group” across titles and abstracts, to make sure that no other uses of the phrase were included: for example, genetic qualitative research. Other methods such as ethnography and diaries added only a dozen or so results each. Frustratingly, the EThOS search doesn't let you specify a date range, but including a year (2012) as a search term mostly returns submissions from that year. It's also interesting to note that the number of PhDs mentioning qualitative methods has doubled since 2007, although it's difficult to tell how much of this is due to any increased popularity of qualitative research, the increase in total submissions noted above, or the increase in digital submissions to the BL system.

Paper vs. computer assisted qualitative analysis

I recently read a great paper by Rettie et al. (2008) which, although based on a small sample size, found that only 9% of UK market research organisations doing qualitative research were using software to help with qualitative analysis.

 

At first this sounds very low, but it matches my own limited experience with market research firms, and also with academic researchers. The first formal training courses I attended on qualitative analysis were run by the excellent Health Experiences Research Group at Oxford University, a team I would actually work with later in my career. As a self-confessed computer geek, it was surprising for me to hear Professor Sue Ziebland convincingly argue for a method they defined as the One Sheet of Paper technique, immortalised as OSOP. This is essentially a way to develop a grounded theory or analytical approach by reducing the key themes to a diagram that can be physically summarised on a single piece of paper, a method that is still widely cited to this day.

 

However, the day also contained a series of what felt like ‘confessions’ about how much of people’s qualitative analysis was paper based: printing out whole transcripts of interviews, highlighting sections, physically cutting and gluing text onto flipcharts, and dozens and dozens of multi-coloured Post-it notes! Personally, I think this is a fine method of analysis, as it keeps researchers close to the data and, assuming you have a large enough workspace, lets you keep dozens of interviews and themes to hand. It’s also very good for teamwork, as the physicality gets everyone involved in reviewing codes and extracts.

 

In the last project I worked on, looking at evidence use in health decision making, we did most of the analysis in Excel, which was actually easier for the whole team to work with than any of the dedicated qualitative analysis software packages. However, we still relied heavily on paper: printing out the interviews and Excel spreadsheets, and using flip-chart paper, Post-its and marker pens in group analysis sessions. Believe me, I felt a pang of guilt for all the paper we used in each of these sessions, rainforests be damned! But it kept us inspired, engaged and close to the data, and let us work together.

 

So I can quite understand why so many academics and market research organisations choose not to use software packages: at the moment they don’t offer the visual connection to the data that paper annotations allow, it’s often difficult to see the different stages of the coding process, and it’s hard to produce reports and outputs that communicate the analysis properly.

 

The problem with the paper approach is the literal paper trail: how do you turn all these iterations of coding schemes and analysis sessions into something you can write up and share with others, to justify the decisions that led to your conclusions? I had to file all those flip-charts and annotated sheets, often taking photos of them so they could be shared with colleagues at other universities. It was a slow and time-consuming process, but it kept us close to the data.

 

When designing Quirkos, I have tried in some ways to replicate the paper-based analysis process. There’s a touch interface, reports that show all the highlighting in a Word document, and views that keep you close to the data. But I also want to combine this with all the advantages you get from a software package, not least the ability to search, shuffle dozens of documents, have more colours than a whole rainbow of Post-it notes, and the invaluable Undo button!

 

Software can also help keep track of many more topics and sources than most people (especially myself) can remember, and if there are a lot of different themes you want to explore from the data, software is really good at keeping them all in one place and making them easy to find. Working as part of a team, especially if some researchers work remotely or in a different organisation, can be much easier with software: e-mailing a file is much simpler than sending a huge folder of annotated paper, and combining and comparing analysis can be done at any stage of the project.

 

Qualitative analysis software also lets you take different slices through the data, so you can compare responses grouped by any characteristics of your sources. It's easy to look at all the comments from people in one location, or within a certain age range. This is certainly possible with qualitative data on paper as well, but software removes the need for a lot of paper shuffling, especially when you have a large number of respondents.
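To make the idea of a 'slice' concrete, here is a toy sketch in Python (the data is invented, and this says nothing about Quirkos's internals): coded excerpts point back to their sources, and sources carry the characteristics you filter on.

    # Toy sketch: each coded excerpt keeps a link back to its source,
    # and each source carries characteristics such as location and age,
    # so a "slice" is simply a filter across both.
    sources = {
        "interview_01": {"location": "Edinburgh", "age": 34},
        "interview_02": {"location": "London", "age": 52},
    }

    excerpts = [
        {"source": "interview_01", "theme": "cost", "text": "It was too expensive..."},
        {"source": "interview_02", "theme": "cost", "text": "Price wasn't an issue..."},
        {"source": "interview_01", "theme": "service", "text": "The staff were helpful..."},
    ]

    def slice_excerpts(theme, **criteria):
        # Return text coded to a theme, where the source matches every criterion.
        return [e["text"] for e in excerpts
                if e["theme"] == theme
                and all(sources[e["source"]].get(k) == v for k, v in criteria.items())]

    print(slice_excerpts("cost", location="Edinburgh"))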

 

But most importantly, I think software allows more experimentation: you can try different themes, easily combine or break them apart, or even start from scratch again, knowing that the old analysis approach you tried is just a few clicks away. I think the magic undo button also gives researchers more confidence in trying something out, and makes it easier for people to change their mind.

 

Many people I’ve spoken to have asked what the ‘competition’ for Quirkos is like, meaning what the other software packages do. But for me the real competitor is the tangible approach, and the challenge is to make something that is the best of both worlds: a tool that not only apes the paper realm in a virtual space, but acknowledges the need to print out and connect with physical workflows. I often want to review a coded project on paper, printing it off and reading it in the cafe, and Quirkos makes sure that all your coding can be visually displayed and shared in this way.

 

Everyone has a workflow for qualitative analysis that works for them, their team, and the needs of their project. I think the key is flexibility, and to think about a set of tools that can include paper and software solutions, rather than one approach that is a jack of all trades, and master of none.

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 conference are now up online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks were from experienced software trainers, about how people actually use the packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways the low numbers of people coding images and audio are not surprising, but it is a shame. Text is a lot quicker to skim through to find passages than audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts or through automatic transcription, to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage: recording a research event, such as an interview or focus group, and having the text typed up word-for-word to analyse. But with this approach you risk losing all the nuance that we are attuned to hear in the spoken word – emphasis, emotion, sarcasm – and these can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code with text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen to any interesting or ambiguous sections, and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making the text coding process in Quirkos as good as it could be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, to make sure the audio connects closely with the text, and can be coded and retrieved just as easily.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised this method as being how they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory is essentially finding themes from the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of the people in this sample really not know what analytical approach they were using? Of course, it could simply be that they know it by another name, Emergent Coding for example, or, as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster, and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs that were more than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...

 

Quirkos Beta Update!

We are busy at Quirkos HQ putting the finishing touches on the Beta version of Quirkos.

The Alpha was released 5 months ago, and since then we've collected feedback from people who've used Quirkos in a variety of settings to do all kinds of different research. We've also been adding a lot of the features that were requested, and quite a few bonus ones too! We've made search much more powerful, created new graphical reports, and given people tools to get started writing reports and articles quickly.

There has been a lot of interest in the next version, and we are excited to share it with people. But we also want to make sure that when it goes out, it gives the best possible impression of what Quirkos will be like to use. We are planning to make the Beta available at the end of June to everyone who has signed up, so we can collect more feedback and be ready for a September launch. We'll be detailing some of the new features on the blog over the next few months, so watch this space!

Evaluating feedback

We all know the score: you attend a conference, business event, or training workshop, and at the end of the day you get a little form asking you to evaluate your experience. You can rate the speakers, venue, lunch and parking on a scale from one-to-five, and tick to say whether you would recommend the event to a friend or colleague.

But what about the other part of the evaluation: the open comments box? What was your favourite part of the day? What could we improve for next time? Any other comments? Hopefully someone is going to spend time typing up all these comments, and see if there are some common themes or good suggestions they can use to improve the event next year. Even if you are using a nifty on-line survey system like SurveyMonkey, does someone read and act on the suggestions you spent all that time writing?

And what about feedback on a product, or on service in a hotel or restaurant? Does something actually happen to all those comments, or as one conference attendee once suggested to me, do they all end up on the floor?

In fact, this is a common problem in research. Even when written up, reports often just stay on the shelf, and have no influence on practice or procedure. If you want decision makers to pay attention to participant feedback and evaluations, then you need to present them in a clear and engaging way.

 

For the numerical or discrete part of surveys, this is not usually too hard. You can put these values into Excel (or SPSS if you are statistically minded) and explore the data in pivot tables and bar graphs. Then you can see that the happiest attendees were the ones who ranked lunch as excellent, or that 76% of people would recommend the day to others.
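Those summary figures boil down to nothing more than counting, the sort of thing a pivot table does for you. Here is a minimal sketch in Python, with made-up responses:

    # Made-up responses: one row per attendee, mixing a discrete rating
    # with a yes/no recommendation.
    responses = [
        {"lunch": "excellent", "recommend": True},
        {"lunch": "good", "recommend": True},
        {"lunch": "poor", "recommend": False},
        {"lunch": "excellent", "recommend": True},
    ]

    # Overall recommendation rate.
    rate = 100 * sum(r["recommend"] for r in responses) / len(responses)
    print(f"{rate:.0f}% of attendees would recommend the day")

    # Cross-tabulate: were the happiest attendees the ones who loved lunch?
    happy_lunchers = [r for r in responses if r["lunch"] == "excellent"]
    recommending = sum(r["recommend"] for r in happy_lunchers)
    print(f"{recommending}/{len(happy_lunchers)} of those who rated lunch as excellent would recommend")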

Simple statistics and visualisations like this are a standard part of our language: we hear and see them in the news, at board meetings, even in football league tables. They communicate clearly and quickly.

But what about those written comments? In Excel you can't really see all the comments made by people who ranked the conference poorly, or see if the same suggestions are being made about workshop themes for next year.

That's what Quirkos aims to do: become the 'Excel of text'. It's software that everyone can use to explore, summarise and present text data in an intuitive way.

If you put all of your conference evaluations or customer feedback into Quirkos, you can quickly see all the comments made by people who didn't like your product, or everything that women aged 24-35 said about your service compared with men aged 45-64. By combining the numerical, discrete and text data, you have the power to explore the relationships between themes and the differences between respondents. Then you can share these findings as graphs, bubble maps or just the quotes themselves: quick and easy to understand.

This unlocks the power of comments from all your customers, because Quirkos allows you to see why they liked a particular product. And it gives you the chance to be a better listener: if your consumers have an idea for improving your product, you can make it pop out as clear as day.

Hopefully it also breaks a vicious circle: people don't bother leaving comments because they assume they aren't being read, and organisers stop asking for comments because those sections are left blank or attract only generic responses.

 

So hopefully next time you fill out a customer feedback form or event evaluation, your comments will lead to direct improvements, rather than just being lost in translation.

Touching Text

Presenting Quirkos at the CAQDAS 2014 conference this month was the first major public demonstration of Quirkos, and what we are trying to do. It’s fair to say it made quite a splash! But getting to this stage has been part of a long process from an idea that came about many years ago.

Like many geeks on the internet, I’d been amazed by the work done by Jeff Han and colleagues at New York University on cheap, multi-touch interfaces. This was 2006, and the video went viral in a time before iPhones and tablets, when it looked like someone had finally worked out how to make the futuristic computer interface from Minority Report, which had come out in 2002. Others, such as Johnny Lee at Carnegie Mellon University, had worked out how the incredible technology in the controllers for the Wii could turn a £25 toy into a touchscreen interactive whiteboard.

I’ve always been of the opinion that technology is only interesting when it is cheap: it can’t have an impact when it’s out of reach for a majority of people. Now, none of this stuff was particularly ground-breaking in itself, but these people were like heroes to me, for making something amazing out of bits and pieces that everyone could afford.

Meanwhile, I was trying to do qualitative analysis for my PhD [danfreak.net/thesis.html], and having an iBook that wouldn’t run any of the qualitative analysis packages, I cobbled together my own system: my first attempt at making a better qualitative research tool. It was based on a series of unique three-letter codes I’d insert into a sentence, and a Linux-based file search system called ‘Beagle’, which allowed me to see every piece of text I’d assigned a code to, across any of the files on my computer. Thus in one search I could see all the relevant bits of text from interviews, focus groups, diaries and notes. It was clunky, but it worked, and it was the beginning of something with potential.
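For the curious, the gist of that hack can be reconstructed in a few lines of Python (a rough sketch only: Beagle is long gone, and the folder name, the code and the crude sentence-splitting here are all stand-ins):

    # Rough reconstruction: tag a sentence with a short code like "xqf"
    # while transcribing, then pull every tagged sentence out of every
    # file in one search.
    import re
    from pathlib import Path

    def find_coded_sentences(folder, code):
        # Yield (filename, sentence) for each sentence containing the code.
        for path in Path(folder).glob("*.txt"):
            for sentence in re.split(r"(?<=[.!?])\s+", path.read_text()):
                if code in sentence:
                    yield path.name, sentence.strip()

    for name, sentence in find_coded_sentences("interviews", "xqf"):
        print(f"{name}: {sentence}")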

 

By 2009, I had my first proper research job in Oxford, and was spending my salary trying to make a touchscreen computer out of a £120 netbook and a touchscreen overlay I’d imported from China. In fact, I got through two of these laptops, after short-circuiting the motherboard of one while trying to cram the innards into a thin metal case. What excited me was the potential for a £150 touchscreen computer, with no keyboard, that you used like a ‘tablet’ from Star Trek. Then, while I was doing this, Apple came out with the long-anticipated iPad, which had the distinct advantage of being about ¼ of the thickness and weight!

But while all this was going on in my spare time, at work I was spending all day coding semi-structured interviews for a research project. The slow coding process was driving me mad: Nvivo was crashing frequently, corrupting all the work when it did, and here I was in the 21st century, using interfaces that felt a whole generation behind.

And that’s where the idea came from: me speculating on what qualitative analysis would be like with a touch screen interface. What if you could do it on a giant tablet or digital whiteboard with a team of people? I drew sketches of bubbles (I’ve always liked playing with bubbles) that grew when you added text to them, integrating the interface and the visualisation, and showing relationships between the themes.

 

After this, the idea didn’t really progress until I was working in my next job, at Sheffield Hallam University. Again, qualitative analysis was giving me a headache, this time because we wanted to do analysis with participants and co-researchers, and most of the packages were too difficult to learn and too expensive for the whole team to get involved. A new set of colleagues shared my pain with current CAQDAS software, and as no-one else seemed to be doing anything about it, I thought it was worth a try.

I took a course in programming user interfaces using cross-platform frameworks, and was able to knock up some barely functioning prototypes, at the time called ‘Qualia’. But again, things didn’t really progress until I left my job to focus on it full time, fleshing out the details and hiring the wonderful Adrian Lubik: a programmer who actually knows what he’s doing!

With the project gaining momentum, a better name was needed. Looking around classical Greek and Latin names, I came across ‘kirkos’, the Greek word which is the root of the word ‘circle’. Change the beginning to ‘Qu’ for qualitative, and voilà, Quirkos was born: Qualitative Circles. Something that very neatly summed up what I’d been working towards for nearly a decade.

In June we’ll be releasing the beta version to testers for the first time, and the final version will go on sale in September at a lower price point that means a lot more people can try qualitative research. It’s really exciting to be at this stage, with so much enthusiasm and anticipation building in the market. But it’s also just a beginning; we have a 5 year plan to keep adding unique features and develop Quirkos into something that is innovative at every stage of the research process. It’s been a long journey, but it’s great that so many people are now coming along!

Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or 'a priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding the researcher takes a more bottom-up approach, starting with the data and a blank sheet, and noting themes as they read through the text.

 

Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an emergent approach to be completely naïve to the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.

 

I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is another sense in which qualitative coding can be top-down or bottom-up: the level of coding. A low 'level' of coding might be to create a set of simple themes, such as happy or sad, or apple, banana and orange. These are sometimes called manifest-level codes, and are purely descriptive. A higher level of coding might be something more like 'issues from childhood', fruit, or even 'things that can be juggled'. Here more meaning has been imposed; this is sometimes referred to as latent-level analysis.

 

Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different depending on whether you build your levels of coding top-down or bottom-up. In one model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute each statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning through successive readings.
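As a toy illustration of the bottom-up route (the codes here are invented, purely to show the shape of the process), a handful of manifest codes ends up grouped under a smaller set of latent themes:

    # Invented codes, just to show the two directions of travel.
    manifest_codes = ["apple", "banana", "orange", "happy", "sad", "angry"]

    # After later readings, related codes are grouped together, imposing
    # a higher (latent) level of meaning on each cluster.
    latent_themes = {
        "fruit": ["apple", "banana", "orange"],
        "emotions": ["happy", "sad", "angry"],
    }

    # Top-down coding would run the other way: start from the latent
    # themes and look for manifest observations that support or refute them.
    for theme, codes in latent_themes.items():
        print(f"{theme} <- {', '.join(codes)}")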

 

So which approach is best? Obviously, it depends: not just on how well the topic area is understood, but also on how immersed in it the particular researcher is. Complementary methods can be useful here: the PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (in both senses) to test their assumptions. Meanwhile, a researcher who is new to the project or field is in a good position to start from the bottom up, and see if they can find answers to the research questions from the basic observations that emerge from the text. If the two approaches independently converge on the same themes and conclusions, it is a good indication that the inferences are well supported by the text!

 
