How Quirkos can change the way you look at your qualitative data

[Image: Quirkos qualitative software seeing data]

We always get a lot of inquiries in December from departments and projects thinking of spending some left-over money at the end of the financial year on a few Quirkos licences. A great early Christmas present for yourself or the team! It’s also a good long-term investment, since our licences don’t expire and can be used year after year. They are transferable to new computers, and we’ve committed to providing free updates for the current version. We don’t want a situation where different teams are using different versions and so can’t share projects and data. Our licences are often a fraction of the cost of other qualitative software packages, but for the above reasons we think we offer much more value than just the comparative sticker price.

 

But since Quirkos also has a different ethos (accessibility) and some unique features, it helps you approach your qualitative research data in a different way to other software. In the two short years that Quirkos has been available, it has come to be used by more than 100 universities across the world, as well as market research firms and public sector organisations. That has given me a lot of feedback that helps us improve the software, but also a sense of what people love most about it. So here is a list of the things I hear most often about the software in workshops and e-mails.

 

It’s much more visual and colourful

[Image: Quirkos qualitative coding bubbles]

Experienced researchers who have used other software are immediately struck by how colourful and visual the Quirkos approach is. The interface shows growing bubbles that dynamically represent the coding in each theme (or node), with colour all over the screen. For many people, the Quirkos design encourages thinking in colours, spatially, and in layers, increasing the amount of information they can digest and work with. Since the whole screen is a live window into the data, there is less need to generate separate reports, and coding and reviewing becomes a constant (and addictive) feedback process.


This doesn’t appeal to everyone, so we still have a more traditional ‘tree’ list structure for the themes which users can switch between at any time.

 

 

I can get started with my project quicker


We designed Quirkos so it could be learnt in 20 minutes for use in participatory analysis, so the learning curve is much lower than for other qualitative software. Some packages can be intimidating to the first-time user, and often require two-day training courses. All the training and support materials for Quirkos are available for free on our website, without registration. We increasingly hear that students want self-taught options, which we provide in many different formats. This means that not only can you start using Quirkos quickly; setting up and putting data into a new project is a lot quicker as well, making Quirkos useful for smaller qualitative projects that might have just a few sources.

 

 

I’m kept closer to my data

[Image: qualitative software comparison view]


It’s not just the live growing bubbles that let researchers see themes evolve in their analysis: there is a whole suite of visualisations that let you quickly explore and play with the data. The cluster views generate instant Venn diagrams of connections and co-occurrences between themes, and the query views show side-by-side comparisons for any groups of your data you want to compare and contrast. Our mantra has been to make sure that no output is more than one click away, and this keeps users close to their data, not hidden away behind long lists and sub-menus.

 

 

It’s easier to share with others

[Image: qualitative Word export]


Quirkos provides some unique options that make showing your coded qualitative data to other people easier and more accessible. The favourite feature is the Word export, which creates a standard Word document of your coded transcripts, with all the coding shown as colour coded comments and highlights. Anyone with a word processor can see the complete annotated data, and print it out to read away from the computer.


If you need a detailed summary, reports can be created as an interactive webpage, or as a PDF which anyone can open. Advanced users can also export their data as a standard CSV spreadsheet file, or dig into the standard SQLite database using any SQLite tool (such as http://sqlitebrowser.org/) or even a browser extension.
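Because the project file is an ordinary SQLite database, a few lines of scripting can inspect it without any special tooling. As a minimal sketch (the file name here is hypothetical, and the internal table layout is not a documented schema, so the code simply discovers whatever tables the file contains rather than assuming their names):

```python
import sqlite3

def inspect_project(path):
    """List the tables and row counts inside a SQLite file,
    such as a Quirkos project file."""
    conn = sqlite3.connect(path)
    try:
        # sqlite_master is SQLite's built-in catalogue of schema objects
        cur = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        tables = [row[0] for row in cur.fetchall()]
        counts = {}
        for t in tables:
            # Table names come from sqlite_master itself, so quoting is safe
            counts[t] = conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
        return counts
    finally:
        conn.close()

# Example (file name is hypothetical):
# print(inspect_project("my_study_project.sqlite"))
```

If you try something like this, work on a copy of the project file rather than the original, so an accidental write can’t corrupt your coding.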

 

 

I couldn’t get to grips with other qualitative software

[Image: Quirkos spreadsheet comparison]


It is very common for researchers to come along to our workshops having been to training for other qualitative analysis software, and to say they just ‘didn’t get it’ before. While very powerful, other tools can be intimidating, and unless you are using them on a regular basis, it can be difficult to remember all the operations. We love how people can come back to Quirkos after 6 months and just get going again.


We also see a lot of people who tried other specialist qualitative software and found it wasn’t a good fit for them. A lot of researchers go back to paper and highlighters, or even use Word or Excel, but get excited by how intuitive Quirkos makes the analysis process.

 

 

Just the basics, but everything you need


I always try to be honest in my workshops and list the limitations of Quirkos. It can’t work with multimedia data, can’t provide quantitative statistical analysis, and has limited memo functionality at the moment. But I am always surprised at how the benefits outweigh the limitations for most people: a huge majority of qualitative researchers only work with text data, and share my belief that if quantitative statistics are needed, they should be done in dedicated software. The idea has always been to focus on the core actions that researchers perform all the time (coding, searching, developing frameworks and exploring data) and make them as smooth and quick as possible.

 


If you have comments of your own, good or bad, we’d love to hear them; it’s what keeps us focused on the diverse needs of qualitative researchers.


Get in touch and we can help explain the different licence options, including ‘seat’ based licences for departments or teams, as well as the static licences which can be purchased immediately through our website. There are also discounts for buying more than 3 licences, for different sectors, and developing countries.


Of course, we can also provide formal quotes, invoices and responses to purchase orders as your institution requires. We know that some departments take time to get purchases through finance, so we can always extend the trial until the orders come through – we never want to see researchers unable to get at their data and continue their research!


So if you are thinking about buying a licence for Quirkos, you can download the full version to try for free for one month, and ask us any questions by email (sales@quirkos.com), Skype ‘quirkos’ or a good old 9-to-5 phone call on (+44) 0131 555 3736. We are here for qualitative researchers of all (coding) stripes and spots (bubbles)!

 

Snapshot data and longitudinal qualitative studies

[Image: longitudinal qualitative data]


In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make collecting new data a rare and expensive event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies – they are time-limited collection events that end up capturing a snapshot of the world as it was when they were recorded.


Most qualitative research collects data as a series of one-off dives into the lives and experiences of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This need not be a fixed date; it might be a key point in a participant’s journey – such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying people at different stages in their individual or collective journeys.


This is sometimes called ‘Qualitative Longitudinal Research’ (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly even over years or decades. Examples of this type of project include the five-year ‘Timescapes’ project in the UK, which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).


However, such projects tend to be expensive, as they require having researchers working on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don’t have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at ‘before and after’ points – for example bookending a treatment or event – so it is often used in well-planned evaluations. Researchers can ask questions about expectations before the occasion, and how people felt afterwards. This is useful in a lot of user experience research, but also for understanding the motivations of actors and improving the delivery of key services.


But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn’t an obvious event distinguishing the two snapshots. It also gets people to reflect on answers they gave in the first interview, and consider whether their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking – do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high-quality data.


In qualitative analysis it is also an opportunity to look through all the data from a number of respondents and go back to ask new questions that the data reveals. In a grounded theory approach this is very valuable, but it can also be used to check researchers’ own interpretations and hypotheses about the research topic.


There are a number of research methods which are particularly suitable for longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don’t even have to have very fixed questions; it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over medium and long periods of time.


So when planning a research project, it’s worth thinking about whether your research question could be better answered with a longitudinal or multiple-contact approach. If you decide this is the case, just be aware that you won’t be able to contact everyone multiple times, and if attrition means that some people’s data can’t be used, you may need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.


Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise your different sources by participant or time point, so that you can look at just first or second interviews, or everything together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to run analysis on just certain data sources, and to show results and reports from everything, or just one set of data.


So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.

 

 

Archiving qualitative data: will secondary analysis become the norm?

[Image: archive secondary data]

 

Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON led initiative to create a common standard for interchange of coded data between qualitative software packages.


The eventual aim is to develop a standardised file format for qualitative data, which not only allows use of data on any qualitative analysis software, but also for coded qualitative data sets to be available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.


A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.


In some disciplines, secondary analysis of archival data is commonplace; think for example of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The ‘Digital Humanities’ movement in academia is a cross-disciplinary look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don’t use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data comes in such a variety of different formats and formatting.


However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.

 

Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.

 

At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the opinion that they get better value from grants when the research has impact beyond one project, when the data can be reused again and again.

 

There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing ‘desk-based’ secondary analysis of someone else’s data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to design research projects that maximise secondary analysis of their data, by providing good documentation on the collection process, the research questions and detailed metadata for the sources.

 

The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of doing secondary analysis, and make sure that the most value is being gained from existing data.


However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.


I also think it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect it to have maximum value and impact; many would be outraged to think of it sitting on the shelf unused for years.


But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated, if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but date and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.

 

The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata to be worthwhile. Really, this requires a behaviour change from researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos are recorded in Atlas.ti, for example. This standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and agreements from major qualitative software developers (including Quirkos) mean that work is underway to create a standard that should eventually allow not just the interchange of coded qualitative data, but hopefully easy archival storage as well.


So the future of qualitative data archiving requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the world is moving towards a more open approach to qualitative (and quantitative) research.


For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!

 

 

Stepping back from coding software and reading qualitative data

[Image: printing and reading qualitative data]

There is a lot of concern that qualitative analysis software distances people from their data. Some say that it encourages reductive behaviour, prevents deep reading of the data, and leads to a very quantified type of qualitative analysis (eg Savin-Baden and Major 2013).

 

I generally don’t agree with these statements, and other qualitative bloggers such as Christina Silver and Kristi Jackson have written responses to critics of qualitative analysis software recently. However, I want to counter this a little with a suggestion that it is also possible to be too close to your data, and in fact this is a considerable risk when using any software approach.

 

I know this is starting to sound contradictory, but it is important to strike a happy balance so you can see the wood for the trees. It’s best to have both a close, detailed reading and analysis of your data, and a sense of the bigger picture emerging across all your sources and themes. That was the impetus behind the design of Quirkos: the canvas view of your themes, where the size of each bubble shows the amount of data coded to it, gives you a live bird’s-eye overview of your data at all times. It’s also why we designed the cluster view, to graphically show the connections between themes and nodes in your qualitative data analysis.

 

It is very easy to treat analysis as a close reading exercise, taking each source in turn, reading it through and assigning sections to codes or themes as you go. This is a valid first step, but only part of what should be an iterative, cyclical process. There are also lots of ways to challenge your coding strategy to keep you alert to new things coming from the data, and seeing trends in different ways.

 

However, I have a confession. I am a bit of a Luddite in some ways: I still prefer to print out and read transcripts of data from qualitative projects away from the computer. This may sound shocking coming from the director of a qualitative analysis software company, but for me there is something about both the physicality of reading from paper, and the process of stepping away from the analysis process that still endears paper-based reading to me. This is not just at the start of the analysis process either, but during. I force myself to stop reading line-by-line, put myself in an environment where it is difficult to code, and try and read the corpus of data at more of a holistic scale.
I waste a lot of trees this way (even with recycled paper), but I always return to the qualitative software with a fresh perspective and finish my coding and analysis there, having made the best of both worlds. Yes, it is time consuming to do so many readings of the data, but I think good qualitative analysis deserves this time.

 

I know I am not the only researcher who likes to work in this way, and we designed Quirkos to make it easy to do. One of the unique ‘wow’ features of Quirkos is the ability to create a standard Word document of all the data from your project, with all the coding preserved as colour-coded highlights. This makes it easy to print out, take away and read at your leisure, while still seeing how you have defined and analysed your data so far.

[Image: Word export of qualitative data]

 

There are also some other really useful things you can do with the Word export, like sharing your coded data with a supervisor, colleague or even some of your participants. Even if you don’t have Microsoft Office, you can use free alternatives like LibreOffice or Google Docs, so pretty much everyone can see your coded data. But my favourite way to read away from the computer is to make a mini booklet, with turn-able pages – I find this much more engaging than a large stack of A4/Letter pages stapled in the top corner. If you have a duplex printer that can print on both sides of the page, generate a PDF from the Word file (just use Save As…); even the free version of Adobe Reader has an awesome Print setting that automatically creates and formats a little booklet:

[Image: Word booklet print settings]

 

 

I always get a fresh look at the data like this, and although I try not to be too micro-analytical or do a lot of coding, I am always able to scribble notes in the margin. Of course, there is nothing to stop you stepping back and doing a reading like this in the software itself, but I don’t like staring at a screen all day, and I am not disciplined enough to work on the computer without getting sucked into a little more coding. Coding can be a very satisfying and addictive process, but when it’s time to define higher-level themes in the coding framework, I need to step back and think about the bigger picture before I dive into creating something based on the last source or theme I looked at. It’s also important to get the flow and causality of the sources sometimes, especially when doing narrative and temporal analysis. It’s difficult to read the direction of an interview or series of stories just from looking at isolated coded snippets.

 

Of course, you can also print out a report from Quirkos, containing all the coded data and the list of codes and their relations. This is sometimes handy as a key on the side, especially if there are codes you think you are underusing. Normally at this stage in the blog I point out how you can do this with other software as well, but actually, for such a commonly required step, I find this very hard to do in other packages. It is very difficult to get all the ‘coding stripes’ to display properly in NVivo text outputs, and MAXQDA has lots of options to export coded data, but not whole coded sources that I can see. Atlas.ti does better here with the Print with Margin feature, which shows stripes and code names in the margin – however, this only generates a PDF file, so it is not editable.

 

So download the trial of Quirkos today, and every now and then step back and make sure you don’t get too close to your qualitative data…

 

 

Problems with quantitative polling, and answers from qualitative data

 

The results of the US elections this week show a surprising trend: modern quantitative polling keeps failing to predict the outcome of major elections.

 

In the UK this is nothing new: in both the 2015 general election and the EU referendum, polling failed to predict the outcome. In 2015 the polls suggested very close levels of support for Labour and the Conservative party, but on the night the Conservatives won a significant majority. Then the polls for the referendum on leaving the EU indicated a slight preference for Remain, when voters actually chose Leave by a narrow margin. We now have a similar situation in the States, where despite polling ahead of Donald Trump, Hillary Clinton lost the Electoral College (while winning a slight majority in the popular vote). There are also recent examples of polling errors in Israel, Greece and the Scottish Independence Referendum.

 

Now, it’s fair to say that most of these polls were within the margin of error (typically 3%), so you would expect these inaccurate outcomes to happen periodically. However, there seems to be a systematic bias here, each time underestimating the support for more conservative attitudes. There is much hand-wringing about this in the press; see for example this declaration of failure from the New York Times. The suggestion that journalists and traditional media outlets are out of touch with most of the population may be true, but it does not explain the polling discrepancies.

 

There are many methodological problems: the number of people responding to telephone surveys is falling, perhaps not surprising considering the proliferation of nuisance calls in the UK. But for most pollsters this remains a vital way to reach the largest group of voters: older people. In contrast, previous attempts to predict elections through social media and big data approaches have been fairly inaccurate, and will likely remain that way as long as social media continues to be dominated by the young.

 

However, I think there is another problem here: pollsters are not asking the right questions. Look how terribly worded the exit poll questions are; they try to get people to put themselves in a box as quickly as possible: demographically, religiously, and politically. Then they ask a series of binary questions like “Should illegal immigrants working in the U.S. be offered legal status or deported to their home country?”, giving no opportunity for nuance. The aim is clear – just get to a neat quantifiable output that matches a candidate or hot topic.

 

There’s another question which I think, in all its many iterations, is poorly worded: who are you going to vote for? People might change whether they would support a particular politician at any moment in time (including in a polling booth), but are unlikely to suddenly decide their family is not important to them. It’s often been shown that support for a candidate is not a reliable metric: people give answers influenced by the media and the researcher, and of course they can change their minds. But when you ask people questions about their beliefs, not a specific candidate, the answers tend to be much more accurate. It also does not always follow that a person who believes a candidate is good will vote for them. As we saw with Brexit, and possibly with the last US election, many people want to register a protest vote – they feel they are not being heard or represented well, and people aren’t usually asked if this is one of the reasons they vote. It’s also very important to consider that people are often strategic voters, and are themselves influenced by the polls which are splashed everywhere. The polls have become a constant daily race of who’s ahead, possibly increasing voter fatigue and leading to complacency among supporters of whoever is ahead on the day of the finishing line. All of this makes predictions much more difficult.

 


In contrast, here are two examples of qualitative focus group data on the US election. The first is a very heavily moderated CBS group, which got very aggressive. Here, although there is a strong attempt to ask for one-word answers on candidates, what comes out is a general distrust of the whole political system. This is also reflected in the Lord Ashcroft focus groups in different American states, which also include interviews with local journalists and party leaders. When people are not asked specific policy or candidate-based questions, there is surprising agreement: everyone is sick of the political system and the election process.


This qualitative data is really no more subjective than polls based on whoever answers a phone on a particular day, but it provides a level of nuance lacking in the quantitative polls and mainstream debate, which helps explain why people are voting in different ways: something many are still baffled by. There are problems with this type of data as well: it is difficult to accurately summarise and report on, and complete transcripts are rarely available for scrutiny. But if you want to better gauge the mood of a nation, discussion around the water-cooler or down the pub can be a lot more illuminating, especially when, as a researcher or ethnographer, you are able to get out of the way and listen (as you should when collecting qualitative data in focus groups).

 

Political data doesn’t have to be focus group driven either – these group discussions are done because they are cheap, but qualitative semi-structured interviews can really let you understand key individuals that might help explain larger trends. We did this before the 2015 general election, and the results clearly predicted and explained the collapse in support for the Labour party in Scotland.

 

There has been a rush in the polling to add more and more numbers to the surveys, with many reaching tens or even hundreds of thousands of respondents. But these give a very limited view of voter opinions, and as we’ve seen above can be very skewed by question and sampling method. It feels to me that deep qualitative conversations with a much smaller number of people from across the country would be a better way of gauging the social and political climate. And it’s important to make sure that participants have the power to set the agenda, because pollsters don’t always know what issues matter most to people. And for qualitative researchers and pollsters alike: if the right questions don’t get asked, you won’t get the right answers!

 

Don't forget to try Quirkos, the simplest and most visual way to analyse your qualitative text and mixed method data. We work for you, with a free trial and training materials, licences that don't expire and expert researcher support. Download Quirkos and try for yourself!

 

 

 

Tips for running effective focus groups

In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly, and to ensure you get good engagement from your participants.

 


1. Make sure you have a helper!

It’s very difficult to run focus groups on your own. If you have to lay out the room, greet people, deal with refreshment requests, check recording equipment is working, start video cameras, take notes, ask questions, let in late-comers and facilitate discussion, it’s much easier with two people (or even three for larger groups). You will probably want to focus on listening to the discussion, not taking notes and problem-solving at the same time. Having another facilitator or helper around can make a lot of difference to how well the session runs, as well as how much good data is recorded from it.

 


2. Check your recording strategy

Most people will record audio and transcribe their focus groups later. You need to make sure that your recording equipment will pick up everyone in the room, and that you have a backup dictaphone and batteries! There are many more tips in this blog post. If you are planning to video the session, think it through carefully.

 

Do you have the right equipment? A phone camera might seem OK, but they usually struggle to record long sessions, and are difficult to position in a way that will show everyone clearly. Special cameras designed for gig and band practice are actually really good for focus groups: they tend to have wide-angle lenses and good microphones, so you don’t need to record separate audio. You might also want more than one camera (in a round-table discussion, someone will always have their back to a single camera). Then you will want to think about using qualitative analysis software like Transana that supports multiple video feeds.

 

You also need to make sure that video is culturally appropriate for your group (some religions and cultures don’t approve of taking images) and that it won’t make people nervous and clam up in discussion. Usually I find a dictaphone less imposing than a camera lens, but you then lose the ability to record the body language of the group. Video also makes it much easier to identify different speakers!

 


3. Consent and introductions

I always prefer to do the consent forms and participant information before the session. Faffing around with forms to sign at the start or end of the workshop takes up a lot of time best used for discussion, and makes people rush through reading the project information. E-mail this to people ahead of time, so at least they can just sign on the day, or bring a completed form with them. I really feel that participants should get the option to see what they are signing up for before they agree to come to a session, so they are not made uncomfortable on the day if it doesn't sound right for them. However, make sure there is an opportunity for people to ask any questions, and state any additional preferences, privately or in public.

 


4. Food and drink

You may decide not to have refreshments at all (your venue might dictate that) but I really love having a good spread of food and drink at a focus group. It makes it feel more like a party or family occasion than an interrogation procedure, and really helps people open up.

 

While tea, coffee and biscuits/cookies might be enough for most people, I love baking and always bring something home-baked like a cake or cookies. Getting to talk about and offer food is a great icebreaker, and it also makes people feel valued when you have spent the time to make something. A key part of getting good data from a good focus group is to set a congenial atmosphere, and an interesting choice of drinks or fruit can really help this. Don’t forget to get dietary preferences ahead of time, and consider the need for vegetarian, diabetic and gluten-free options.

 


5. The venue and layout

A lot has already been said about the best way to set out a focus group discussion (see Chambers 2002), but there are a few basic things to consider. First, a round or rectangular table arrangement works best, not lecture hall-type rows. Everyone should be able to see the face of everyone else. It’s also important not to have the researcher/facilitator at the head or even centre of the table. You are not the boss of the session, merely there to guide the debate. There is already a power dynamic because you have invited people and are running the session. Try to sit at the side as an observer, not the director of the session.

 

In terms of the venue, try to make sure it is as quiet as possible; good natural light and even high ceilings can help spark creative discussion (Meyers-Levy and Zhu 2007).

 


6. Set and state the norms

A common problem in qualitative focus group discussions is that some people dominate the debate, while others are shy and contribute little. Chambers (2002) suggests simply saying at the beginning of the session that this tends to happen, to make people conscious of sharing too much or too little. You can also try to actively manage this during the session by prompting other people to speak, going round the room person by person, or using more formal systems where people raise their hands to talk or have to be holding a stone. These methods are more time-consuming for the facilitator and can stifle open discussion, so it's best to use them only when necessary.

 

You should also set out ground rules, attempting to create an open space for uncritical discussion. It's not usually the aim for people to criticise the view of others, nor for the facilitator to be seen as the leader and boss. Make these things explicit at the start to make sure there is the right atmosphere for sharing: one where there is no right or wrong answer, and everyone has something valuable to contribute.

 


7. Exercises and energisers

To prompt better discussion when people are tired or not forthcoming, you can use exercises such as card ranking, role play, and prompts for discussion such as stories or newspaper articles. Chambers (2002) suggests dozens of these, as well as some off-the-wall 'energizer' exercises: fun games to wake people up and encourage discussion. There is more on this in the last blog post. It can really help to go round the room and have people introduce themselves with a fun fact, not just to get the names and voices on tape for later identification, but as a warm-up.

 

Also, the first question, exercise or discussion point should be easy. If the first topic is 'How did you feel when you had cancer?', that can be pretty intimidating to start with. Much simpler questions, such as 'What was the hospital food like?' or even 'How was your trip here?', are topics everyone can easily contribute to and safely argue over, gaining confidence to share something deeper later on.

 


8. Step back, and step out

In focus groups, the aim is usually to get participants to discuss things with each other, not to hold a series of dialogues with the facilitator. The power dynamics of the group need to reflect this, and as soon as things are set in motion, the researcher should try to intervene as little as possible, occasionally asking for clarification or setting things back on track. Thus it's also their role to help participants understand this, and allow the group discussion to be as co-interactive as possible.

 

“When group dynamics worked well the co-participants acted as co-researchers taking the research into new and often unexpected directions and engaging in interaction which were both complementary (such as sharing common experiences) and argumentative”
- Kitzinger 1994

 


9. Anticipate depth

Focus groups usually last a long time, rarely less than two hours, and even a half or whole day of discussion can be appropriate if there are lots of complex topics to discuss. It's also OK to consider having participants attend multiple focus groups if there is a lot to cover; just consider what will best fit around the lives of your participants.

 

At the end of these you should find there is a lot of detailed and deep qualitative data for analysis. Making lots of notes during the session can really help you digest this: a summary of key issues, your own reflexive comments on the process, and the unspoken subtext (who wasn't sharing on which topics, and what people mean when they say 'you know, that lady with the big hair').


You may also find that qualitative analysis software like Quirkos can help pull together all the complex themes and discussions from your focus groups, and break down the mass of transcribed data you will end up with! We designed Quirkos to be very simple and easy to use, so do download and try for yourself...

 

 

 

Considering and planning for qualitative focus groups

focus groups qualitative

 

This is the first in a two-part series on focus groups. This week, we are looking at some of the reasons why you might consider using them in a research project, and questions to make sure they are well integrated into your research strategy. Next week we will look at some practical tips for effectively running and facilitating a successful session.


Focus groups have been used as a research method since the 1950s, but were not as common in academia until much later (Colucci 2007). Essentially they are time limited sessions, usually in a shared physical space, where a group of individuals are invited to discuss with each other and a facilitator a topic of interest to the researcher.


These should not be seen as ‘natural’ group settings. They are not really an ethnographic method, because even if the group already exists (for example, people who work together or belong to the same social group), the session exists purely to create a dialogue for research purposes.


Together with ‘focused’ or semi-structured interviews, they are one of the most commonly used methods in qualitative research, both in market research and the social sciences. So what situations and research questions are they appropriate for?


If you are considering choosing focus groups as an easy way to quickly collect data from a large number of respondents, think again! Although I have seen a lot of market research firms do a single focus group as the extent of their research, one group generates limited data on its own. It’s also wrong to treat data from a focus group as equivalent to interview data from the same number of people: there is a group dynamic, which is usually the main benefit of adopting this approach. Focus groups are best at recording the interactions and debate between a group of people, not many separate opinions.


They are also very difficult to schedule and manage from a practical standpoint. The researcher must find a suitably large and quiet space that everyone can attend, at a mutually convenient time. Compared with scheduling one-on-one interviews, the practicalities are much more difficult: a café or small office is rarely a good venue. It may be necessary to hire a dedicated venue or meeting room, as well as proper microphones, to make sure everyone’s voice can be heard in the recording. The numbers that actually show up on the day will always fluctuate, so it’s unusual for all focus groups to have the same number of participants.


Although a lot of research projects seem to just do 3 or 4 focus groups, it’s usually best to try for a larger number, because the dynamics and data are likely to be very different in each one. In general you are less likely to see saturation on complex issues, as things go ‘off the rails’ and participants take things in new directions. If managed right, this should be enlightening rather than scary, but you need to anticipate this possibility, and make sure you are planning to collect enough data to cover all the bases.


So, before you commit to focus groups in your qualitative methods, go through the questions below and make sure you have reasons to justify their inclusion. There isn’t a right answer to any of them, because they will vary so much between different research projects. But once you can answer these issues, you will have a great idea of how focus groups fit into your study, and be able to write them up for your methodology section.

 

Planning Groups

How accessible will focus groups be to your planned participants? Are participants going to have language or confidence issues? Are you likely to get a good range of participation? If the people you want to talk to are shy, or not used to speaking in the language the researcher wants to conduct the session in, focus groups may not get everyone talking as much as you would like.


Are there anonymity issues? Are people with a stigmatising condition going to be willing to disclose their status or experience to others in the group? Will most people already know each other (and their secrets) and some not? When working with sensitive issues, you need to consider these potential problems, and your ethics review board will want to know you’ve considered this too.


What size of group will work best, and is it appropriate to plan focus groups around pre-existing groups? Do you want to choose people in a group that have very different experiences to provoke debate or conflict? Alternatively you can schedule groups of people with similar backgrounds or opinions to better understand a particular subset of your population.

 

Format

What will the format of your focus group be: just an open discussion? Or will you use prompts, games, ranking exercises, card games, pictures, media clippings, flash cards or other tools to encourage discussion and interactivity (see Colucci 2007)? These can be useful not just as a prompt, but as a point of commonality and comparison between groups. But make sure they are appropriate for the kind of group you want to work with, and that they don’t seem forced or patronising (Kitzinger 1994).


Analysis

Last of all, think about how you are going to analyse the data. Focus groups really require an extra level of analysis: the dynamic and dialectic can be seen as an extra layer on what participants are revealing about themselves. You might also need to be able to identify individual speakers in the transcript and possibly their demographic details if you want to explore these.


What is the aim within your methodology: to generate open discussion, or confirm and detail a specific position? Often focus groups can be very revealing if you have a very loose theoretical grounding, or are trying to initially set a research question.


How will the group data triangulate as part of a mixed methodology? Will the same people be interviewed or surveyed? What explicitly will you get out of the focus groups that will uniquely contribute to the data?

 


So this all sounds very cautionary and negative, but focus groups can be a wonderful, rich and dynamic data tool, that really challenges the researcher and their assumptions. Finally, focus groups are INTENSE experiences for a researcher. There are so many things to juggle, including the data collection, facilitating and managing group dynamics, while also taking notes and running out to let in latecomers. It’s difficult to do with just one person, so make sure you get a friend or colleague to help out!

 

Quirkos can help you to manage and analyse your focus group transcriptions. If you have used other qualitative analysis software before, you might be surprised at how easy and visual Quirkos makes the analysis of qualitative text – you might even get to enjoy it! You can download a trial for free and see how it works, but there are also a bunch of video tutorials and walk-throughs so you quickly get the most out of your qualitative data.

 


Further Reading and References

 

Colucci, E., 2007, Focus groups can be fun: the use of activity-oriented questions in focus group discussions, Qual Health Res, 17(10), http://qhr.sagepub.com/content/17/10/1422.abstract


Grudens-Schuck, N., Allen, B., Larson, 2004, Methodology Brief: Focus group fundamentals, Extension Community and Economic Development Publications, Book 12, http://lib.dr.iastate.edu/extension_communities_pubs/12


Kitzinger, J., 1994, The methodology of Focus Groups: the importance of interaction between research participants, Sociology of Health and Illness, 16(1), http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep11347023/pdf

 

Robinson, N., 1999, The use of focus group methodology with selected examples from sexual health research, Journal of Advanced Nursing, 29(4), 905-913

 

 

Circles and feedback loops in qualitative research

qualitative research feedback loops

The best qualitative research forms an iterative loop, examining and then re-examining. There are multiple reads of data, multiple layers of coding, and hopefully, constantly improving theory and insight into the underlying lived world. During the research process it is best to try to be in a constant state of feedback with your data and theory.


During your literature review, you may make several cycles through the published literature, with each pass revealing a deeper network of links. You will typically see this when you start going back to ‘seminal’ texts on core concepts from older publications, showing connected cycles of different interpretations and trends in methodology. You can see this with paradigm trends like social capital, neo-liberalism and power. It’s possible to see major theorists like Foucault, Chomsky and Butler each create new cycles of debate in the field, building on the previous literature.


A research project will often have a similar feedback loop between the literature and the data, where the theory influences the research questions and methodology, but engagement with the real ‘folk world’ challenges interpretations of the data and the practicalities of data collection. Thus the literature is challenged by the research process and findings, and a new reading of the literature is demanded to corroborate or challenge new interpretations.

 

Thus it’s a mistake to think that a literature review only happens at the beginning of the research process: it is important to engage with theory again, not just at the end of a project when drawing conclusions and writing up, but during the analysis process itself. Especially in qualitative research, the data will rarely fit neatly with one theory or another, but will demand a synthesis or a new angle on existing research.

 

The coding process is also like this, in that it usually requires many cycles through the data. After reading one source, it can feel like the major themes and codes for the project are clear, and will set the groundwork for the analytic framework. But what if you had started with another source? Would the codes you created have been the same? It’s easy either to get complacent with the first codes you start with, or to worry that the coding structure will get too complicated if you keep creating new nodes.

 

However, there will always be sources which contain unique data, or express different opinions and experiences that don’t chime with existing codes. And what if this new code actually fits some of the previous data better? You would need to go back to previously analysed data sources and explore them again. This is why most experts recommend multiple passes through the data: not just to be consistent and complete, but because there is a feedback loop in the codes and themes themselves. Once you have a first coding structure, the framework itself can be examined and reinterpreted, looking for groupings and higher-level interpretations. I’ve talked about this more in this blog article about qualitative coding.


Quirkos is designed to keep researchers deeply embedded in this feedback process, with each coding event subtly changing the dynamics of the coding structure. Connections and coding are shown in real time, so you can always see what is happening and what is being coded most, and thus constantly challenge your interpretation and analysis process.

 

Queries, questions and sub-set analysis should also be easy to run and dynamic, because good qualitative researchers shouldn’t only interrogate and interpret the data at the end of the analysis process; it should be happening throughout. That way surprises and uncertainties can be identified early, and new readings of the data can illuminate these discoveries.

 

In a way, qualitative analysis is never done, and it is not usually a linear process. Even when project practicalities dictate an end point, a coded research project in software like Quirkos sits on your hard drive, awaiting time for secondary analysis, or for the data to be challenged from a different perspective and research question. And to help you when you get there, your data and coding bubbles will immediately show you where you left off, what the biggest themes were and how they connected, and let you go to any point in the text to see what was said.

 

And you shouldn’t need to go back and do retraining to use the software again. I hear so many stories of people who have done training courses for major qualitative data analysis software, and when it comes to revisiting their data, the operations are all forgotten. Now, Quirkos may not have as many features as other software, but the focus on keeping things visual and in plain sight means that the controls should comfortably fit under your thumb again, even after a long stretch away.

 

So download the free trial of Quirkos today, and see how its different way of presenting the data helps you continuously engage with your data in fresh ways. Once you start thinking in circles, it’s tough to go back!

 

Triangulation in qualitative research

triangulation facets face qualitative

 

Triangles are my favourite shape,
  Three points where two lines meet

                                                                           alt-J

 

Qualitative methods are sometimes criticised as being subjective, based on single, unreliable sources of data. But with the exception of some case study research, most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data, be it combining quantitative with qualitative, or ‘qual on qual’.


I don’t think of data in qualitative research as static and definite. It’s not like a point on a graph: qualitative data has more depth and context than that. In triangulation, we think of two points of data that move towards an intersection. In fact, if you are trying to visualise triangulation, consider instead two vectors: directions suggested by two sources of data, which may converge at some point, creating a triangle. This point of intersection is where the researcher has seen a connection between the inferences about the world implied by two different sources of data. However, there may be lines that run parallel, or divergent directions that will never cross: not all data will agree and connect, and it’s important to note this too.


You can triangulate almost all the constituent parts of the research process: method, theory, data and investigator.


Data triangulation, (also called participant or source triangulation) is probably the most common, where you try to examine data from different respondents but collected using the same method. If we consider that each participant has a unique and valid world view, the researcher’s job is often to try and look for a pattern or contradictions beyond the individual experience. You might also consider the need to triangulate between data collected at different times, to show changes in lived experience.

 

Since every method has weaknesses or biases, it is common for qualitative research projects to collect data in a variety of different ways to build up a better picture. Thus a project can collect data from the same or different participants using different methods, and use method (or between-method) triangulation to integrate them. Some qualitative techniques can be very complementary: for example, semi-structured interviews can be combined with participant diaries or focus groups, to provide different levels of detail and voice. What people share in a group discussion may be less private than what they would reveal in a one-to-one interview, but in a group dynamic people can be reminded of issues they might otherwise forget to talk about.


Researchers can also design a mixed-method qualitative and quantitative study where very different methods are triangulated. This may take the form of a quantitative survey, where people rank an experience or service, combined with a qualitative focus group, interview or even open-ended comments. It’s also common to see a validated measure from psychology used to give a metric to something like pain, anxiety or depression, and then combine this with detailed data from a qualitative interview with that person.


In ‘theoretical triangulation’, a variety of different theories are used to interpret the data, such as discourse, narrative and context analysis, and these different ways of dissecting and illuminating the data are compared.


Finally there is ‘investigator triangulation’, where different researchers each conduct separate analysis of the data, and their different interpretations are reconciled or compared. In participatory analysis it’s also possible to have a kind of respondent triangulation, where a researcher is trying to compare their own interpretations of data with that of their respondents.

 

 

While there is a lot written about the theory of triangulation, there is not as much about actually doing it (Jick 1979). In practice, researchers often find it very difficult to DO the triangulation: different data sources tend to be difficult to mesh together, and will have very different discourses and interpretations. If you are seeing ‘anger’ and ‘dissatisfaction’ in interviews with a mental health service, it will be difficult to triangulate such emotions with the formal language of a policy document on service delivery.


In general the qualitative literature cautions against seeing triangulation as a way to improve the validity and reliability of research, since this tends to imply a rather positivist agenda in which there is an absolute truth which triangulation gets us closer to. However, there are plenty who suggest that the quality of qualitative research can be improved in this way, such as Golafshani (2003). So you need to be clear about your own theoretical underpinning: can you get to an ‘absolute’ or ‘relative’ truth through your own interpretations of two types of data? Perhaps rather than positivist this is a pluralist approach, creating multiplicities of understandings while still allowing for comparison.


It’s worth bearing in mind that triangulation and multiple methods aren’t an easy route to better research. You still need to do all the different sources justice: make sure data from each method is fully analysed, and iteratively coded (if appropriate). You should also keep going back and forth, analysing data from alternate methods in a loop to make sure they are well integrated and considered.

 


Qualitative data analysis software can help with all this, since you will have a lot of data to process in different and complementary ways. In software like Quirkos you can create levels, groups and clusters to keep different analysis stages together, and have quick ways to do sub-set analysis on data from just one method. Check out the features overview or mixed-method analysis with Quirkos for more information about how qualitative research software can help manage triangulation.

 


References and further reading

Carter et al. 2014, The use of triangulation in qualitative research, Oncology Nursing Forum, 41(5), https://www.ncbi.nlm.nih.gov/pubmed/25158659

 

Denzin, 1978 The Research Act: A Theoretical Introduction to Sociological Methods, McGraw-Hill, New York.

 

Golafshani, N., 2003, Understanding reliability and validity in qualitative research, The Qualitative Report, 8(4), http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1870&context=tqr


Bekhet, A., Zauszniewski, J., 2012, Methodological triangulation: an approach to understanding data, Nurse Researcher, 20(2), http://journals.rcni.com/doi/pdfplus/10.7748/nr2012.11.20.2.40.c9442

 

Jick, 1979, Mixing Qualitative and Quantitative Methods: Triangulation in Action,  Administrative Science Quarterly, 24(4),  https://www.jstor.org/stable/2392366

 

 

100 blog articles on qualitative research!

images by Paul Downey and AngMoKio

 

Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't get into the literature, like transcribing without getting backache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts, easily the length of a good-sized PhD thesis!

 

That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.

 

While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!

 


Epistemology

What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences between qualitative and quantitative methods and epistemologies

 

What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research

 

Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project

 

Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data

 

Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)

 

 


Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data

 

Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy

 

10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods; this article provides some hints to make sure they go smoothly and provide rich data

 

Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis it requires acknowledging some caveats

 

Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis; find out how it can help your study

 


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls

 

Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...

 

Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose from, and they should be closely matched to the research objectives

 

Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research

 

Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it

 

 

Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data

 

Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this

 

6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories

 

Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data

 

Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or more general

 

 


Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs

 

How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data

 

Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool

 

Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, levels allow you to create grouped categories of themes that span multiple clustered bubbles

 

10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis

 

 

Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA-based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client

 

Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach

 

The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users

 

Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted

 

Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems