Tips for running effective focus groups

In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly, and to ensure you get good engagement from your participants.

 


1. Make sure you have a helper!

It’s very difficult to run focus groups on your own. If you have to lay out the room, greet people, deal with refreshment requests, check the recording equipment is working, start video cameras, take notes, ask questions, let in late-comers and facilitate discussion, it’s much easier with two people, or even three for larger groups. You will probably want to focus on listening to the discussion, not taking notes and problem-solving at the same time. Having another facilitator or helper around can make a lot of difference to how well the session runs, as well as how much good data is recorded from it.

 


2. Check your recording strategy

Most people will record audio and transcribe their focus groups later. You need to make sure that your recording equipment will pick up everyone in the room, and also that you have a backup dictaphone and spare batteries! There are many more tips in this blog post. If you are planning to video the session, think it through carefully.

 

Do you have the right equipment? A phone camera might seem OK, but phones usually struggle to record long sessions, and are difficult to position in a way that will show everyone clearly. Cameras designed for recording gigs and band practice are actually really good for focus groups: they tend to have wide-angle lenses and good microphones, so you don’t need to record separate audio. You might also want more than one camera (in a round-table discussion, someone will always have their back to a single camera). Then you will want to think about using qualitative analysis software like Transana that supports multiple video feeds.

 

You also need to make sure that video is culturally appropriate for your group (some religions and cultures don’t approve of taking images) and that it won’t make people nervous and clam up in discussion. I usually find a dictaphone less imposing than a camera lens, but you then lose the ability to record the body language of the group. Video also makes it much easier to identify different speakers!

 


3. Consent and introductions

I always prefer to deal with the consent forms and participant information before the session. Faffing around with forms to sign at the start or end of the workshop takes up a lot of time best used for discussion, and leaves people feeling rushed when reading the project information. E-mail it to people ahead of time, so at least they can just sign on the day, or bring a completed form with them. I really feel that participants should get the option to see what they are signing up for before they agree to come to a session, so they are not made uncomfortable on the day if it doesn't sound right for them. However, make sure there is an opportunity for people to ask any questions, and state any additional preferences, either privately or in front of the group.

 


4. Food and drink

You may decide not to have refreshments at all (your venue might dictate that) but I really love having a good spread of food and drink at a focus group. It makes it feel more like a party or family occasion than an interrogation procedure, and really helps people open up.

 

While tea, coffee and biscuits/cookies might be enough for most people, I love baking and always bring something home-baked like a cake or cookies. Getting to talk about and offer food is a great icebreaker, and it also makes people feel valued when you have spent the time to make something. A key part of getting good data from a focus group is setting a congenial atmosphere, and an interesting choice of drinks or fruit can really help with this. Don’t forget to ask about dietary requirements ahead of time, and consider the need for vegetarian, diabetic and gluten-free options.

 


5. The venue and layout

A lot has already been said about the best way to set out a focus group discussion (see Chambers 2002), but there are a few basic things to consider. First, a round or rectangular table arrangement works best, not lecture hall-style rows. Everyone should be able to see the face of everyone else. It’s also important not to have the researcher/facilitator at the head or even the centre of the table. You are not the boss of the session, merely there to guide the debate. There is already a power dynamic because you have invited people and are running the session. Try to sit at the side as an observer, not the director of the session.

 

In terms of the venue, try to make sure it is as quiet as possible; good natural light and even high ceilings can help spark creative discussion (Meyers-Levy and Zhu 2007).

 


6. Set and state the norms

A common problem in qualitative focus group discussions is that some people dominate the debate, while others are shy and contribute little. Chambers (2002) suggests simply saying at the beginning of the session that this tends to happen, to make people conscious of sharing too much or too little. You can also actively manage this during the session by prompting other people to speak, going round the room person by person, or using more formal systems where people raise their hands to talk or must be holding a stone to speak. These methods are more time consuming for the facilitator and can stifle open discussion, so it's best to use them only when necessary.

 

You should also set out ground rules, aiming to create an open space for uncritical discussion. It's not usually the aim for people to criticise the views of others, nor for the facilitator to be seen as the leader and boss. Make these things explicit at the start to make sure there is the right atmosphere for sharing: one where there is no right or wrong answer, and everyone has something valuable to contribute.

 


7. Exercises and energisers

To prompt better discussion when people are tired or not forthcoming, you can use exercises such as card ranking, role play, and prompts for discussion such as stories or newspaper articles. Chambers (2002) suggests dozens of these, as well as some off-the-wall 'energiser' exercises: fun games to wake people up and encourage discussion. More on this in the previous blog post. It can really help to go round the room and have people introduce themselves with a fun fact, not just to get the names and voices on tape for later identification, but as a warm-up.

 

Also, the first question, exercise or discussion point should be easy. If the first topic is 'How did you feel when you had cancer?' that can be pretty intimidating to start with. Something much simpler, such as 'What was hospital food like?' or even 'How was your trip here?', is a topic everyone can easily contribute to and safely argue over, gaining confidence to share something deeper later on.

 


8. Step back, and step out

In focus groups, the aim is usually to get participants to discuss with each other, not to hold a series of dialogues with the facilitator. The power dynamics of the group need to reflect this, and as soon as things are set in motion the researcher should try to intervene as little as possible, occasionally asking for clarification or setting things back on track. It's also their role to help participants understand this, and allow the group discussion to be as co-interactive as possible.

 

“When group dynamics worked well the co-participants acted as co-researchers taking the research into new and often unexpected directions and engaging in interaction which were both complementary (such as sharing common experiences) and argumentative”
- Kitzinger 1994

 


9. Anticipate depth

Focus groups usually last a long time, rarely less than two hours, and even a half or whole day of discussion can be appropriate if there are lots of complex topics to discuss. It's also fine to have participants attend multiple focus groups if there is a lot to cover; just consider what will best fit around the lives of your participants.

 

At the end of these you should find there is a lot of detailed and deep qualitative data for analysis. It really helps with digesting this to make lots of notes during the session: a summary of key issues, your own reflexive comments on the process, and the unspoken subtext (who wasn't sharing on what topics, and what people mean when they say, 'you know, that lady with the big hair').


You may also find that qualitative analysis software like Quirkos can help pull together all the complex themes and discussions from your focus groups, and break down the mass of transcribed data you will end up with! We designed Quirkos to be very simple and easy to use, so do download and try for yourself...

 

 

 

Considering and planning for qualitative focus groups


 

This is the first in a two-part series on focus groups. This week, we are looking at some of the reasons why you might consider using them in a research project, and questions to make sure they are well integrated into your research strategy. Next week we will look at some practical tips for effectively running and facilitating a successful session.


Focus groups have been used as a research method since the 1950s, but were not as common in academia until much later (Colucci 2007). Essentially they are time limited sessions, usually in a shared physical space, where a group of individuals are invited to discuss with each other and a facilitator a topic of interest to the researcher.


These should not be seen as ‘natural’ group settings. They are not really an ethnographic method, because even if the group is made up of people who already know each other (for example people who work together or belong to the same social group), the session exists purely to create a dialogue for research purposes.


Together with ‘focused’ or semi-structured interviews, they are one of the most commonly used methods in qualitative research, both in market research and the social sciences. So what situations and research questions are they appropriate for?


If you are considering choosing focus groups as an easy way to quickly collect data from a large number of respondents, think again! Although I have seen a lot of market research firms do a single focus group as the extent of their research, one group generates limited data on its own. It’s also a mistake to treat data from a focus group as equivalent to interview data from the same number of people: there is a group dynamic, which is usually the main benefit of adopting this approach. Focus groups are best at recording the interactions and debate between a group of people, not many separate opinions.


They are also very difficult to schedule and manage from a practical standpoint. The researcher must find a suitably large and quiet space that everyone can attend at a mutually convenient time. Compared with scheduling one-on-one interviews, the practicalities are much more difficult: meeting in a café or small office is rarely a good option. It may be necessary to hire a dedicated venue or meeting room, as well as proper microphones to make sure everyone’s voice can be heard in a recording. The numbers that actually show up on the day will always fluctuate, so it’s unusual for all focus groups to have the same number of participants.


Although a lot of research projects seem to just do 3 or 4 focus groups, it’s usually best to try for a larger number, because the dynamics and data are likely to be very different in each one. In general you are less likely to see saturation on complex issues, as things go ‘off the rails’ and participants take things in new directions. If managed right, this should be enlightening rather than scary, but you need to anticipate this possibility, and make sure you are planning to collect enough data to cover all the bases.


So, before you commit to focus groups in your qualitative methods, go through the questions below and make sure you have reasons to justify their inclusion. There isn’t a right answer to any of them, because they will vary so much between different research projects. But once you can answer these questions, you will have a good idea of how focus groups fit into your study, and be able to write them up for your methodology section.

 

Planning Groups

How accessible will focus groups be to your planned participants? Are participants going to have language or confidence issues? Are you likely to get a good range of participation? If the people you want to talk to are shy, or not used to speaking in the language the researcher wants to conduct the session in, focus groups may not get everyone talking as much as you would like.


Are there anonymity issues? Are people with a stigmatising condition going to be willing to disclose their status or experience to others in the group? Will most people already know each other (and their secrets) and some not? When working with sensitive issues, you need to consider these potential problems, and your ethics review board will want to know you’ve considered this too.


What size of group will work best, and is it appropriate to plan focus groups around pre-existing groups? Do you want to choose people in a group that have very different experiences to provoke debate or conflict? Alternatively you can schedule groups of people with similar backgrounds or opinions to better understand a particular subset of your population.

 

Format

What will the format of your focus group be: just an open discussion? Or will you use prompts, games, ranking exercises, card games, pictures, media clippings, flash cards or other tools to generate discussion and interactivity (see Colucci 2007)? These can be useful not just as a prompt, but as a point of commonality and comparison between groups. But make sure they are appropriate for the kind of group you want to work with, and that they don’t seem forced or patronising (Kitzinger 1994).


Analysis

Last of all, think about how you are going to analyse the data. Focus groups really require an extra level of analysis: the dynamic and dialectic can be seen as an extra layer on what participants are revealing about themselves. You might also need to be able to identify individual speakers in the transcript and possibly their demographic details if you want to explore these.


What is the aim within your methodology: to generate open discussion, or confirm and detail a specific position? Often focus groups can be very revealing if you have a very loose theoretical grounding, or are trying to initially set a research question.


How will the group data triangulate as part of a mixed methodology? Will the same people be interviewed or surveyed? What explicitly will you get out of the focus groups that will uniquely contribute to the data?

 


This all sounds very cautionary and negative, but focus groups can be a wonderful, rich and dynamic data collection tool that really challenges the researcher and their assumptions. Finally, focus groups are INTENSE experiences for a researcher. There are so many things to juggle, including the data collection, facilitating and managing group dynamics, taking notes and running out to let in latecomers. It’s difficult to do with just one person, so make sure you get a friend or colleague to help out!

 

Quirkos can help you to manage and analyse your focus group transcriptions. If you have used other qualitative analysis software before, you might be surprised at how easy and visual Quirkos makes the analysis of qualitative text – you might even get to enjoy it! You can download a trial for free and see how it works, and there are also a bunch of video tutorials and walk-throughs so you can quickly get the most out of your qualitative data.

 


Further Reading and References

 

Colucci, E., 2007, Focus groups can be fun: the use of activity-oriented questions in focus group discussions, Qual Health Res, 17(10), http://qhr.sagepub.com/content/17/10/1422.abstract


Grudens-Schuck, N., Allen, B., Larson, 2004, Methodology Brief: Focus group fundamentals, Extension Community and Economic Development Publications, Book 12, http://lib.dr.iastate.edu/extension_communities_pubs/12


Kitzinger, J., 1994, The methodology of Focus Groups: the importance of interaction between research participants, Sociology of Health and Illness, 16(1), http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep11347023/pdf

 

Robinson, N., 1999, The use of focus group methodology with selected examples from sexual health research, Journal of Advanced Nursing, 29(4), 905-913

 

 

Circles and feedback loops in qualitative research


The best qualitative research forms an iterative loop, examining, and then re-examining. There are multiple reads of data, multiple layers of coding, and hopefully, constantly improving theory and insight into the underlying lived world. During the research process it is best to try to be in a constant state of feedback with your data, and theory.


During your literature review, you may make several cycles through the published literature, with each pass revealing a deeper network of links. You will typically see this when you start going back to ‘seminal’ texts on core concepts from older publications, showing cycles of different interpretations and trends in methodology that are connected. You can see this with paradigm trends like social capital, neo-liberalism and power. It’s possible to see major theorists like Foucault, Chomsky and Butler each create new cycles of debate in the field, building up from the previous literature.


A research project will often have a similar feedback loop between the literature and the data, where theory influences the research questions and methodology, but engagement with the real ‘folk world’ challenges interpretations of the data and the practicalities of data collection. Thus the literature is challenged by the research process and findings, and a new reading of the literature is demanded to corroborate or challenge new interpretations.

 

Thus it’s a mistake to think that a literature review only happens at the beginning of the research process: it is important to engage with theory again, not just at the end of a project when drawing conclusions and writing up, but during the analysis process itself. Especially with qualitative research, the data will rarely fit neatly with one theory or another, but will demand a synthesis or a new angle on existing research.

 

The coding process is also like this, in that it usually requires many cycles through the data. After reading one source, it can feel like the major themes and codes for the project are clear, and will set the groundwork for the analytic framework. But what if you had started with another source? Would the codes you created have been the same? It’s easy to get complacent with the first codes you start with, or to worry that the coding structure will get too complicated if you keep creating new nodes.

 

However, there will always be sources which contain unique data or express different opinions and experiences that don’t chime with existing codes. And what if a new code actually fits some of the previous data better? You would need to go back to previously analysed data sources and explore them again. This is why most experts recommend multiple passes through the data, not just to be consistent and complete, but because there is a feedback loop in the codes and themes themselves. Once you have a first coding structure, the framework itself can be examined and reinterpreted, looking for groupings and higher-level interpretations. I’ve talked about this more in this blog article about qualitative coding.


Quirkos is designed to keep researchers deeply embedded in this feedback process, with each coding event subtly changing the dynamics of the coding structure. Connections and coding are shown in real time, so you can always see what is happening and what is being coded most, and thus constantly challenge your interpretation and analysis process.

 

Queries, questions and sub-set analysis should also be easy to run and dynamic, because good qualitative researchers shouldn’t only interrogate and interpret the data at the end of the analysis process; it should be happening throughout. That way surprises and uncertainties can be identified early, and new readings of the data can illuminate these discoveries.

 

In a way, qualitative analysis is never done, and it is not usually a linear process. Even when project practicalities dictate an end point, a coded research project in software like Quirkos sits on your hard drive, awaiting time for secondary analysis, or for the data to be challenged from a different perspective and research question. And to help you when you get there, your data and coding bubbles will immediately show you where you left off: what the biggest themes were, how they connected, and let you go to any point in the text to see what was said.

 

And you shouldn’t need retraining to use the software again. I hear so many stories of people who have done training courses for major qualitative data analysis software, and when it comes to revisiting their data, the operations are all forgotten. Now, Quirkos may not have as many features as other software, but the focus on keeping things visual and in plain sight means that the operations should come back to you comfortably, even after not using it for a long stretch.

 

So download the free trial of Quirkos today, and see how its different way of presenting the data helps you continuously engage with your data in fresh ways. Once you start thinking in circles, it’s tough to go back!

 

Triangulation in qualitative research


 

Triangles are my favourite shape,
  Three points where two lines meet

                                                                           alt-J

 

Qualitative methods are sometimes criticised as being subjective, based on single, unreliable sources of data. But with the exception of some case study research, most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data, be it combining quantitative with qualitative, or ‘qual on qual’.


I don’t think of data in qualitative research as a static and definite thing. It’s not like a point of data on a graph: qualitative data has more depth and context than that. In triangulation, we think of two points of data that move towards an intersection. In fact, if you are trying to visualise triangulation, consider two vectors – directions suggested by two sources of data – that may converge at some point, creating a triangle. This point of intersection is where the researcher has seen a connection between the inferences about the world implied by two different sources of data. However, there may be lines that run parallel, or divergent directions that will never cross: not all data will agree and connect, and it’s important to note this too.


You can triangulate almost all the constituent parts of the research process: method, theory, data and investigator.


Data triangulation (also called participant or source triangulation) is probably the most common, where you examine data from different respondents collected using the same method. If we consider that each participant has a unique and valid world view, the researcher’s job is often to look for patterns or contradictions beyond the individual experience. You might also consider the need to triangulate between data collected at different times, to show changes in lived experience.

 

Since every method has weaknesses or biases, it is common for qualitative research projects to collect data in a variety of different ways to build up a better picture. Thus a project can collect data from the same or different participants using different methods, and use method (or between-method) triangulation to integrate them. Some qualitative techniques can be very complementary: for example, semi-structured interviews can be combined with participant diaries or focus groups, to provide different levels of detail and voice. What people share in a group discussion may be less private than what they would reveal in a one-to-one interview, but in a group dynamic people can be reminded of issues they might otherwise forget to talk about.


Researchers can also design a mixed-method qualitative and quantitative study where very different methods are triangulated. This may take the form of a quantitative survey, where people rank an experience or service, combined with a qualitative focus group, interview or even open-ended comments. It’s also common to see a validated measure from psychology used to give a metric to something like pain, anxiety or depression, and then combine this with detailed data from a qualitative interview with that person.


In ‘theoretical triangulation’, a variety of different theories are used to interpret the data, such as discourse, narrative and context analysis, and these different ways of dissecting and illuminating the data are compared.


Finally there is ‘investigator triangulation’, where different researchers each conduct separate analysis of the data, and their different interpretations are reconciled or compared. In participatory analysis it’s also possible to have a kind of respondent triangulation, where a researcher is trying to compare their own interpretations of data with that of their respondents.

 

 

While there is a lot written about the theory of triangulation, there is not as much about actually doing it (Jick 1979). In practice, researchers often find it very difficult to DO the triangulation: different data sources tend to be difficult to mesh together, and will have very different discourses and interpretations. If you are seeing ‘anger’ and ‘dissatisfaction’ in interviews with a mental health service, it will be difficult to triangulate such emotions with the formal language of a policy document on service delivery.


In general the qualitative literature cautions against seeing triangulation as a way to improve the validity and reliability of research, since this tends to imply a rather positivist agenda in which there is an absolute truth that triangulation gets us closer to. However, there are plenty who suggest that the quality of qualitative research can be improved in this way, such as Golafshani (2003). So you need to be clear about your own theoretical underpinning: can you get to an ‘absolute’ or a ‘relative’ truth through your own interpretations of two types of data? Perhaps rather than positivist this is a pluralist approach, creating multiplicities of understanding while still allowing for comparison.


It’s worth bearing in mind that triangulation and multiple methods aren’t an easy way to make better research. You still need to do all the different sources justice: make sure data from each method is fully analysed, and iteratively coded (if appropriate). You should also keep going back and forth, analysing data from alternate methods in a loop to make sure they are well integrated and considered.

 


Qualitative data analysis software can help with all this, since you will have a lot of data to process in different and complementary ways. In software like Quirkos you can create levels, groups and clusters to keep different analysis stages together, and have quick ways to do sub-set analysis on data from just one method. Check out the features overview or mixed-method analysis with Quirkos for more information about how qualitative research software can help manage triangulation.

 


References and further reading

Carter et al. 2014, The use of triangulation in qualitative research, Oncology Nursing Forum, 41(5), https://www.ncbi.nlm.nih.gov/pubmed/25158659

 

Denzin, N., 1978, The Research Act: A Theoretical Introduction to Sociological Methods, McGraw-Hill, New York.

 

Golafshani, N., 2003, Understanding reliability and validity in qualitative research, The Qualitative Report, 8(4), http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1870&context=tqr


Bekhet, A., Zauszniewski, J., 2012, Methodological triangulation: an approach to understanding data, Nurse Researcher, 20(2), http://journals.rcni.com/doi/pdfplus/10.7748/nr2012.11.20.2.40.c9442

 

Jick, T., 1979, Mixing Qualitative and Quantitative Methods: Triangulation in Action, Administrative Science Quarterly, 24(4), https://www.jstor.org/stable/2392366

 

 

100 blog articles on qualitative research!


 

Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't get into the literature, like transcribing without getting back-ache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts, easily the length of a good-sized PhD thesis!

 

That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.

 

While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!

 


Epistemology

What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences between quantitative and qualitative methods and epistemologies

 

What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research

 

Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project

 

Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data

 

Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)

 

 


Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data

 

Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy

 

10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods; this article provides some hints to make sure they go smoothly and provide rich data

 

Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis, requires acknowledging some caveats

 

Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis, find out how it can help your study

 


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls

 

Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...

 

Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose from, and they should be closely matched with the research objectives

 

Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research

 

Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it

 

 

Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data

 

Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this

 

6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories

 

Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data

 

Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or broader

 

 


Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs

 

How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data

 

Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool

 

Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, levels allow you to create grouped categories of themes that span multiple clustered bubbles

 

10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis

 

 

Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA-based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client

 

Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach

 

The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users

 

Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted

 

Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems

 

 

 

Thinking About Me: Reflexivity in science and qualitative research


Reflexivity is a process (and it should be a continuing process) of reflecting on how the researcher could be influencing a research project.


In a traditional positivist research paradigm, the researcher attempts to be a neutral influence on the research. They make rational and logical interpretations, assume a ‘null hypothesis’ in which they expect experiments to show no effect, and have no pre-defined concept of what the research will show.


However, this is a lofty aspiration and difficult to achieve in practice. Humans are fallible and emotional beings, with conflicting pressures from jobs, publication records and their own hunches. There are countless stories of renowned academics having to retract papers, or even whole research careers, because of faked results, flawed interpretations or biased coding procedures.


Many consider it to be impossible to fully remove the influence of the researcher from the process, and so all research would be ‘tainted’ in some way by the prejudices of those in the project. This links into the concept of “implicit bias” where even well-meaning individuals are influenced by subconscious prejudices. These have been shown to have a significant discriminatory impact on pay, treatment in hospitals and recruitment along lines of gender and ethnicity.


So does this mean that we should abandon research, and the pursuit of truly understanding the world around us? No! Although we might reject the notion of attaining an absolute truth, that doesn’t mean we can’t learn something. Instead of pretending that the researcher is an invisible and neutral piece of the puzzle, a positionality and reflexivity approach argues that the background of the researcher should be detailed in the same way as the data collection methods and analytical techniques.


But how is this done in practice? Does a researcher have to bare their soul to the world, and submit their complete tax history? Not quite, but many in feminist and post-positivist methodologies will create a ‘positionality statement’ or ‘reflexivity statement’. This is a little like a CV or self-portrait of potential experiences and bias, in which the researcher is honest about personal factors that might influence their decisions and interpretations. These might include the age, gender, ethnicity and class of the researcher, social and research issues they consider important, their country and culture, political leanings, life experiences and education. In many cases a researcher will include such a statement with their research publications and outputs, just Googling ‘positionality statements’ will provide dozens of links to examples.

 

However, I feel that this is a minimum level of engagement with the issue, and it’s actually important to keep a reflexive stance throughout the research process. Just like how a one-off interview is not as accurate a record as a daily diary, keeping reflexivity notes as an ongoing part of a research journal is much more powerful. Here a researcher can log changes in their situation, assumptions and decisions made throughout the research process that might be affected by their personal stance. It’s important that the researcher is constantly aware of when they are making decisions, because each is a potential source of influence. This includes deciding what to study, who to sample, what questions to ask, and which sections of text to code and present in findings.


Why is this especially pertinent to qualitative research? It’s often raised in social science, especially in ethnography and close case study work with disadvantaged or hard-to-reach populations, where researchers have a much closer engagement with their subjects and data. It could be argued that there are more opportunities here for personal stance to have an impact, and that many qualitative methods, especially analysis using grounded theory, are open to multiple interpretations that vary by researcher. Many claim that qualitative research and data analysis is more subjective than quantitative methods, but as we’ve argued above, it might be better to say that they are both subjective. Many qualitative epistemological approaches are not afraid of this subjectivity, but argue it is better brought into the open and thus challenged, rather than kept in the dark.


Now, this may sound a little crazy, especially to those in traditionally positivist fields like the STEM subjects (Science, Technology, Engineering, Mathematics). Here there is generally a different move: to use process and peer review to remove as many aspects of the research as possible that are open to subjective interpretation. This direction is fine too!


However, I would argue that researchers already have to make a type of reflexivity document: a conflict of interest statement. Here academics are supposed to declare any financial or personal interest in the research area that might influence their neutrality. This is just like a positionality statement! An admission that researchers can be influenced by prejudices and external factors, and that readers should be aware of such conflicts of interest when doing their own interpretation of the results.


If it can be the case that money can influence science (and it totally can) it’s also been shown that gender and other aspects of an academic's background can too. All reflexivity asks us to do is be open and honest with our readers about who we are, so they can better understand and challenge the decisions we make.

 

 

Like all our blog articles, this is intended to be a primer on some very complex issues. You’ll find a list of references and further reading below (in addition to the links included above). Don’t forget to try Quirkos for all your qualitative data analysis needs! It can help you keep, manage and code a reflexive journal throughout your analysis procedure. See this blog article for more!

 

 

References

 

Bourke, B., 2014, Positionality: Reflecting on the Research Process, The Qualitative Report 19, http://www.nova.edu/ssss/QR/QR19/bourke18.pdf


Day, E., 2002, Me, My*self and I: Personal and Professional Re-Constructions in Ethnographic Research, FQS 3(3) http://www.qualitative-research.net/index.php/fqs/article/view/824/1790


Greenwald, A., Krieger, L., 2006, Implicit Bias: Scientific Foundations, California Law Review, 94(4). http://www.jstor.org/stable/20439056


Lynch, M., 2000, Against Reflexivity as an Academic Virtue and Source of Privileged Knowledge, Theory, Culture & Society 17(3), http://tcs.sagepub.com/content/17/3/26.short


Savin-Baden, M., Major C., 2013, Personal stance, positionality and reflexivity, in Qualitative Research: The essential guide to theory and practice. Routledge, London.


Soros, G., 2013, Fallibility, reflexivity and the human uncertainty principle, Journal of Economic Methodology, 20(4) https://www.georgesoros.com/essays/fallibility-reflexivity-and-the-human-uncertainty-principle-2/

 

 

The importance of keeping open-ended qualitative responses in surveys


I once had a very interesting conversation at an MRS event with a market researcher from a major media company. He told me that they were increasingly ‘costing-out’ the qualitative open-ended questions from customer surveys because they were too expensive and time consuming to analyse. They were replacing open-ended questions with a series of Likert scale questions which could be examined automatically and statistically.

 

I hear similar arguments a lot, and I totally understand the sentiment: doing good qualitative research is expensive, and requires good interpretation. However, it’s just as possible to do statistical analysis poorly, and come up with meaningless and inaccurate answers. For example, when working with Likert scales, you have to be careful about which parametric tests you use, and make sure that the data is normally distributed (Sullivan and Artino 2013).
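As a rough illustration of that caution (not taken from the article, and using invented ratings), a sketch like the following checks whether two groups of Likert responses look roughly normal before choosing between a parametric test and a rank-based one:

```python
# Illustrative sketch only: invented ratings on an assumed 1-5 Likert scale.
from scipy import stats

group_a = [4, 5, 3, 4, 4, 5, 2, 4, 5, 3]   # hypothetical ratings, group A
group_b = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]   # hypothetical ratings, group B

# Shapiro-Wilk: a small p-value suggests the ratings are not normally
# distributed, in which case a rank-based test is usually the safer choice.
_, p_a = stats.shapiro(group_a)
_, p_b = stats.shapiro(group_b)

if p_a > 0.05 and p_b > 0.05:
    test_name = "independent t-test"
    stat, p = stats.ttest_ind(group_a, group_b)
else:
    test_name = "Mann-Whitney U"
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"{test_name}: statistic={stat:.2f}, p={p:.3f}")
```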

 

There is evidence that increasing the number of options in closed questions does not significantly change the responses participants give (Dawes 2008), so if you need a good level of nuance in customer perceptions, why not let your users choose their own words? “Quick qual” approaches, like asking people to use one word to describe the product or their experience, can be really illuminating. Better yet, these responses are easy to analyse, and present as an engaging word cloud!
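To make the ‘easy to analyse’ claim concrete, here is a minimal sketch (not a Quirkos feature, and with invented responses) that tallies one-word answers ready for a report or a word-cloud tool:

```python
# Minimal sketch: counting invented one-word "quick qual" responses.
from collections import Counter

responses = ["reliable", "Durable", "slow", "durable", "friendly",
             "durable", "Slow", "reliable"]

# Normalise case and whitespace so "Durable" and "durable" count together.
counts = Counter(word.strip().lower() for word in responses if word.strip())

for word, n in counts.most_common():
    print(f"{word}: {n}")
# durable: 3, reliable: 2, slow: 2, friendly: 1
```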

 

Even when you have longer responses, it’s not always necessary to take a full classification and quantification approach to qualitative survey data, such as in Nardo (2003). For most market research investigations, this level of detail is not needed by the researcher or the client.

 

Indeed, you don’t need to do deep analysis of the data to get some value from it. A quick read through some of the comments can make sure your questions are on track, and there aren’t other common issues being raised. It helps check you were asking the right questions, and can help explain why answers for some people aren’t matching up with the rest. As ever, qualitative data is great for surprises, responses you hadn’t thought of, and understanding motivations.

 

Removing open-ended questions means you can’t provide nice quotes or verbatims from the feedback, which are great for grounding a report and making it come to life. If you have no quotes from respondents, you are also missing the opportunity to create marketing campaigns around comments from customer evangelists, something Lidl UK has done well by featuring positive Tweets about their brand. In this article marketing director Claire Farrant notes the importance of listening and engaging with customer feedback in this way. It can also make people more satisfied with the feedback process if they have a chance to voice their opinions in more depth.

 

I think it’s also vital to include open-ended questions when piloting a survey or questionnaire. Having qualitative data at an early stage can let you refine your questions, and the possible responses. Sometimes the language used by respondents is important to reflect when setting closed questions: you don’t want to be asking questions like “How practical did you find this product” when the most common term coming from the qualitative data is “Durable”. It’s not always necessary to capture and analyse qualitative data for thousands of responses, but looking at a sample of a few dozen or hundred can show if you are on the right track before a big push.

 

You also shouldn’t worry too much about open-ended surveys having lower completion rates. A huge study by SurveyMonkey found that a single open question actually increased engagement slightly, and only when there were 5 or more open-ended response boxes did this have a negative impact on completion.

 

Finally, without qualitative responses, you lose the ability to triangulate and integrate your qualitative and quantitative data: one of the most powerful tools in survey analysis. For example, in Quirkos it is trivial to do very quick comparative subset analysis, using any of the closed questions as a pivot point. So you can look at the open ended responses from people who gave high satisfaction scores next to those that were low, and rather than then being stuck trying to explain the difference in opinion, you can look at the written comments to get an insight into why they differ.
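The same idea can be sketched outside Quirkos in a few lines of pandas (the column names and comments below are invented for illustration): use the closed satisfaction score as the pivot, and read the open-ended comments for low and high scorers side by side.

```python
# Sketch only: invented survey data, pivoting comments on a closed question.
import pandas as pd

survey = pd.DataFrame({
    "satisfaction": [9, 2, 8, 3, 10, 1],          # closed question, 1-10
    "comment": [
        "Quick and friendly response",
        "Waited over 15 minutes on hold",
        "Solved my problem first time",
        "Kept being transferred between departments",
        "Really helpful adviser",
        "Never got through at all",
    ],
})

# Split respondents into subsets using the closed question as the pivot point.
survey["subset"] = pd.cut(survey["satisfaction"], bins=[0, 5, 10],
                          labels=["low satisfaction", "high satisfaction"])

for subset, group in survey.groupby("subset", observed=True):
    print(f"\n{subset}:")
    for comment in group["comment"]:
        print(f"  - {comment}")
```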

 

And I think this is key to creating good reports for clients. Usually, the end point for a customer is not being told that 83% of their customers are satisfied with their helpline: they want actions that will improve or optimise delivery. What exactly was the reason 17% of people had a bad experience? It’s all very well to create an elaborate chain of closed questions, such as ‘You said you were unsatisfied. Which of these reasons best explains this? You said the response time made you unsatisfied. How long did you wait? 0-3min, 3-5min’ etc. But these types of surveys are time consuming to program and make comprehensive, and sometimes just allowing someone to type “I had to wait more than 15 minutes for a response” would have given you all the data you needed on a critical point.

 

The depth and insight from qualitative data can illuminate differences in respondent’s experiences, and give the key information to move things forward. Instead of thinking how can you cost-out qualitative responses, think instead how you can make sure they are integrated to provide maximum client value! A partnership between closed and open questions is usually the most powerful way to get both a quick summary and deep insight into complex interactions, and there is no need to be afraid of the open box!

 

Quirkos is designed to make it easy to bring both qualitative and quantitative data from surveys together, and use the intuitive visual interface to explore and play with market research data. Download a free trial of our qualitative analysis software, or contact us for a demo, and see how quickly you can step-up from paper based analysis into a streamlined and insightful MRX workflow!

 

Analytical memos and notes in qualitative data analysis and coding


There is a lot more to qualitative coding than just deciding which sections of text belong in which theme. It is a continuing, iterative and often subjective process, which can take weeks or even months. During this time, it’s almost essential to record your thoughts, reflect on the process, and keep yourself writing and thinking about the bigger picture. Writing doesn’t start after the analysis process; in qualitative research it should often precede, follow and run in parallel to an iterative interpretation.


The standard way to do this is either through a research journal (which is also vital during the data collection process) or through analytic memos. Memos create an important extra level of narrative: an interface between the participant’s data, the researcher’s interpretation and wider theory.


You can also use memos as part of a summary process, to articulate your interpretations of the data in a more concise format, or even to widen the scope by drawing on larger bodies of theory.


It’s also a good cognitive exercise: regularly make yourself write down what you are thinking, and keep articulating your interpretations. It will make writing up at the end a lot easier! Memos can be a very flexible tool, and qualitative software can help keep these notes organised. Here are 9 different ways you might use memos as part of your work-flow for qualitative data analysis:

 

Surprises and intrigue
This is probably the most obvious way to use memos: note during your reading and coding things that are especially interesting, challenging or significant in the data. It’s important to do more than just ‘tag’ these sections, reflect to yourself (and others) why these sections or statements stand out.

 

Points where you are not sure
Another common use of memos is to record sections of the data that are ambiguous, could be interpreted in different ways, or just plain don’t fit neatly into existing codes or interpretations. But again, this should be more than just ‘flagging’ bits that need to be looked at again later; it’s important to record why the section is different: sometimes the act of having to describe the section can help comprehension and illuminate the underlying causation.

 

Discussion with other researchers
Large qualitative research projects will often have multiple people coding and analysing the data. This can help to spread the workload, but also allows for a plurality of interpretations, and peer-checking of assumptions and interpretations. Thus memos are very important in a team project, as they can be used to explain why one researcher interpreted or coded sources in a certain way, and flag up ambiguous or interesting sections for discussion.

 

Paper-trail
Even if you are not working as part of a team, it can be useful to keep memos to explain your coding and analytical choices. This may be important to your supervisors (or viva panel) as part of a research thesis, and can be seen as good practice for sharing findings in which you are transparent about your interpretations. There are also some people with a positivist/quantitative outlook who find qualitative research difficult to trust because of the large amount of seemingly subjective interpretation. Memos which detail your decision making process can help ‘show your working out’ and justify your choices to others.

 

Challenging or confirming theory
This is another common use of memos: to discuss how the data either supports or challenges theory. It is unusual for respondents to neatly say something like “I don’t think my life fits with the classical structure of an Aeschylean tragedy”, should this happen to be your theoretical approach! This means you need to make these observations and higher-level interpretations yourself, and note how particular statements will influence your interpretations and conclusions. If someone says something that turns your theoretical framework on its head, note it, but also use the memos as a space to record context that might later explain this outlier. Memos like this might also help you identify patterns in the data that weren’t immediately obvious.

 

Questioning and critiquing the data/sources
Respondents will not always say what they mean, and sometimes there is an unspoken agenda below the surface. Depending on the analytical approach, an important role of the researcher is often to draw deeper inferences which may be implied or hinted at by the discourse. Sometimes participants will outright contradict themselves, or suggest answers which seem to be at odds with the rest of what they have shared. It’s also a great place to note the unsaid. You can’t code data that isn’t there, but sometimes it’s really obvious that a respondent is avoiding discussing a particular issue (or person). Memos can note this observation, and discuss why topics might be uncomfortable or left out of the narrative.


Part of an iterative process
Most qualitative research does not follow a linear structure; it is iterative, and researchers go back and re-examine the data at different stages in the process. Memos should be no different: they can be analysed themselves, and should be revisited and reviewed as you go along to show changes in thought, or wider patterns that are emerging.


Record your prejudices and assumptions
There is a lot of discussion in the literature about the importance of reflexivity in qualitative research, and recognising the influence of the non-neutral researcher voice. Too often this does not go further than a short reflexivity/positionality statement, but it should really be a constantly reconsidered part of the analytical process. Memos can be used as a prompt and record of your reflexive process: how the data challenges your prejudices, or how you might be introducing bias into the interpretation of the data.


Personal thoughts and future directions
As you go through the data, you may be noticing interesting observations which are tangential, but might form the basis of a follow-on research project or reinterpretation of the data. Keeping memos as you go along will allow you to draw from this again and remember what excited you about the data in the first place.

 

 

Qualitative analysis software can help with the memo process, keeping all the memos in the same place, and allowing you to see them together, or connected to the relevant section of data. However, most of the major software packages (Quirkos included) don’t exactly put the memo tools front and centre, so it is important to remember they are there and use them consistently through the analytical process.

 

Memos in Quirkos are best kept in a separate source which you edit and write your memos in. Keeping your notes like this allows you to code your memos in the same way you would your other data, and use the source properties to include or exclude your memos in reports and outputs as needed. However, it can be a little awkward to flip between the memo and the active source, and there is currently no way to attach memos to a particular coding event. This is something we are working on for the next major release, and it should help researchers to keep better notes of their process as they go along. More detail on qualitative memos in Quirkos can be found in this blog post.

 

 

There is a one-month free trial of Quirkos, and it is simple enough to use that you should be able to get going just by watching one of our short intro videos, or reading the built-in guide. We are also here to help at any stage of your process, with advice about the best way to record your analytical memos, build coding frameworks, or anything else. Don’t be shy: get in touch!

 


References and further reading:


Chapman, Y., Francis, K., 2008. Memoing in qualitative research, Journal of Research in Nursing, 13(1). http://jrn.sagepub.com/content/13/1/68.short?rss=1&ssource=mfc

 

Gibbs, G., 2002, Writing as Analysis, http://onlineqda.hud.ac.uk/Intro_QDA/writing_analysis.php

Saldana, J., 2015, The Coding Manual for Qualitative Researchers: Writing Analytic Memos about Narrative and Visual Data, Sage, London. https://books.google.co.uk/books?id=ZhxiCgAAQBAJ

 

 

Starting a qualitative research thesis, and choosing a CAQDAS package


 

For those about to embark on a qualitative Masters or PhD thesis, we salute you!

 

More and more postgraduate students are using qualitative methods in their research projects, or adopting mixed-methods data collection with a small amount of qualitative data that needs to be combined with quantitative data. So how can students decide on the best approach for analysing their data, and can CAQDAS or QDA software help their studies?

 

First, as far as possible, don’t choose the software, choose the method. Think about what you are trying to research, and the best way to get deep data that answers your research questions. The type and amount of data you have will be an important factor. Next, consider how much existing literature and theory there is around your research area: this will affect whether you adopt a grounded theory approach, or will be testing or challenging existing theory.

 

Indeed, you may decide that you don’t need software for your research project. For small projects, especially case studies, you may be more comfortable using printouts of your data and marking important sections with highlighters and post-it notes as you read. See Séror (2005) for a comparison of computer and paper methods. You could also look at the 5-Level QDA, an approach to planning and learning qualitative software so that you develop strategies and tactics that help you get the most out of it.

 

Unfortunately, if you decide you want to use a particular software solution, it’s not always as simple as it should be. You will eventually have to make a practical choice based on what software your university has, what support they provide, and what your peers and supervisors use.

 

However, while you are a student it’s also a good time to experiment and see what works best for you. Not only do all the major qualitative software packages offer a free trial, but student licences are also hugely discounted compared with the full versions. This gives you the option of buying a copy for yourself for a relatively small amount of money.

 

There’s a lot of variety in the qualitative data analysis software available. The most common package is NVivo, which your university or department may already have a licence for. This is a very powerful package, but it can be intimidating for first-time users. Common alternatives like MAXQDA or ATLAS.ti are more user friendly, but adopt similar spreadsheet-like interfaces. There are also lots of more niche alternatives: for example, Transana is unmatched for video analysis, and Dedoose works entirely in the cloud so you can access it from any computer. For a more comprehensive list, check out the Wikipedia list of QDA software, or the profiles on textanalysis.info.

 

Quirkos does a couple of things differently, though. First, our student licences don’t expire, and are some of the cheapest around. This means it doesn’t matter if your PhD takes 3 or 13 years: you will still be able to access your work and data without paying again. And yes, you can keep using your licence into your professional career. Second, Quirkos aims to be the easiest package to use, and puts visualisations of the data first and foremost in the interface.

 

So give Quirkos a try, but don’t forget about all the other alternatives out there: between them all you will find something that works in the way you want it to and makes your research a little less painful!

 

 

Reflections on qualitative software from KWALON 2016


Last week saw a wonderful conference held by KWALON, the Dutch network for qualitative research based at Erasmus University, Rotterdam. The theme was no less than the future of Qualitative Data Analysis (QDA) software.

 

Chair Jeanine Evers opened the session by outlining eight important themes the group had identified around qualitative analysis software.

 

The first was the challenge of adding features that are requested by users or present in competitors’ software, without breaking the underlying design. Quirkos really connects with this theme, because we have always tried to offer a very simple tool-set, based on a philosophy that the software should be very easy to use. While we obviously take heed of suggestions made by our users, we work from a considered and deliberately limited set of features which we have always planned to introduce, and we will continue delivering these over the next few years.

 

However, it is not the intention of Quirkos to become a large software package with lots of features, something Jeanine described as ‘obese software’ that needs to be put on a diet. It was noted that many software providers have released ‘lite’ versions of their software, and another discussion point was whether this fragmented approach benefits universities and users.

 

User-friendliness was another theme of the session, and by keeping Quirkos simple we hope to always have this at the fore of our design philosophy. In my talk (you can now get the slides here) I discussed these themes as mostly being about improving accessibility. To this end, we have tried to make Quirkos not just easier to use, but also easier to teach and to own, with permanent licences and discounts for researchers from countries that can’t usually afford this type of software. For us, the long-term goal is not just increasing the number of people who use software for qualitative analysis, but the number who are able to take up qualitative research in general.

 

There was also some good discussion at the end of our talk about the risks of making software easy to use: especially that it also makes it easy to use badly. As we’ve discussed many times on this blog, software can make coding very satisfying, and this can appear to be more productive than stepping back to think about themes or undertaking deep readings of the data. These problems can apply to all software packages, so it is important that students and educators work together to learn about the whole analysis procedure, and what part CAQDAS can play in it.

 

Comments also touched on how memo making is a critical part of a good iterative and reflexive qualitative analysis process, which at the moment Quirkos doesn’t forefront (see for example how F4analyse and a future version of Cassandre will operate). Although it is possible to record memos by typing in a source, which gives you the ability to tag and code your memos, and to write notes as source properties, this is currently not highlighted enough, and we plan to revamp the memo features in a future update.

 


The final theme of the conference, and a major push, was to promote a standard way to exchange data between qualitative software packages. At the moment it is very difficult for users to move their coded data from one package to another. Although most major packages provide options to export their data to other formats (such as the spreadsheet CSV data Quirkos produces), there is currently no single standard for how it should be formatted, so it is very difficult to bring this data, complete with themes and coding, into another package.
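To make the problem concrete, here is a minimal sketch (in Python, using only the standard library) of the kind of glue code a researcher currently has to write to move coded data between two tools. The file names and column layouts are entirely invented for illustration; they are not Quirkos’s actual export format, nor any existing standard.

# A minimal sketch (not any package's real export format) of why a common
# exchange standard matters: every tool names and arranges its coded-segment
# columns differently, so each pairing needs its own hand-written mapping.
import csv

# Hypothetical column layout exported by "tool A" mapped onto the layout
# expected by "tool B". Both layouts are invented for illustration.
COLUMN_MAP = {
    "Source": "document",
    "Theme": "code_name",
    "Quote": "segment_text",
}

def convert(export_path: str, import_path: str) -> None:
    """Re-write a coded-data CSV from one invented layout to another."""
    with open(export_path, newline="", encoding="utf-8") as src, \
         open(import_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
        writer.writeheader()
        for row in reader:
            # Keep only the fields both layouts share; anything else
            # (memos, coder identity, timestamps) is silently lost.
            writer.writerow({new: row.get(old, "") for old, new in COLUMN_MAP.items()})

if __name__ == "__main__":
    convert("tool_a_export.csv", "tool_b_import.csv")

Every pair of packages needs its own mapping like this, and anything one layout has no column for is simply dropped along the way, which is exactly the gap a shared exchange standard would close.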

 

There was strong support among the software developers for creating and adopting such a standard, as well as discussion of existing initiatives such as CATA-XML and QuDEx.


This is very important, but not just for users of different qualitative analysis packages who want to collaborate with universities and colleagues using other software. It also matters for archival purposes, so that coded qualitative data can be universally shared and stored for secondary analysis, and it would make it easier to bring in data for analysis from the huge number of digital sources in the digital humanities, such as history, journalism, and social media. Such a standard could also be important for formatting data so that machine learning and natural language processing can automate some of the simpler analysis steps on very large ‘big data’ datasets.


So there is a lot to be done, but also a lot of interest in the area over the next few years, with major and minor players all taking different approaches and seeking common ground. Quirkos is honoured to be a small part of this, and we will do whatever we can to improve the world of qualitative analysis for this and the next generation of researchers.