Word clouds and word frequency analysis in qualitative data


 

What’s this blog post about? Well, it’s visualised in the graphic above!

 

In the latest update for Quirkos, we have added a new and much-requested feature: word clouds! I'm sure you've used these pretty tools before: they show a randomised display of all the words in a source of text, where the size of each word is proportional to the number of times it appears in the text. There are several free online tools that will generate word clouds for you, Wordle.net being one of the first and most popular.
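The core mechanic is simple enough to sketch in a few lines of Python. This is an illustration of the general technique, not Quirkos's actual implementation, and the point sizes are arbitrary choices: each word's display size is interpolated between a minimum and maximum according to its frequency relative to the most common word.

```python
from collections import Counter

text = "scary scary scary fun fun interesting"
counts = Counter(text.lower().split())

# Scale each word's font size linearly between a minimum and maximum,
# based on its frequency relative to the most common word.
min_pt, max_pt = 10, 48
top = max(counts.values())
sizes = {word: min_pt + (max_pt - min_pt) * n / top for word, n in counts.items()}

print(sizes)  # 'scary' gets the full 48pt; 'interesting' the smallest share
```

Everything else a word cloud tool does (colours, rotation, packing the words into a shape) is decoration on top of this one frequency-to-size mapping.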

 

These visualisations are fun, and can be a quick way to give an overview of what your respondents are talking about. They also can reveal some surprises in the data that prompt further investigation. However, there are also some limitations to tools based on word frequency analysis, and these tend to be the reason that you rarely see word clouds used in academic papers. They are a nice start, but no replacement for good, deep qualitative analysis!

 

We've put together some tips for making sure your word clouds present meaningful information, and also some cautions about how they work and their limitations.

 


1. Tweak your stop list!

As these tools count every word in the data, the results would normally be dominated by the basic words that occur most often: 'the', 'of', 'and' and similar small, usually meaningless words. To make sure these don't swamp the data, most tools have a list of 'stop' words which are ignored when displaying the word cloud. That way, the more interesting words should be the largest. However, there is always a great deal of variation in what these common words are. They differ greatly between spoken and written language, for example (just think how often people might say 'like' or 'um' in speech but not in a typed answer). Each language will also need a corresponding stop list!

 

So Quirkos (and many other tools) offer ways to add or remove words from the stop list when you generate a word cloud. By default, Quirkos takes the 50 most frequent words from the spoken and written British National Corpus, but 50 is actually a very small stop list. You will still see very common words like 'think' and 'she', which might be useful to certain projects looking at expressions of opinion or depictions of gender. So it's a good idea to look at the word cloud, and remove words that aren't important to you by adding them to the stop list. Just make sure you record what has been removed, and your justification for excluding it, for the write-up!
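As a rough sketch of how stop-word filtering works under the hood (the tiny stop list and the helper name here are our own illustrative choices, nothing like the 50-word corpus-derived default):

```python
from collections import Counter

# A deliberately tiny stop list; real tools start from the most frequent
# words of a reference corpus and let the researcher extend the list.
stop_words = {"the", "of", "and", "it", "was", "a", "i", "um", "like"}

def word_frequencies(text, stop_words):
    # Strip surrounding punctuation, lowercase, then drop stop words.
    words = (w.strip(".,!?'\"").lower() for w in text.split())
    return Counter(w for w in words if w and w not in stop_words)

answer = "Um, I think the course was good, like really good, and the tutors helped."
print(word_frequencies(answer, stop_words).most_common(3))
```

Here 'good' rises to the top only because 'the', 'um' and 'like' were filtered out first, which is exactly why the contents of the stop list shape what the cloud appears to say.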

 


2. There is no weighting or significance

Since word frequency tools just count the occurrence of each word (one point per utterance), they really only show one thing: how often a word was said. This sounds obvious, but it gives no indication of how important the use of a word was in each instance. If one person says 'it was a little scary', another says 'it was horrifyingly scary' and another 'it was not scary', the word count carries no context or weight. This can be deceptive in something like a word cloud, where the examples above count the negation (not scary) and the qualifier (a little scary) the same way, so 'scary' could look like a significant trend. So remember to always go back and read the data carefully to understand why specific words are being used.

 


3. Derivations don't get counted together

Remember that most word cloud tools are not really counting words, only combinations of letters. So 'fish', 'fishy' and 'fishes' will all be counted as separate words (as will any typos or misspellings). This might not sound important, but if you are trying to draw conclusions just from a word cloud, you could miss the importance of fish to your participants, because the different derivations weren't put together. Yet sometimes these distinctions in vocabulary are important – obviously 'fishy' can have a negative connotation, in terms of something feeling off or smelling bad – and you don't want to lump this in with things that swim. So a researcher is still needed to craft these visualisations, and to decide what should be shown and grouped. Speaking of which...
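To see the difference grouping makes, here is a toy comparison. The `crude_stem` helper is our own deliberately simplistic stand-in for a real stemming algorithm such as NLTK's PorterStemmer:

```python
from collections import Counter

words = ["fish", "fishes", "fishy", "fish", "fishing"]

# Naive counting treats every distinct letter-string as a separate word:
print(Counter(words))  # 'fish' counted twice, the other forms once each

def crude_stem(word):
    # Strip a few common suffixes, a rough stand-in for a real stemmer.
    for suffix in ("ing", "es", "y", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Stemming before counting merges the derivations into one entry:
print(Counter(crude_stem(w) for w in words))
```

Note that an automatic stemmer happily folds 'fishy' into 'fish' too, which is exactly the kind of merge the article argues a researcher should accept or reject deliberately, not by default.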

 


4. They won't amalgamate different terms used by participants

It's fascinating how different people have their own terms and language to describe the same thing, and illuminating this can bring colour to qualitative data, or show subtle differences that are important for IPA or discourse analysis. But when doing any kind of word count analysis, this richness is a problem, as the words are counted separately. None of the terms 'shiny', 'bright' or 'blinding' may show up often on its own, but grouped together they could reveal a significant theme. Whether to treat certain synonyms in the same way is up to the researcher, but in a word cloud these distinctions can be masked.

 

Also, don't forget that unless told otherwise (or the words happen to be hyphenated), word clouds won't pick up multi-word phrases like 'word cloud' and 'hot topic'.
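Both problems can be handled with a little pre-processing, sketched here. The synonym groupings are illustrative choices a researcher would make and record, not built-in behaviour of any tool:

```python
from collections import Counter

# A researcher-maintained synonym map: each variant is replaced by one
# chosen theme word before counting, so scattered terms add up.
synonyms = {"shiny": "bright", "blinding": "bright"}

tokens = "the lights were shiny almost blinding but the hall was bright".split()
merged = Counter(synonyms.get(t, t) for t in tokens)
print(merged["bright"])  # 3 - three different terms, one visible theme

# Multi-word phrases such as 'word cloud' can be kept together by
# counting bigrams (pairs of adjacent words) instead of single words:
phrase_tokens = "a word cloud turns word frequency into a picture".split()
bigrams = Counter(zip(phrase_tokens, phrase_tokens[1:]))
print(bigrams[("word", "cloud")])  # 1
```

The important part is not the code but the audit trail: the synonym map is an interpretive decision, so it belongs in your write-up alongside the stop list.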

 

 

5. Don’t focus on just the large trends


Word clouds tend to make the big language trends very obvious, but this is usually only part of the story. Just as important are words that aren’t there – things you thought would come up, topics people might be hesitant to speak about. A series of word clouds can be a good way to show changes in popular themes over time, like what terms are being used in political speeches or in newspaper headlines. In these cases words dropping out of use are probably just as interesting as the new trends.

 


 


6. This isn't qualitative analysis

At best, this is quantification of qualitative data: it presents only counts. Since word frequency tools just count sequences of letters, not even words and their meanings, they are a basic numerical supplement to deep qualitative interpretation (McNaught and Lam 2010). And as with all statistical tools, they are easy to misapply and misinterpret. You need to know what is being counted and what is being missed (see above), and before drawing any conclusions, make sure you understand the underlying data and how it was collected. However…

 

 

7. Word clouds work best as summaries or discussion pieces


If you need to get across what's coming out of your research quickly, showing the lexicon of your data in a word cloud can be a fun starting point. When they show a clear or surprising trend, the familiarity most audiences have with word clouds makes these visualisations engaging and insightful. They should also trigger questions – why does this phrase appear more often? These can be good points from which to guide your audience through the story of your data, and to create interesting discussions.

 

As a final point, word clouds often carry a level of authority that you need to be careful about. Because counting words is seen as non-interpretive and non-subjective, some people may 'trust' what a word cloud shows more than a verbose interpretation of the full, rich data. Hopefully with the guidance above, you can persuade your audience that while colourful, word clouds are only a one-dimensional dip into the data. Knowing your data and reading the nuance is what turns your analysis from a one-click feature into a well-communicated 'aha' moment for your field.

 

 

If you'd like to play with word clouds, why not download a free trial of Quirkos? It also has raw word frequency data, and an easy to use interface to manage, code and explore your qualitative data.

 

 

 

 

Writing qualitative research papers


We've actually talked about communicating qualitative research and data to the public before, but never covered writing journal articles based on qualitative research. This can often seem daunting, as converting dense, information-rich studies into a fairly brief and tightly structured paper takes a lot of work and refinement. However, we've got some tips below that should help demystify the process, and let you break it down into manageable steps.

 

Choose your journal

The first thing to do is often what's left till last: choosing the journal you want to submit your article to. Since each journal has different style guidelines, types of research it publishes and acceptable lengths, you should actually have a shortlist of journals you want to publish with BEFORE you start writing.

 

To make this choice, there are a few classic pointers. First, make sure your journal will publish qualitative research. Many are not interested in qualitative methodologies; see the recent debates about the BMJ for how contested this continues to be. It's a good idea to choose a journal that has published other articles you have referenced, or that are on a similar topic. This is a good sign that the editors (and reviewers) are interested in, and understand, the area.

 

Secondly, there are some practical considerations. If you are looking for tenure, or may one day be assessed by schemes that rate the quality of academic institutions by their published work, such as the REF (in the UK) or PBRF (in New Zealand), you should consider 'high impact' or 'high tier' journals. These are considered to be the most popular journals in their areas, but will also be the most competitive to get into.

 

Before you start writing, you should also read the guidance for authors from the journal, which will give you information about length, required sections, how they want the summary and keywords formatted, and the type of referencing. Many are based on the APA style guidelines, so it is a good idea to get familiar with these.

 


Describing your methodology, literature review, theoretical underpinnings

When I am reviewing qualitative articles, the best ones describe why the research is important, and how it fits in with the existing literature. They then make it clear how the researcher(s) chose their methods, who they spoke to and why those people were chosen. It is then clear throughout the paper which insights came from respondent data and, when claims are made, how common they were across respondents.

 

To make sure you do this, have a separate section detailing your methods and recruitment aims, and describe the people you spoke to – not just how many, but what their backgrounds were and how they were chosen – eventually noting any gaps and what impact they could have on your conclusions. A qualitative paper still needs to state the number of people you spoke to, but there is no shame in that number being as low as one for a case study or autoethnography!

 

Secondly, you must situate your paper in the existing literature. Read what has come before, critique it, and make it clear how your article contributes to the debate. This is the thing that editors are looking for most – make the significance of your research and paper clear, and why other people will want to read it.

 

Finally, it’s very important in qualitative research papers to clearly state your theoretical background and assumptions. So you need to reference literature that describes your approach to understanding the world, and be specific about the interpretation you have taken. Just saying ‘grounded theory’ for example is not enough – there are a dozen different conceptualisations of this one approach.
 

 

Reflexivity

It's not something that all journals ask for, but if you are adopting many qualitative epistemologies, you are usually taking a stance on positivism, impartiality, and the impact of the researcher on the collection and interpretation of the data. This sometimes leads to the need for the person(s) who conducted the research to describe themselves and their backgrounds to the reader, so they can understand the world view, experience and privilege that might influence how the data was interpreted. There is a lot more on reflexivity in this blog post.


How to use quotations

Including quotations and extracts from your qualitative data is a common and effective way to back up your description of the data with evidence that supports your findings. However, it's important not to make the text too dense with quotations. Try to keep to just a few per section, and integrate them into your prose as much as possible, rather than starting every one with 'participant x said'. I also like to show divergence among the respondents, with a couple of quotes that present alternative viewpoints.

 

On a practical note, make sure any quotations are formatted according to the journal's specifications. If they don't have specific guidelines, make quotations clear by giving them their own indented paragraph (if more than a sentence) and labelling them with a participant identifier, or a significant anonymised characteristic (for example School Administrator or Business Leader). Don't be afraid to shorten a quotation to keep it relevant to the point you are trying to make, while keeping it an accurate reflection of the participant's contribution. Use an ellipsis (…) to show where you have removed a section, and insert square brackets to clarify what the respondent is talking about if they refer to 'it' or 'they', for example [the school] or [Angela Merkel].

 


Don’t forget visualisations

If you are using qualitative analysis software, make sure you don’t just use it as a quotation finder. The software will also help you do visualisations and sub-set analysis, and these can be useful and enlightening to include in the paper. I see a lot of people use an image of their coding structure from Quirkos, as this quickly shows the relative importance of each code in the size of the bubble, as well as the relationships between quotes. Visual outputs like this can get across messages quickly, and really help to break up text heavy qualitative papers!

 


Describe your software process!

No, it's not enough to just say 'We used NVivo'. There are a huge number of ways you could have used qualitative analysis software, and you need to be more specific about what you used the software for, how you did the analysis (for example framework or emergent coding), and how you got outputs from the software. If you coded with other people, how did this work? Did you sit together and code at the same time? Did you each code different sources, or go over the same ones? Did you do some form of inter-rater reliability check, even if it was not a quantitative assessment? Finally, make sure you include your software in the references – see the APA guides for how to format this. For Quirkos this would look something like:

 

Quirkos Software (2017). Quirkos version 1.4.1 [Computer software]. Edinburgh: Quirkos Limited.

 


 


Be persistent!

Journal publication is a slow process. Unless you get a 'desk rejection', where the editor immediately decides that the article is not the right fit for the journal, hearing back from the reviewers could take months or even a year. Ask colleagues and look at the journal information to get an idea of how long the review process takes at each journal. Finally, when you get some feedback, it might be negative (a rejection) or unhelpful (when the reviewers don't give constructive comments). This can be frustrating, especially when it is not clear how the article can be made better. However, there are excellent journals, such as The Qualitative Report, that take a collaborative rather than combative approach to reviewing articles. This can be really helpful for new authors.

 

Remember that the majority of articles submitted to any journal are rejected, and some top-tier journals have acceptance rates of 10% or less. Don't be disheartened; read the comments, keep to a cycle of quickly improving your paper based on whatever feedback you can get, and either send it back to the journal or find a more appropriate home for it.

 

Good luck, and don’t forget to try out Quirkos for your qualitative analysis. Our software is easy to use, and makes it really easy to get quotes into Word or other software for writing up your research. Learn more about the features, and download a free, no-obligation trial.

 

 

Snapshot data and longitudinal qualitative studies



In the last blog post, we looked at creating archives of qualitative data that can be used by other researchers (or yourself in the future) for secondary analysis. In that article I postulated that secondary data analysis could make collecting new data a rarer (and expensive) event. However, some (including Dr Susanne Friese) pointed out that as the social world is always changing, there is a constant need to collect new data. I totally agree with this, but it made me think about another problem with research studies: they are time-limited collection events that capture a snapshot of the world as it was when recorded.


Most qualitative research collects data as a series of one-off dives into the lives and experiences of participants. These might be a single interview or focus group, or the results of a survey that captures opinions at a particular point in time. This might not be a specific date; it might be a key point in a participant's journey, such as when people are discharged from hospital, or after they attend a training event. However, even within a small qualitative research project it is possible to measure change, by surveying people at different stages in their individual or collective journeys.


This is sometimes called 'Qualitative Longitudinal Research' (Farrall et al. 2006), and often involves contacting people at regular intervals, possibly over years or decades. Examples of this type of project include the five-year 'Timescapes' project in the UK, which looked at changes in personal and family relationships (Holland 2011), or a multi-year look at homelessness and low-income families in Worcester, USA, for which the archived data is available (yay!).


However, such projects tend to be expensive, as they require researchers working on a project for many years. They also need quite a high sample size, because there is an inevitable annual attrition of people who move, fall ill or stop responding to the researchers. Most projects don't have the luxury of this sort of scale and resources, but there is another possibility to consider: whether it is appropriate to collect data over a shorter timescale. This often means collecting data at 'before and after' points – for example bookending a treatment or event – so it is often used in well-planned evaluations. Researchers can ask questions about expectations before the occasion, and how people felt afterwards. This is useful in a lot of user experience research, but also for understanding the motivations of actors, and improving the delivery of key services.


But there are other reasons to do more than one interview with a participant. Even having just two data points from a participant can show changes in lives, circumstances and opinions, even when there isn't an obvious event distinguishing the two snapshots. It also gets people to reflect on the answers they gave in the first interview, and to see if their feelings or opinions have changed. It can also give an opportunity to further explore topics that were interesting or not covered in the first interview, and allow for some checking: do people answer the questions in a consistent way? Sometimes respondents (or even interviewers) are having a bad day, and a second session doubles the chance of getting high quality data.


In qualitative analysis, a second round is also an opportunity to look through all the data from a number of respondents, and to go back and ask new questions that are revealed by the data. In a grounded theory approach this is very valuable, but it can also be used to check the researchers' own interpretations and hypotheses about the research topic.


There are a number of research methods which are particularly suitable for longitudinal qualitative studies. Participant diaries, for example, can be very illuminating, and can work over long or short periods of time. Survey methods also work well for multiple data points, and it is fairly easy to administer these at regular intervals by collecting data via e-mail, telephone or online questionnaires. These don't even need very fixed questions; it is possible to do informal interviews via e-mail as well (eg Meho 2006), and this works well over medium and long periods of time.


So when planning a research project, it's worth thinking about whether your research question could be better answered with a longitudinal or multiple-contact approach. If you decide this is the case, just be aware that you may not be able to contact everyone multiple times, and if that means some people's data can't be used, you will need to over-recruit at the initial stages. However, the richer data and greater potential for reliability can often outweigh the extra work required. I also find it very illuminating as a researcher to talk to participants after analysing some of their data. Qualitative data can become very abstracted from participants (especially once it is transcribed, analysed and dissected), and meeting the person again and talking through some of the issues raised can help reconnect with the person and their story.


Researchers should also consider how they will analyse such data. In qualitative analysis software like Quirkos, it is usually fairly easy to categorise your different sources by participant or time point, so that you can look at just the first or second interviews, or everything together. I have had many questions from users about how best to organise their projects for such studies, and my advice is generally to keep everything in one project file. Quirkos makes it easy to run analysis on just certain data sources, and to show results and reports from everything, or just one set of data.


So consider giving Quirkos a try with a free download for Windows, Mac or Linux, and read about how to build complex queries to explore your data over time, or between different groups of respondents.

 

 

Archiving qualitative data: will secondary analysis become the norm?


 

Last month, Quirkos was invited to a one day workshop in New York on archiving qualitative data. The event was hosted by Syracuse University, and you can read a short summary of the event here. This links neatly into the KWALON led initiative to create a common standard for interchange of coded data between qualitative software packages.


The eventual aim is to develop a standardised file format for qualitative data, which not only allows use of data on any qualitative analysis software, but also for coded qualitative data sets to be available in online archives for researchers to access and explore. There are several such initiatives around the world, for example QDR in the United States, and the UK Data Archive in the United Kingdom. Increasingly, research funders are requiring that data from research is deposited in such public archives.


A qualitative archive should be a dynamic and accessible resource. It is of little use creating something like the warehouse at the end of ‘Raiders of the Lost Ark’: a giant safehouse of boxed up data which is difficult to get to. Just like the UK Data Archive it must be searchable, easy to download and display, and have enough detail and metadata to make sure data is discoverable and properly catalogued. While there is a direct benefit to having data from previous projects archived, the real value comes from the reuse of that data: to answer new questions or provide historical context. To do this, we need a change in practice from researchers, participants and funding bodies.


In some disciplines, secondary analysis of archival data is commonplace – think, for example, of history research that looks at letters and news from the Jacobean period (eg Coast 2014). The 'Digital Humanities' movement in academia is a cross-discipline look at how digital archives of data (often qualitative and text based) can be better utilised by researchers. However, there is a missed opportunity here, as many researchers in digital humanities don't use standard qualitative analysis software, preferring instead their own bespoke solutions. Most of the time this is because the archived data comes in such a variety of different formats and formatting.


However, there is a real benefit to making not just historical data like letters and newspapers, but contemporary qualitative research projects (of the typical semi-structured interview / survey / focus-group type) openly available. First, it allows others to examine the full dataset, and check or challenge conclusions drawn from the data. This allows for a higher level of rigour in research - a benefit not just restricted to qualitative data, but which can help challenge the view that qualitative research is subjective, by making the full data and analysis process available to everyone.

 

Second, there is huge potential for secondary analysis of qualitative data. Researchers from different parts of the country working on similar themes can examine other data sources to compare and contrast differences in social trends regionally or historically. Data that was collected for a particular research question may also have valuable insights about other issues, something especially applicable to rich qualitative data. Asking people about healthy eating for example may also record answers which cover related topics like attitudes to health fads in the media, or diet and exercise regimes.

 

At the QDR meeting last month, it was postulated that the generation of new data might become a rare activity for researchers: it is expensive, time consuming, and often the questions can be answered by examining existing data. With research funding facing an increasing squeeze, many funding bodies are taking the opinion that they get better value from grants when the research has impact beyond one project, when the data can be reused again and again.

 

There is still an element of elitism about generating new data in research – it is an important career pathway, and the prestige given to PIs running their own large research projects is not shared with those doing 'desk-based' secondary analysis of someone else's data. However, these attitudes are mostly irrational and institutionally driven: they need to change. Those undertaking new data collection will increasingly need to design research projects that maximise secondary analysis of their data, by providing good documentation of the collection process and research questions, and detailed metadata for the sources.

 

The UK now has a very popular research grant programme specifically to fund researchers to do secondary data analysis. Louise Corti told the group that despite uptake being slow in the first year, the call has become very competitive (like the rest of grant funding). These initiatives will really help raise the profile and acceptability of secondary analysis, and make sure that the most value is gained from existing data.


However, making qualitative data available for secondary analysis does not necessarily mean that it is publicly available. Consent and ethics may not allow for the release of the complete data, making it difficult to archive. While researchers should increasingly start planning research projects and consent forms to make data archivable and shareable, it is not always appropriate. Sometimes, despite attempts to anonymise qualitative data, the detail of life stories and circumstances that participants share can make them identifiable. However, the UK Data Archive has a sort of ‘vetting’ scheme, where more sensitive datasets can only be accessed by verified researchers who have signed appropriate confidentiality agreements. There are many levels of access so that the maximum amount of data can be made available, with appropriate safeguards to protect participants.


I also think that it is a fallacy to claim that participants wouldn’t want their data to be shared with other researchers – in fact I think many respondents assume this will happen. If they are going to give their time to take part in a research project, they expect this to have maximum value and impact, many would be outraged to think of it sitting on the shelf unused for many years.


But these data archives still don’t have a good way to share coded qualitative data or the project files generated by qualitative software. Again, there was much discussion on this at the meeting, and Louise Corti gave a talk about her attempts to produce a standard for qualitative data (QuDex). But such a format can become very complicated, if we wish to support and share all the detailed aspects of qualitative research projects. These include not just multimedia sources and coding, but date and metadata for sources, information about the researchers and research questions, and possibly even the researcher’s journals and notes, which could be argued to be part of the research itself.

 

The QuDex format is extensive, but complicated to implement – especially as it requires researchers to spend a lot of time entering metadata for it to be worthwhile. Really, this requires a behaviour change from researchers: they need to record and document more aspects of their research projects. However, in some ways the standard was not comprehensive enough – it could not capture the different ways that notes and memos are recorded in Atlas.ti, for example. The standard has yet to gain traction, as there was little support from the qualitative software developers (for commercial and other reasons). However, the software landscape and attitudes to open data are changing, and agreements between major qualitative software developers (including Quirkos) mean that work is underway on a standard that should eventually allow not just the interchange of coded qualitative data, but hopefully easy archival storage as well.


So the future of qualitative data archiving requires behaviour change from researchers, ethics boards, participants, funding bodies and the designers of qualitative analysis software. However, I would say that these changes, while not necessarily fast enough, are already in motion, and the world is moving towards a more open future for qualitative (and quantitative) research.


For things to be aware of when considering using secondary sources of qualitative data and social media, read this post. And while you are at it, why not give Quirkos a try and see if it would be a good fit for your next qualitative research project – be it primary or secondary data. There is a free trial of the full version to download, with no registration or obligations. Quirkos is the simplest and most visual software tool for qualitative research, so see if it is right for you today!

 

 

Tips for running effective focus groups

In the last blog article I looked at some of the justifications for choosing focus groups as a method in qualitative research. This week, we will focus on some practical tips to make sure that focus groups run smoothly, and to ensure you get good engagement from your participants.

 


1. Make sure you have a helper!

It's very difficult to run a focus group on your own. If you want to lay out the room, greet people, deal with refreshment requests, check recording equipment is working, start video cameras, take notes, ask questions, let in late-comers and facilitate discussion, it's much easier with two people (or even three, for larger groups). You will probably want to focus on listening to the discussion, not taking notes and problem-solving at the same time. Having another facilitator or helper around can make a lot of difference to how well the session runs, as well as how much good data is recorded from it.

 


2. Check your recording strategy

Most people will record audio and transcribe their focus groups later. You need to make sure that your recording equipment will pick up everyone in the room, and also that you have a backup dictaphone and batteries! There are many more tips in this blog post. If you are planning to video the session, think this through carefully.

 

Do you have the right equipment? A phone camera might seem OK, but they usually struggle to record long sessions, and are difficult to position in a way that will show everyone clearly. Special cameras designed for gig and band practice are actually really good for focus groups, as they tend to have wide-angle lenses and good microphones, so you don’t need to record separate audio. You might also want to have more than one camera (in a round-table discussion, someone will always have their back to a single camera). Then you will want to think about using qualitative analysis software like Transana that supports multiple video feeds.

 

You also need to make sure that video is culturally appropriate for your group (some religions and cultures don’t approve of taking images) and that it won’t make people nervous and clam up in discussion. Usually I find a dictaphone less imposing than a camera lens, but you then lose the ability to record the body language of the group. Video also makes it much easier to identify different speakers!

 


3. Consent and introductions

I always prefer to do the consent forms and participant information before the session. Faffing around with forms to sign at the start or end of the workshop takes up a lot of time best used for discussion, and makes people hurried to read the project information. E-mail this to people ahead of time, so at least they can just sign on the day, or bring a completed form with them. I really feel that participants should get the option to see what they are signing up for before they agree to come to a session, so they are not made uncomfortable on the day if it doesn't sound right for them. However, make sure there is an opportunity for people to ask any questions, and state any additional preferences, privately or in public.

 


4. Food and drink

You may decide not to have refreshments at all (your venue might dictate that) but I really love having a good spread of food and drink at a focus group. It makes it feel more like a party or family occasion than an interrogation procedure, and really helps people open up.

 

While tea, coffee and biscuits/cookies might be enough for most people, I love baking and always bring something home-baked like a cake or cookies. Getting to talk about and offer food is a great icebreaker, and it also makes people feel valued when you have spent the time to make something. A key part of getting good data from a focus group is setting a congenial atmosphere, and an interesting choice of drinks or fruit can really help this. Don’t forget to get dietary preferences ahead of time, and consider the need for vegetarian, diabetic and gluten-free options.

 


5. The venue and layout

A lot has already been said about the best way to set out a focus group discussion (see Chambers 2002), but there are a few basic things to consider. First, a round or rectangular table arrangement works best, not lecture hall-type rows. Everyone should be able to see the face of everyone else. It’s also important not to have the researcher/facilitator at the head or even the centre of the table. You are not the boss of the session, merely there to guide the debate. There is already a power dynamic because you have invited people and are running the session. Try and sit yourself at the side as an observer, not the director of the session.

 

In terms of the venue, try and make sure it is as quiet as possible; good natural light and even high ceilings can help spark creative discussion (Meyers-Levy and Zhu 2007).

 


6. Set and state the norms

A common problem in qualitative focus group discussions is that some people dominate the debate, while others are shy and contribute little. Chambers (2002) suggests simply saying at the beginning of the session that this tends to happen, to make people conscious of sharing too much or too little. You can also try to actively manage this during the session by prompting other people to speak, going round the room person by person, or using more formal systems where people raise their hands to talk or have to be holding a stone. These methods are more time consuming for the facilitator and can stifle open discussion, so it's best to use them only when necessary.

 

You should also set out ground rules, attempting to create an open space for uncritical discussion. It's not usually the aim for people to criticise the views of others, nor for the facilitator to be seen as the leader and boss. Make these things explicit at the start to make sure there is the right atmosphere for sharing: one where there is no right or wrong answer, and everyone has something valuable to contribute.

 


7. Exercises and energisers

To prompt better discussion when people are tired or not forthcoming, you can use exercises such as card ranking, role play, and prompts for discussion such as stories or newspaper articles. Chambers (2002) suggests dozens of these, as well as some off-the-wall 'energizer' exercises: fun games to get people to wake up and encourage discussion. There is more on this in the last blog post. It can really help to go round the room and have people introduce themselves with a fun fact, not just to get the names and voices on tape for later identification, but as a warm up.

 

Also, the first question, exercise or discussion point should be easy. If the first topic is 'How did you feel when you had cancer?' that can be pretty intimidating to start with. Simpler topics, such as 'What was hospital food like?' or even 'How was your trip here?', are ones everyone can easily contribute to and safely argue over, gaining confidence to share something deeper later on.

 


8. Step back, and step out

In focus groups, the aim is usually to get participants to discuss with each other, not to hold a series of dialogues with the facilitator. The power dynamics of the group need to reflect this, and as soon as things are set in motion, the researcher should try to intervene as little as possible – occasionally asking for clarification or setting things back on track. Thus it's also their role to help participants understand this, and allow the group discussion to be as co-interactive as possible.

 

“When group dynamics worked well the co-participants acted as co-researchers taking the research into new and often unexpected directions and engaging in interaction which were both complementary (such as sharing common experiences) and argumentative”
- Kitzinger 1994

 


9. Anticipate depth

Focus groups usually last a long time, rarely less than 2 hours, but even a half or whole day of discussion can be appropriate if there are lots of complex topics to discuss. It's OK to consider having participants do multiple focus groups if there is lots to cover, just consider what will best fit around the lives of your participants.

 

At the end of these you should find there is a lot of detailed and deep qualitative data for analysis. It can really help in digesting this to make lots of notes during the session: a summary of key issues, your own reflexive comments on the process, and the unspoken subtext (who wasn't sharing on what topics, what people mean when they say 'you know, that lady with the big hair').


You may also find that qualitative analysis software like Quirkos can help pull together all the complex themes and discussions from your focus groups, and break down the mass of transcribed data you will end up with! We designed Quirkos to be very simple and easy to use, so do download and try for yourself...

 

 

 

Circles and feedback loops in qualitative research

qualitative research feedback loops

The best qualitative research forms an iterative loop, examining and then re-examining. There are multiple reads of data, multiple layers of coding, and hopefully, constantly improving theory and insight into the underlying lived world. During the research process it is best to try to be in a constant state of feedback with your data and theory.


During your literature review, you may have several cycles through the published literature, with each pass revealing a deeper network of links. You will typically see this when you start going back to ‘seminal’ texts on core concepts from older publications, showing cycles of different interpretations and trends in methodology that are connected. You can see this with paradigm trends like social capital, neo-liberalism and power. It’s possible to see major theorists like Foucault, Chomsky and Butler each create new cycles of debate in the field, building up from the previous literature.


A research project will often have a similar feedback loop between the literature and the data, where theory influences the research questions and methodology, but engagement with the real ‘folk world’ challenges interpretations of the data and shapes the practicalities of data collection. Thus the literature is challenged by the research process and findings, and a new reading of the literature is demanded to corroborate or challenge new interpretations.

 

Thus it’s a mistake to think that a literature review only happens at the beginning of the research process: it is important to engage with theory again, not just at the end of a project when drawing conclusions and writing up, but during the analysis process itself. Especially in qualitative research, the data will rarely fit neatly with one theory or another, but will demand a synthesis or a new angle on existing research.

 

The coding process is also like this, in that it usually requires many cycles through the data. After reading one source, it can feel like the major themes and codes for the project are clear, and will set the groundwork for the analytic framework. But what if you had started with another source? Would the codes you created have been the same? It’s easy either to get complacent with the first codes you start with, or to worry that the coding structure will get too complicated if you keep creating new nodes.

 

However, there will always be sources which contain unique data or express different opinions and experiences that don’t chime with existing codes. And what if this new code actually fits some of the previous data better? You would need to go back to previously analysed data sources and explore them again. This is why most experts will recommend multiple passes through the data, not just to be consistent and complete, but because there is a feedback loop in the codes and themes themselves. Once you have a first coding structure, the framework itself can be examined and reinterpreted, looking for groupings and higher level interpretations. I’ve talked about this more in this blog article about qualitative coding.


Quirkos is designed to keep researchers deeply embedded in this feedback process, with each coding event subtly changing the dynamics of the coding structure. Connections and coding are shown in real time, so you can always see what is happening and what is being coded most, and thus constantly challenge your interpretation and analysis process.

 

Queries, questions and sub-set analysis should also be easy to run and dynamic, because good qualitative researchers shouldn’t only interrogate and interpret the data at the end of the analysis process; it should happen throughout. That way surprises and uncertainties can be identified early, and new readings of the data can illuminate these discoveries.

 

In a way, qualitative analysis is never done, and it is not usually a linear process. Even when project practicalities dictate an end point, a coded research project in software like Quirkos sits on your hard drive, awaiting time for secondary analysis, or for the data to be challenged from a different perspective and research question. And to help you when you get there, your data and coding bubbles will immediately show you where you left off – what the biggest themes were, how they connected – and allow you to go to any point in the text to see what was said.

 

And you shouldn’t need to go back and do retraining to use the software again. I hear so many stories of people who have done training courses for major qualitative data analysis software, and when it comes to revisiting their data, the operations are all forgotten. Now, Quirkos may not have as many features as other software, but the focus on keeping things visual and in plain sight means that these should comfortably fit under your thumb again, even after not using it for a long stretch.

 

So download the free trial of Quirkos today, and see how its different way of presenting the data helps you continuously engage with your data in fresh ways. Once you start thinking in circles, it’s tough to go back!

 

Triangulation in qualitative research

triangulation facets face qualitative

 

Triangles are my favourite shape,
  Three points where two lines meet

                                                                           alt-J

 

Qualitative methods are sometimes criticised as being subjective, based on single, unreliable sources of data. But with the exception of some case study research, most qualitative research will be designed to integrate insights from a variety of data sources, methods and interpretations to build a deep picture. Triangulation is the term used to describe this comparison and meshing of different data, be it combining quantitative with qualitative, or ‘qual on qual’.


I don’t think of data in qualitative research as being static and definite. It’s not like a point on a graph: qualitative data has more depth and context than that. In triangulation, we think of two points of data that move towards an intersection. In fact, if you are trying to visualise triangulation, consider instead two vectors – directions suggested by two sources of data – that may converge at some point, creating a triangle. This point of intersection is where the researcher has seen a connection between the inferences about the world implied by two different sources of data. However, there may be vectors that run parallel, or in divergent directions that will never cross: not all data will agree and connect, and it’s important to note this too.


You can triangulate almost all the constituent parts of the research process: method, theory, data and investigator.


Data triangulation (also called participant or source triangulation) is probably the most common, where you examine data from different respondents collected using the same method. If we consider that each participant has a unique and valid world view, the researcher’s job is often to look for patterns or contradictions beyond the individual experience. You might also consider the need to triangulate between data collected at different times, to show changes in lived experience.

 

Since every method has weaknesses or biases, it is common for qualitative research projects to collect data in a variety of different ways to build up a better picture. Thus a project can collect data from the same or different participants using different methods, and use method (or between-method) triangulation to integrate them. Some qualitative techniques can be very complementary: for example, semi-structured interviews can be combined with participant diaries or focus groups to provide different levels of detail and voice. What people share in a group discussion may be less private than what they would reveal in a one-to-one interview, but in a group dynamic people can be reminded of issues they might otherwise forget to talk about.


Researchers can also design a mixed-method qualitative and quantitative study where very different methods are triangulated. This may take the form of a quantitative survey, where people rank an experience or service, combined with a qualitative focus group, interview or even open-ended comments. It’s also common to see a validated measure from psychology used to give a metric to something like pain, anxiety or depression, and then combine this with detailed data from a qualitative interview with that person.


In ‘theoretical triangulation’, a variety of different theories are used to interpret the data, such as discourse, narrative and context analysis, and these different ways of dissecting and illuminating the data are compared.


Finally there is ‘investigator triangulation’, where different researchers each conduct separate analysis of the data, and their different interpretations are reconciled or compared. In participatory analysis it’s also possible to have a kind of respondent triangulation, where a researcher is trying to compare their own interpretations of data with those of their respondents.
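One concrete way investigator triangulation is sometimes quantified is an inter-coder agreement statistic such as Cohen's kappa, which compares two researchers' coding of the same segments while correcting for chance agreement. The sketch below uses made-up coding decisions purely for illustration; the post itself doesn't prescribe any particular statistic:

```python
# Two coders' code assignments for the same ten text segments (invented data)
coder_a = ["anger", "anger", "support", "anger", "support",
           "support", "anger", "support", "anger", "support"]
coder_b = ["anger", "support", "support", "anger", "support",
           "support", "anger", "anger", "anger", "support"]

# Observed agreement: fraction of segments coded identically
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Expected chance agreement, from each coder's marginal proportions per code
codes = set(coder_a) | set(coder_b)
expected = sum(
    (coder_a.count(c) / len(coder_a)) * (coder_b.count(c) / len(coder_b))
    for c in codes
)

# Cohen's kappa corrects observed agreement for chance
kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} kappa={kappa:.2f}")
# prints: observed=0.80 kappa=0.60
```

A low kappa isn't necessarily a failure in qualitative work: as the quote below from Kitzinger suggests for group dynamics, divergent interpretations can themselves be a productive finding, prompting discussion about why two researchers read the same passage differently.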

 

 

While there is a lot written about the theory of triangulation, there is not as much about actually doing it (Jick 1979). In practice, researchers often find it very difficult to DO the triangulation: different data sources tend to be difficult to mesh together, and will have very different discourses and interpretations. If you are seeing ‘anger’ and ‘dissatisfaction’ in interviews with a mental health service, it will be difficult to triangulate such emotions with the formal language of a policy document on service delivery.


In general the qualitative literature cautions against seeing triangulation as a way to improve the validity and reliability of research, since this tends to imply a rather positivist agenda in which there is an absolute truth that triangulation gets us closer to. However, there are plenty who suggest that the quality of qualitative research can be improved in this way, such as Golafshani (2003). So you need to be clear about your own theoretical underpinning: can you get to an ‘absolute’ or ‘relative’ truth through your own interpretations of two types of data? Perhaps rather than positivist this is a pluralist approach, creating multiplicities of understandings while still allowing for comparison.


It’s worth bearing in mind that triangulation and multiple methods aren’t an easy shortcut to better research. You still need to do all the different sources justice: make sure data from each method is fully analysed, and iteratively coded (if appropriate). You should also keep going back and forth, analysing data from alternate methods in a loop to make sure they are well integrated and considered.

 


Qualitative data analysis software can help with all this, since you will have a lot of data to process in different and complementary ways. In software like Quirkos you can create levels, groups and clusters to keep different analysis stages together, and have quick ways to do sub-set analysis on data from just one method. Check out the features overview or mixed-method analysis with Quirkos for more information about how qualitative research software can help manage triangulation.

 


References and further reading

Carter et al. 2014, The use of triangulation in qualitative research, Oncology Nursing Forum, 41(5), https://www.ncbi.nlm.nih.gov/pubmed/25158659

 

Denzin, 1978 The Research Act: A Theoretical Introduction to Sociological Methods, McGraw-Hill, New York.

 

Golafshani, N., 2003, Understanding reliability and validity in qualitative research, The Qualitative Report, 8(4), http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1870&context=tqr


Bekhet, A., Zauszniewski, J., 2012, Methodological triangulation: an approach to understanding data, Nurse Researcher, 20(2), http://journals.rcni.com/doi/pdfplus/10.7748/nr2012.11.20.2.40.c9442

 

Jick, 1979, Mixing Qualitative and Quantitative Methods: Triangulation in Action,  Administrative Science Quarterly, 24(4),  https://www.jstor.org/stable/2392366

 

 

100 blog articles on qualitative research!

images by Paul Downey and AngMoKio

 

Since our regular series of articles started nearly three years ago, we have clocked up 100 blog posts on a wide variety of topics in qualitative research and analysis! These are mainly short overviews, aimed at students, newcomers and those looking to refresh their practice. However, they are all referenced with links to full-text academic articles should you need more depth. Some articles also cover practical tips that don't get into the literature, like transcribing without getting backache, and how to write handy semi-structured interview guides. These have become the most popular part of our website, and there are now more than 80,000 words in my blog posts, easily the length of a good-sized PhD thesis!

 

That's quite a lot to digest, so in addition to the full archive of qualitative research articles, I've put together a 'best-of', with top 5 articles on some of the main topics. These include Epistemology, Qualitative methods, Practicalities of qualitative research, Coding qualitative data, Tips and tricks for using Quirkos, and Qualitative evaluations and market research. Bookmark and share this page, and use it as a reference whenever you get stuck with any aspect of your qualitative research.

 

While some of them are specific to Quirkos (the easiest tool for qualitative research) most of the principles are universal and will work whatever software you are using. But don't forget you can download a free trial of Quirkos at any time, and see for yourself!

 


Epistemology

What is a Qualitative approach?
A basic overview of what constitutes a qualitative research methodology, and the differences from quantitative methods and epistemologies

 

What actually is Grounded Theory? A brief introduction
An overview of applying a grounded theory approach to qualitative research

 

Thinking About Me: Reflexivity in science and qualitative research
How to integrate a continuing reflexive process in a qualitative research project

 

Participatory Qualitative Analysis
Quirkos is designed to facilitate participatory research, and this post explores some of the benefits of including respondents in the interpretation of qualitative data

 

Top-down or bottom-up qualitative coding
Deciding whether to analyse data with high-level theory-driven codes, or smaller descriptive topics (hint – it's probably both!)

 

 


Qualitative methods

An overview of qualitative methods
A brief summary of some of the commonly used approaches to collect qualitative data

 

Starting out in Qualitative Analysis
First things to consider when choosing an analytical strategy

 

10 tips for semi-structured qualitative interviewing
Semi-structured interviews are one of the most commonly adopted qualitative methods, this article provides some hints to make sure they go smoothly, and provide rich data

 

Finding, using and some cautions on secondary qualitative data
Social media analysis is an increasingly popular research tool, but as with all secondary data analysis, requires acknowledging some caveats

 

Participant diaries for qualitative research
Longitudinal and self-recorded data can be a real gold mine for qualitative analysis, find out how it can help your study

 


Practicalities of qualitative research

Transcription for qualitative interviews and focus-groups
Part of a whole series of blog articles on getting qualitative audio transcribed, or doing it yourself, and how to avoid some of the pitfalls

 

Designing a semi-structured interview guide for qualitative interviews
An interview guide can give the researcher confidence and the right level of consistency, but shouldn't be too long or too descriptive...

 

Recruitment for qualitative research
While finding people to take part in your qualitative study can seem daunting, there are many strategies to choose from, and they should be closely matched with the research objectives

 

Sampling considerations in qualitative research
How do you know if you have the right people in your study? Going beyond snowball sampling for qualitative research

 

Reaching saturation point in qualitative research
You'll frequently hear people talking about getting to data saturation, and this post explains what that means, and how to plan for it

 

 

Coding qualitative data

Developing and populating a qualitative coding framework in Quirkos
How to start out with an analytical coding framework for exploring, dissecting and building up your qualitative data

 

Play and Experimentation in Qualitative Analysis
I feel that great insight often comes from experimenting with qualitative data and trying new ways to examine it, and your analytical approach should allow for this

 

6 meta-categories for qualitative coding and analysis
Don't just think of descriptive codes, use qualitative software to log and keep track of the best quotes, surprises and other meta-categories

 

Turning qualitative coding on its head
Sometimes the most productive way forward is to try a completely new approach. This post outlines several strange but insightful ways to recategorise and examine your qualitative data

 

Merging and splitting themes in qualitative analysis
It's important to have an iterative coding process, and you will usually want to re-examine themes and decide whether they need to be more specific or more general

 

 


Quirkos tips and tricks

Using Quirkos for Systematic Reviews and Evidence Synthesis
Qualitative software makes a great tool for literature reviews, and this article outlines how to set up a project to make useful reports and outputs

 

How to organise notes and memos in Quirkos
Keeping memos is an important tool during the analytical process, and Quirkos allows you to organise and code memo sources in the same way you work with other data

 

Bringing survey data and mixed-method research into Quirkos
Data from online survey platforms often contains both qualitative and quantitative components, which can be easily brought into Quirkos with a quick tool

 

Levels: 3-dimensional node and topic grouping in Quirkos
When clustering themes isn't comprehensive enough, levels allow you to create grouped categories of themes that go across multiple clustered bubbles

 

10 reasons to try qualitative analysis with Quirkos
Some short tips to make the most of Quirkos, and get going quickly with your qualitative analysis

 

 

Qualitative market research and evaluations

Delivering qualitative market insights with Quirkos
A case study from an LA based market research firm on how Quirkos allowed whole teams to get involved in data interpretation for their client

 

Paper vs. computer assisted qualitative analysis
Many smaller market research firms still do most of their qualitative analysis on paper, but there are huge advantages to agencies and clients to adopt a computer-assisted approach

 

The importance of keeping open-ended qualitative responses in surveys
While many survey designers attempt to reduce costs by removing qualitative answers, these can be a vital source of context and satisfaction for users

 

Qualitative evaluations: methods, data and analysis
Evaluating programmes can take many approaches, but it's important to make sure qualitative depth is one of the methods adopted

 

Evaluating feedback
Feedback on events, satisfaction and engagement is a vital source of knowledge for improvement, and Quirkos lets you quickly segment this to identify trends and problems

 

 

 

Thinking About Me: Reflexivity in science and qualitative research

self rembrandt reflexivity

Reflexivity is a process (and it should be a continuing process) of reflecting on how the researcher could be influencing a research project.


In a traditional positivist research paradigm, the researcher attempts to be a neutral influence on the research. They make rational and logical interpretations, and assume a ‘null hypothesis’, in which they expect all experiments to have no effect, and have no pre-defined concept of what the research will show.


However, this is a lofty aspiration and difficult to achieve in practice. Humans are fallible and emotional beings, with conflicting pressures from jobs, publication records and their own hunches. There are countless stories of renowned academics having to retract papers, or even end their research careers, because of faked results, flawed interpretations or biased coding procedures.


Many consider it to be impossible to fully remove the influence of the researcher from the process, and so all research would be ‘tainted’ in some way by the prejudices of those in the project. This links into the concept of “implicit bias” where even well-meaning individuals are influenced by subconscious prejudices. These have been shown to have a significant discriminatory impact on pay, treatment in hospitals and recruitment along lines of gender and ethnicity.


So does this mean that we should abandon research, and the pursuit of truly understanding the world around us? No! Although we might reject the notion of attaining an absolute truth, that doesn’t mean we can’t learn something. Instead of pretending that the researcher is an invisible and neutral piece of the puzzle, a positionality and reflexivity approach argues that the background of the researcher should be detailed in the same way as the data collection methods and analytical techniques.


But how is this done in practice? Does a researcher have to bare their soul to the world, and submit their complete tax history? Not quite, but many in feminist and post-positivist methodologies will create a ‘positionality statement’ or ‘reflexivity statement’. This is a little like a CV or self-portrait of potential experiences and bias, in which the researcher is honest about personal factors that might influence their decisions and interpretations. These might include the age, gender, ethnicity and class of the researcher, social and research issues they consider important, their country and culture, political leanings, life experiences and education. In many cases a researcher will include such a statement with their research publications and outputs, just Googling ‘positionality statements’ will provide dozens of links to examples.

 

However, I feel that this is a minimum level of engagement with the issue, and it’s actually important to keep a reflexive stance throughout the research process. Just like how a one-off interview is not as accurate a record as a daily diary, keeping reflexivity notes as an ongoing part of a research journal is much more powerful. Here a researcher can log changes in their situation, assumptions and decisions made throughout the research process that might be affected by their personal stance. It’s important that the researcher is constantly aware of when they are making decisions, because each is a potential source of influence. This includes deciding what to study, who to sample, what questions to ask, and which sections of text to code and present in findings.


Why is this especially pertinent to qualitative research? It’s often raised in social science, especially in ethnography and close case study work with disadvantaged or hard-to-reach populations, where researchers have a much closer engagement with their subjects and data. It could be considered that there are more opportunities for personal stance to have an impact here, and that many qualitative methods, especially analysis processes using grounded theory, are open to multiple interpretations that vary by researcher. Many make the claim that qualitative research and data analysis is more subjective than quantitative methods, but as we’ve argued above, it might be better to say that they are both subjective. Many qualitative epistemological approaches are not afraid of this subjectivity, but will argue it is better made forthright and thus open to challenge, rather than kept in the dark.


Now, this may sound a little crazy, especially to those in traditionally positivist fields like STEM subjects (Science, Technology, Engineering, Mathematics). Here there is generally a different move: to use process and peer review to remove as many of the aspects open to subjective interpretation as possible. This direction is fine too!


However, I would argue that researchers already have to make a type of reflexivity document: a conflict of interest statement. Here academics are supposed to declare any financial or personal interest in the research area that might influence their neutrality. This is just like a positionality statement! An admission that researchers can be influenced by prejudices and external factors, and that readers should be aware of such conflicts of interest when doing their own interpretation of the results.


If money can influence science (and it totally can), it’s also been shown that gender and other aspects of an academic's background can too. All reflexivity asks us to do is be open and honest with our readers about who we are, so they can better understand and challenge the decisions we make.

 

 

Like all our blog articles, this is intended to be a primer on some very complex issues. You’ll find a list of references and further reading below (in addition to the links included above). Don’t forget to try Quirkos for all your qualitative data analysis needs! It can help you keep, manage and code a reflexive journal throughout your analysis procedure. See this blog article for more!

 

 

References

 

Bourke, B., 2014, Positionality: Reflecting on the Research Process, The Qualitative Report 19, http://www.nova.edu/ssss/QR/QR19/bourke18.pdf


Day, E., 2002, Me, My*self and I: Personal and Professional Re-Constructions in Ethnographic Research, FQS 3(3) http://www.qualitative-research.net/index.php/fqs/article/view/824/1790


Greenwald, A., Krieger, L., 2006, Implicit Bias: Scientific Foundations, California Law Review, 94(4). http://www.jstor.org/stable/20439056


Lynch, M., 2000, Against Reflexivity as an Academic Virtue and Source of Privileged Knowledge, Theory, Culture & Society 17(3), http://tcs.sagepub.com/content/17/3/26.short


Savin-Baden, M., Major C., 2013, Personal stance, positionality and reflexivity, in Qualitative Research: The essential guide to theory and practice. Routledge, London.


Soros, G., 2013, Fallibility, reflexivity and the human uncertainty principle, Journal of Economic Methodology, 20(4) https://www.georgesoros.com/essays/fallibility-reflexivity-and-the-human-uncertainty-principle-2/

 

 

Quirkos version 1.4 is here!

quirkos version 1.4

It’s been a long time coming, but the latest version of Quirkos is now available, and as always it’s a free update for everyone, released simultaneously on Mac, Windows and Linux with all the new goodies!


The focus of this update has been speed. You won’t see a lot of visible differences in the software, but behind the scenes we have rewritten a lot of Quirkos to make sure it copes better with large qualitative sources and projects, and is much more responsive to use. This has been a much requested improvement, and thanks to all our intrepid beta testers for ensuring it all works smoothly.


In the new version, long coded sources now load in around 1/10th of the time! Search results and hierarchy views load much quicker! Large canvas views display faster! All this adds up to a much snappier and more responsive experience, especially when working with large projects. This takes Quirkos to a new professional level, while retaining the engaging and addictive data coding interface.


In addition we have made a few small improvements suggested by users, including:


• Search criteria can be refined or expanded with AND/OR operators
• Reports now include a summary section of your Quirks/codes
• Ability to search source names to quickly find sources
• Searches now display the total number of results
• Direct link to the full manual

 

There are also many bug fixes! Including:
• Password protected files can now be opened across Windows, Mac and Linux
• Fix for importing PDFs which created broken Word exports
• Better and faster CSV import
• Faster Quirk merge operations
• Faster keyword search in password protected files

 

However, we have had to change the .qrk file format so that password protected files can open on any operating system. This means that projects opened or created in version 1.4 cannot be opened in older versions of Quirkos (v1.3.2 and earlier).


I know how annoying this is, but there should be no reason for people to keep using older versions: we make the updates free so that everyone is using the same version. Just make sure everyone in your team updates!

 

When you first open a project file from an older version of Quirkos in 1.4, it will automatically convert it to the new file format, and save a backup copy of the old file. Most users will not notice any difference, and you can obviously keep working with your existing project files. But if you want to share your files with other Quirkos users, make sure they also have upgraded to the latest version, or they will get an error message trying to open a file from version 1.4.

 

All you need to do to get the new version is download it from our website (www.quirkos.com/get.html) and install it to the same location as the old Quirkos. Get going, and let us know if you have any suggestions or feedback! You could see your requests appear in version 1.5!

 

Participant diaries for qualitative research

participant diaries

 

I’ve written a little about this before, but I really love participant diaries!


In qualitative research, you are often trying to understand the lives, experiences and motivations of other people. Through methods like interviews and focus groups, you can get a one-off insight into people’s own descriptions of themselves. If you want to measure change over a period, you need to schedule a series of meetings, each of which will be limited by what a participant will recall and share.


However, using diary methodologies, you can get a longer and much more regular insight into lived experiences, plus you also change the researcher-participant power dynamic. Interviews and focus groups can sometimes be a bit of an interrogation, with the researcher asking questions, and participants given the role of answering. With diaries, participants can have more autonomy to share what they want, as well as where and when (Meth 2003).


These techniques are also called self-report or ‘contemporaneous assessment’ methods, and there are actually a lot of different ways you can collect diary entries. There are some great reviews of different diary-based methods (e.g. Bolger et al. 2003), but let’s look at some of the different approaches.


The most obvious is to give people a little journal or exercise book to write in, and ask them to record on a regular basis any aspects of their day that are relevant to your research topic. If they are expected to make notes on the go, make it a slim pocket-sized one. If they are going to write a more traditional diary at the end of each day, give them a nicer exercise book to work in. I’ve actually found that people end up getting quite attached to their diaries, and will often ask for them back. So make sure you have some way to copy or transcribe them, and consider offering to return them once you have analysed them, or giving back a copy if you wish to keep hold of the original.

 

You can also do voice diaries – something I tried in Botswana. We were initially worried that literacy levels in rural areas would mean that participants would be either unable or reluctant to create written entries. So I offered everyone a small voice recorder, where they could record spoken notes that we would transcribe at the end of the session. While you could give a group of people an inexpensive (~£20) Dictaphone, I actually bought a bunch of cheap no-brand MP3 players which only cost ~£5 each, had a built-in voice recorder and headphones, and could run on a single AAA battery (which was easy to find from local shops, since few respondents had electricity for recharging). The audio quality was not great, but perfectly adequate. People really liked these because they could also play music (and had a radio), and they were cheap enough to be lost or left as thank-you gifts at the end of the research.

 

There is also a large literature on ‘experience sampling’ – where participants are prompted at regular or random intervals to record what they are doing or how they are feeling at that time. Initially this work was done using pagers (Larson 1989), when participants would be ‘beeped’ at random times during the day and asked to write down what they were doing at the time. More recent studies have used smartphones to both prompt and directly collect responses (Chen et al. 2014).
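The scheduling logic behind experience sampling is simple to sketch. As a minimal illustration (the function name and all parameters are hypothetical, not from any particular study), a smartphone-style protocol might draw a handful of random, distinct beep times within a day's waking hours:

```python
import random
from datetime import datetime, timedelta

def prompt_schedule(n_prompts=5, start_hour=9, end_hour=21, seed=None):
    """Draw n random, distinct prompt times within one day's waking hours."""
    rng = random.Random(seed)
    day_start = datetime.now().replace(hour=start_hour, minute=0,
                                       second=0, microsecond=0)
    window_minutes = (end_hour - start_hour) * 60
    # Sample minute offsets without replacement so no two beeps coincide
    offsets = sorted(rng.sample(range(window_minutes), n_prompts))
    return [day_start + timedelta(minutes=m) for m in offsets]
```

At each generated time the participant would be prompted to jot down (or dictate) what they are doing; randomising the times is what stops entries clustering around habitual moments like mealtimes.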

 

There is also now a lot of online journal research, either solicited by researchers as part of a qualitative research project (Kaun 2015), or collected from people’s blogs and social media posts. This is especially popular in market research looking at consumer behaviour (Patterson 2005), and in project evaluation (Cohen et al. 2006).

 

Diary methods can create detailed and reliable data. One study that asked participants to record diary entries three times a day to measure stigmatised behaviour like sexual activity found an 89.7% adherence rate (Hensel et al. 2012), far higher than would be expected from traditional survey methods. There is a lot of diary-based research in the sexual and mental health literature: for more discussion on the discrepancies and reliability between diary and recall methods, there is a good overview in Coxon (1999), and many studies like Garry et al. (2002) found that diary-based methods generated more accurate responses. Note that these kinds of studies tend to be mixed-method, collecting both discrete quantitative data and open-ended qualitative comments.

 

Whichever method you choose, it’s important to set up some clear guidelines for participants to follow. Personally I think either a telephone conversation or a face-to-face meeting is a good idea, to give participants a chance to ask questions. If you’ve not done research diaries before, it’s a good idea to pilot them with one or two people to make sure you are briefing people clearly, and that they can write useful entries for you. The guidelines (explained and taped to the inside of the diary) should make clear:

  • What you are interested in hearing about
  • What it will be used for
  • How often you expect people to write
  • How much they should write
  • How to get in touch with you
  • How long they should be writing entries, and how to return the diary.

 

Even if you expressly specify that journal entries should be written every day for three weeks, you should be prepared for the fact that many participants won’t manage this. You’ll have some that start well but lapse, others that forget until the end and do it all on the last day before they see you, and everything in-between. You need to assume this will happen with some or all of your respondents, and consider how this is going to affect how you interpret the data and draw conclusions. It shouldn’t necessarily mean that the data is useless, just that you need to be aware of the limitations when analysing it. There will also be a huge variety in how much people write, despite your guidelines. Some will love the experience, sharing volumes of long entries; others might just write a few sentences, which might still be revealing.

 

For these reasons, diary-like methodologies are usually used in addition to other methods, such as semi-structured interviews (Meth 2003), or detailed surveys. Diaries can be used to triangulate claims made by respondents in different data sources (Schroder 2003) or provide more richness and detail to the individual narrative. From the researcher's point of view, the difference between data where a respondent says they have been bullied and an account of a specific incident recorded that day is significant, and gives far more depth and illumination into the underlying issues.

 

Qualitative software - Quirkos

 

However, you also need to carefully consider the confidentiality and other ethical issues. Often participants will share a lot of personal information in diaries, and you must agree how you will deal with this and anonymise it for your research. While many respondents find keeping a qualitative diary a positive and reflexive process, it can be stressful to ask people in difficult situations to reflect on uncomfortable issues. There is also the risk that the diary could be lost, or read by other people mentioned in it, creating a potential disclosure risk to participants. Depending on what you are asking about, it might be wise to ask participants themselves to create anonymised entries, using pseudonyms for people and places as they write.

 

Last, but not least, what about your own diary? Many researchers will keep a diary, journal or ‘field notes’ during the research process (Altricher and Holly 2004), which can help provide context and reflexivity as well as a good way of recording thoughts on ideas and issues that arise during the data collection process. This is also a valuable source of qualitative data itself, and it’s often useful to include your journal in the analysis process – if not coded, then at least to remind you of your own reflections and experiences during the research journey.

 

So how can you analyse the text of your participant diaries? In Quirkos of course! Quirkos takes all the basics you need to do qualitative analysis, and puts them in a simple, easy-to-use package. Try it for yourself with a free trial, or find out more about the features and benefits.

 

Qualitative evidence for SANDS Lothians

qualitative charity research - image by cchana

Charities and third sector organisations are often sitting on lots of very useful qualitative evidence, and I have already written a short blog post on some common sources of data that can support funding applications, evaluations and impact assessments. We wanted to do a ‘qualitative case study’: to work with one local charity to explore what qualitative evidence they already had, what they could collect, and use Quirkos to help create some reports and impact assessments.

 

SANDS Lothians is an Edinburgh-based charity that provides long-term counselling and support for families who have experienced the loss of a baby around birth. They approached us after seeing advertisements for one of our local qualitative training workshops.


Director Nicola Welsh takes up the story. “During my first six months in post, I could see there was much evidence to highlight the value of our work but was struggling to pull this together in some order which was presentable to others. Through working with Daniel and Kristin we were able to start to structure what we were looking to highlight and with their help begin to organise our information so it was available to share with others. Quirkos allowed us to pull information from service users, stats and studies to present this in a professional document. They gave us the confidence to ask our users about their experiences and encouraged us to record all the services we offered to allow others at a glance to get a feel for what we provide.”

 

First of all, we discussed what would be most useful to the organisation. Since they were in discussion with major partners about possible funding, an impact assessment would be valuable in this process.

 

They also identified concerns from their users about a specific issue, prescriptions for anti-depressants, and wanted to investigate this further. It was important to identify the audience that SANDS Lothians wanted to reach with this information, in this case, GPs and other health professionals. This set the format of a possible output: a short briefing paper on different types of support that parents experiencing bereavement could be referred to.

 

We started by doing an ‘evidence assessment’ (or evidence audit, as this previous blog post notes), looking for evidence on impact that SANDS Lothians already had. Some of this was quantitative, such as the number of phone calls received on a monthly basis. As they had recently started counting these calls, it was valuable evidence of people using their support and guidance services. In the future they will be able to see trends in the data, such as an increase in demand or seasonal variation, that will help them plan better.

 

They already had national reports from NHS Scotland on Infant Mortality, and some data from the local health board. But we quickly identified a need for supportive scientific literature that would help them make a better case for extending their counselling services. One partner had expressed concerns that counselling was ineffective, but we found a number of studies that showed counselling to be beneficial for this kind of bereavement. Finding these journal articles for them helped provide legitimacy to the approach detailed in the impact assessment.

 

In fact, a simple step was to create a list of all the different services that SANDS Lothians provides. This had not been done before, but quickly showed how many different kinds of support were offered, and the diversity of their work. This is also powerful information for potential funders or partners, and useful to be able to present quickly.

 

Finally, we did a mini qualitative research project!

 

A post on their Facebook page asking for people to share experiences about being prescribed antidepressants after bereavement got more than 20 responses. While most of these were very short, they did give us valuable and interesting information: for example, not all people whose GP had suggested anti-depressants saw this as negative, and some talked about how these had helped them at a difficult time.

 

SANDS Lothians already had amazing and detailed written testimonials and stories from service users, so I was able to combine the responses from testimonials and comments from the Facebook feed into one Quirkos project, and draw across them all as needed.

 

Using Quirkos to pull out the different responses to anti-depressants showed that there were similar numbers of positive and negative responses, and also highlighted parents’ worries we had not considered, such as the effect of medication when trying to conceive again. This is the power of a qualitative approach: by asking open questions, we got responses about issues we wouldn’t have asked about in a direct survey.

 

quirkos bubble cluster view

 

When writing up the report, Quirkos made it quick and easy to pull out supportive quotes. As I had previously gone through and coded the text, I could click on the counselling bubble, immediately see relevant comments, and copy and paste them into the report. Now SANDS Lothians also has an organised database of comments on how their counselling services helped clients, which they can draw on at any time.

 

Nicola explains how they have used the research outputs. “The impact assessment and white paper has been extremely valuable to our work. This has been shared with senior NHS Lothian staff regarding possible future partnership working.  I have also shared this information with the Scottish Government following the Bonomy recommendations. The recommendations highlight the need for clear pathways with outside charities who are able to assist bereaved parents. I was able to forward our papers to show our current support and illustrate the position Lothians are in regarding the opportunity to have excellent bereavement care following the loss of a baby. It strengthened the work we do and the testimonials give real evidence of the need for this care. 

 

I have also given our papers out at recent talks with community midwives and charge midwives in West Lothian and Royal Infirmary Edinburgh. Cecilia has attached the papers to grant applications which again strengthens our applications and validates our work.”

 

Most importantly, SANDS Lothians now have a framework to keep collecting data, “We will continue to record all data and update our papers for 2016.  Following our work with Quirkos, we will start to collate case studies which gives real evidence for our work and the experiences of parents.  Our next step would be to look specifically at our counselling service and its value.” 

 

“The work with Quirkos was extremely helpful. In very small charities, it is difficult to always have the skills to be an expert in all areas and find the time to train. We are extremely grateful to Daniel and Kristin who generously volunteered their time to assist us to produce this work. I would highly recommend them to any business or third sector organisation who need assistance in producing qualitative research.  We have gained confidence as a charity from our journey with Quirkos and would most definitely consider working with them again in the future.”

 

It was an incredible and emotional experience to work with Nicola and Cecilia at SANDS Lothians on this small project, and I am so grateful to them for inviting us in to help, and sharing so much. If you want any more information about the services they offer, or need to speak to someone about losing a baby through stillbirth, miscarriage or soon after birth, all their contact details are available on their website: http://www.sands-lothians.org.uk .

 

If you want any more information about Quirkos and a qualitative approach, feel free to contact us directly, or there is much more information on our website. Download a free trial, or read more about adopting a qualitative approach.

 

 

Recruitment for qualitative research

Recruiting qualitative participants

 

You’ll find a lot of information and debate about sampling issues in qualitative research: discussions over ‘random’ or ‘purposeful’ sampling, the merits and pitfalls of ubiquitous ‘snowball’ sampling, and unending questions about sample size and saturation. I’m actually going to address most of these in the next blog post, but wanted to paradoxically start by looking at recruitment. What’s the difference, and why think about recruitment strategies before sampling?

 

Well, I’d argue that the two have to be considered together, but recruitment tends to be a bit of an afterthought, and is so rarely detailed in journal articles (Arcury and Quandt 1999) that I feel it merits its own post. In fact, there is a great ONS document about sampling, but it only has one sentence on advice for respondent recruitment: “The method of respondent recruitment and its effectiveness is also an important part of the sampling strategy”. Indeed!

 

When we talk about recruitment, we are considering the way we actually go out and ask people to take part in a research study. The sample frame is how we choose what groups of people and how many to approach, but there are huge practical problems in implementing our chosen sampling method that can be dealt with by writing a comprehensive recruitment strategy.

 

This might sound a bit dull, but it’s actually kind of fun – and the creation of such a strategy for your qualitative research project is a really good thought exercise, helping you plan and later acknowledge shortcomings in what actually happened. Essentially, think of this process as how you will market and advertise your research project to potential participants.

 

Sometimes there is a shifting dynamic between sampling and recruitment. Say we are doing random sampling from numbers in a phone book, a classic ‘random’ technique. The sampling process is the selection of x number of phone numbers to call. The recruitment is actually calling and asking someone to take part in the research. Now, obviously not everyone is going to answer the phone, or want to answer any questions. So you then have a list of recruited people, which you might actually want to sample from again to make a representative sample. If you found out everyone that answered the phone was retired and over 60, but you wanted a wider age profile, you will need to refactor from your recruited sample.
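To make the sampling/recruitment distinction concrete, here is a minimal Python sketch (the phone numbers and outcome labels are purely hypothetical): sampling is the random draw from the frame, while recruitment is the separate business of making each call and logging what happened.

```python
import random

def draw_sample(frame, x, seed=1):
    """Sampling: select x entries at random from the frame (the phone book)."""
    return random.Random(seed).sample(frame, x)

def log_outcome(outcomes, contact, outcome):
    """Recruitment: record what happened on each contact attempt, so that
    non-response and refusals can be reported alongside the final sample."""
    outcomes[contact] = outcome  # e.g. 'consented', 'declined', 'no answer'

# Hypothetical sampling frame of phone-book numbers
frame = [f"0131 555 {i:04d}" for i in range(1000)]
selected = draw_sample(frame, 20)

outcomes = {}
for number in selected:
    log_outcome(outcomes, number, "no answer")  # placeholder until the call is made
recruited = [n for n, o in outcomes.items() if o == "consented"]
```

The gap between `selected` and `recruited` is exactly the gap between a sampling frame and an achieved sample, and logging every outcome is what later lets you describe who declined or never answered.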

 

But let’s think about this again. Why could it be that everyone who consented to take part in our study was retired? Well, we used numbers from the phone book, and called during the day. What effect might this have? Numbers in the phone book tend to be people who have been resident in one place for a long time, many students and young people just have mobiles, and if we call during the day, we will not get answers from most people who work. This illustrates the importance of carefully considering the recruitment strategy: although we chose a good random sampling technique, our strategy of making phone calls during the day has already scuppered our plans.

 

How about another example: recruitment through a poster advertising the study. Many qualitative studies aren’t looking for a very large number of respondents, but are targeting a very specific sample. In this example, maybe it’s people who have visited their doctor in the last 6 months. Sounds like a poster in the waiting room of the local GP surgery would work well. What are the obvious limitations here?

 

simple qualitative analysis software from quirkos

 

First of all, people who see the poster will probably have visited the GP (since they are in that location); however, it would actually only recruit people who are currently receiving treatment. People who had visited in the previous 6 months but didn’t need to go back again, or had such a horrible experience they never returned, will not see our poster and don’t have a chance to be recruited. Both of these will skew the sample of respondents in different ways.

 

In some ways this is inevitable. Whichever sampling technique and recruitment strategy we adopt, some people will not hear about the study or want to take part. However, it is important to be conscious of not just who is being sampled, but who is left out, and the likely effect this has on our sample and consequently our findings. For example our approach here probably means we oversample people who have chronic conditions requiring frequent treatment, and undersample people who hate their doctor. It’s not necessarily a disaster, but just like making a reflexivity statement about our own biases, we must be forthright about the sampling limitations and consider them when analysing and writing conclusions.

 

For these reasons, it’s often desirable to have multiple and complementary recruitment strategies, so that one makes up for deficiencies in the other. So a poster in the waiting room is great, but maybe we can get a list of everyone registered at the surgery, so we can also contact people not currently seeking treatment. This would be wonderful, but in the real world, we might hit problems with the surgery not being interested in the study, not being able to release that information for confidentiality reasons, and the huge extra time such a process would require.

 

That’s why I see a recruitment strategy as a practical battle plan that tries to consider the limitations and realities of engaging with the real world. You can also start considering seemingly small things that can have a huge impact on successful recruitment:


• The design of the poster
• The wording of invitation letters
• The time of day you make contact (not just by phone, but don’t e-mail first thing on a Monday morning!)
• Any incentives, and how appropriate they are
• Data protection issues
• Winning the support of ‘gatekeepers’ who control access to your sample
• Timescales
• Cost (especially if you are printing hundreds of letters or flyers)
• Time and effort required to find each respondent
• And many more…


For a more detailed discussion, there’s a great article by Newington and Metcalfe (2014) specifically on influencing factors for recruitment in qualitative research.

 

Finally, I want to reiterate the importance of trying to record who has not been recruited and why. If you are directly contacting a few dozen respondents by phone or e-mail, this is easy to keep track of: you know exactly who has declined or not responded, likely reasons why and probably some demographic details.

 

However, think about the poster example. Here, we will be lucky if 1% of people that come through the surgery contact us to take part in the study. Think through these classic marketing stages: they have to see the poster, think it’s relevant to them, want to engage, and then reach out to contact you. There will be huge losses at each of those stages, and you don’t know who these people are or why they didn’t take part. This makes it very difficult in this kind of study to know the bias of your final sample: we can guess (busy people, those who aren’t interested in research) but we don’t know for sure.
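As a rough back-of-the-envelope sketch (every conversion rate below is hypothetical, just for illustration), the marketing funnel above is a simple multiplication of per-stage rates:

```python
def expected_respondents(footfall, stage_rates):
    """Estimate final respondents by multiplying per-stage conversion rates.

    stage_rates follow the funnel: see the poster -> think it's relevant
    -> want to engage -> actually make contact.
    """
    n = float(footfall)
    for rate in stage_rates:
        n *= rate
    return n

# Hypothetical figures: of 1,000 people through the surgery, 60% notice the
# poster, 20% of those find it relevant, 25% of those want to engage, and
# 30% of those actually get in touch.
estimate = expected_respondents(1000, [0.6, 0.2, 0.25, 0.3])
```

Even fairly generous rates at each stage compound down to a handful of respondents from a thousand visitors, which is why poster recruitment yields sit so far below direct-contact methods.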

 

Response rates vary greatly by method: by post 25% is really good, direct contact much higher, posters and flyers below 10%. However, you can improve these rates with careful planning, by considering carefully who will engage and why, and making it a good prospect to take part: describe the aims of the research, compensate time, and explain the proposed benefits. But you also need to take an ethical approach: don’t coerce, and don’t make promises you can’t keep. Check out the recruitment guidelines drawn up by the Association for Qualitative Research.

 

My personal experience tells me that most people who engage with qualitative research are lovely! They want to help if they can, and love an opportunity to talk about themselves and have their voice heard. Just be aware of what kinds of people end up being your respondents, and make sure you acknowledge the possibility of hidden voices from people who don’t engage for their own reasons.

 

Once you get to your analysis, don't forget to try Quirkos for free, and see how our easy-to-use software can make a real qualitative difference to your research project! To keep up to date with new blog articles on this, and other qualitative research topics, follow our Twitter feed: twitter.com/quirkossoftware.

 

 

Engaging qualitative research with a quantitative audience.

graphs of quantitative data in media

 

The last two blog posts were based on a talk I was invited to give at ‘Mind the Gap’, a conference organised by MDH RSA at the University of Sheffield. You can find the slides here, but they are not very text heavy, so don’t read well without audio!

 

The two talks which preceded me, by Professors Glynis Cousin and John Sandars, echoed quite a few of the themes. Professor Cousin spoke persuasively about reductionism in qualitative research, in her talk on the ‘Science of the Singular’ and the significance that can be drawn from a single case study. She argued that by necessity all research is reductive, and even ‘fictive’, but that doesn’t restrict what we can interpret from it.

 

Professor Cousin described how both Goffman (1961) and Ken Kesey (1962) did extensive ethnographies on mental asylums at about the same time, but one wrote a classic academic text, and the other the ‘fictive’ novel, One Flew Over the Cuckoo’s Nest. One could argue that both were very influential, but the different approaches to ‘writing-up’ appeal to different audiences.

 

That notion of writing for your audience was evident in Professor Sandars’ talk, and his concern for communication methods that have the most impact. Drawing from a variety of mixed-method research projects in education, he talked about choosing a methodology that has to balance the approach the researcher desires in their heart with what the audience will accept. It is little use choosing an action-research approach if the target audience (or journal editors) find it inappropriate in some way.

 

This sparked some debate about how well qualitative methods are accepted in mainstream journals, and whether there is a preference for publishing research based on quantitative methods. Some felt that authors are obliged to take a defensive stance when describing qualitative methods, further eating into the limited word counts that already cut so much detail from qualitative dissemination. The final speaker, Dr Kiera Barlett, also touched on this issue when discussing publication strategies for mixed-method projects. Should you have separate qualitative and quantitative papers for respective journals, or try to have publications that draw from all aspects of the study? Obviously this will depend on the field, findings and methods chosen, but it again raised a difficult issue.

 

Is it still the case that quantitative findings have more impact than qualitative ones? Do journal articles, funders and decision makers still have a preference for what are seen as more traditional statistical methodologies? From my own anecdotal position I would have to agree with most of these, although to be fair I have seen little evidence of funding bodies (at least in the UK, and in social sciences and health) having a strong bias against qualitative methods of inquiry.

 

However, during the discussion at the conference it was noted that the preference for ‘traditional’ methods is not just restricted to journal reviewers but extends to the culture of disciplines at large. This is often for good reason, and not restricted to a qualitative/quantitative divide: particular techniques and statistical tests tend to dominate, partly because they are well known. This has a great advantage: if you use a common indicator or test, people probably have a better understanding of the approach and its limitations, so can interpret the results better, and compare with other studies. With a novel approach, one could argue that readers also need to go and read all the references in the methodology section (which they may or may not bother to do), and that comparisons and research synthesis are made more difficult.

 

As for journal articles, participants pointed out that many online and open-access journals have removed word limits (or effectively done so by allowing hyperlinked appendices), making publication of long text based selections of qualitative data easier. However, this doesn’t necessarily increase palatability, and that’s why I want to get back to this issue about considering the audience for research findings, and choosing an appropriate medium.

 

It may be easy to say that if research is predominantly a quantitative world, then quantifying, summarising, and statistically analysing qualitative data is the way to go. But this is abhorrent, not just to the heart of a qualitative researcher, but also deceptive: imposing a quantitative fiction on a qualitative story. Perhaps the challenge is to think of approaches outside the written journal article. If we can submit a graphic novel as a PhD, or explain our research as a dance, we can reach new audiences, and engage in new ways with existing ones.

 

Producing graphs, pie charts, and even the bubble views in Quirkos are all ways that essentially summarise, quantify and potentially trivialise qualitative data. But if this allows us to access a wider audience used to quantitative methods, it may have a valuable utility, at least in providing that first engagement that makes a reader want to look in more detail. In my opinion, the worst research is that which stays unread on the shelf.

 

 

Is qualitative data analysis fracturing?

Having been to several international conferences on qualitative research recently, there has been a lot of discussion about the future of qualitative research, and the changes happening in the discipline and society as a whole. A lot of people have been saying that acceptance for qualitative research is growing in general: not only are there a large number of well-established specialist journals, but mainstream publications are accepting more papers based on qualitative approaches.


At the same time, there are more students in the UK at all levels, but especially starting Masters and PhD studies as I’ve noted before. While some of these students will focus solely on qualitative methods, many more will adopt mixed methods approaches, and want to integrate a smaller amount of qualitative data. Thus there is a strong need, especially at the Masters by research level, for software that’s quicker to learn, and can be well integrated into the rest of a project.


There is also the increasing necessity for academic researchers to demonstrate impact for their research, especially as part of the REF. There are challenges involved with doing this with qualitative research, especially summarising large bodies of data, and making them accessible for the general public or for targeted end users such as policy makers or clinicians. Quirkos has been designed to create graphical outputs for these situations, as well as interactive reports that end-users can explore in their own time.


But another common theme that has emerged is the possibility of the qualitative field fracturing as it grows. It seems that there are at least three distinct user groups emerging. Firstly, there are the traditional users of in-depth qualitative research, the general focus of CAQDAS software. They are experts in the field, are experienced with a particular software package, and run projects collecting data with a variety of methods, such as ethnography, interviews, focus groups and document review.


Recently there has been increased interest in text analytics: the application of ‘big data’ techniques to quantify qualitative sources of data. This is especially popular in social media, looking at millions of Tweets, texts, Facebook posts, or blogs on a particular topic. While commonly used in market research, there are also applications in social and political analysis, for example looking at thousands of newspaper articles for the portrayal of social trends. This ‘big data’ quantitative approach has never been a focus of Quirkos, although there are many tools out there that work in this way.


Finally, there is increasing interest in qualitative analysis from more mainstream users: people who want to do small qualitative research projects within their own organisation or business. Increasingly, people working in public sector organisations, HR or legal have text documents they need to manage and gain a deep understanding of.


It seems that a one-size-fits-all solution to training and software for qualitative data analysis is not going to be viable. It may even be the case that different factions of approaches and outcomes will emerge. In some ways this may not be too dissimilar to the different methodologies already used within academic research (i.e. grounded / emergent / framework analysis), but the number of ‘researchers’ and the variety of paradigms and fields of inquiry look to be increasing rapidly.


These are definitely interesting times to be working in qualitative research and qualitative data analysis. My only hope is that if such ‘splintering’ does occur, we keep learning from each other, and we keep challenging ourselves by exposure to alternative ways of working.

 

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 are now up online (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me). It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks were from experienced software trainers, about how people actually were using packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways, the low numbers of people using images and audio are not surprising, but it is a shame. Text is a lot quicker to skim through to find passages compared to audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts, or through automatic transcription, to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage: they record a research event, such as an interview or focus group, and have the text typed up word-for-word to analyse. But with this approach you risk losing all of the nuance we are attuned to hear in the spoken word: emphasis, emotion, sarcasm, any of which can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code with text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen back to any interesting or ambiguous sections and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making sure the text coding process in Quirkos is as good as it can be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, so that audio connects closely with the text, and can be coded and retrieved as easily as possible.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised this method as being how they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory is essentially finding themes from the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of people in this sample really not know what analytical approach they were using? Of course it could be simply that they know it by another name, Emergent Coding for example, or as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster, and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs that were more than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...

 

Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, attacking the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people, as the researchers get to decide what questions to ask, how to interpret and present the answers, and even what topics are worthy of study in the first place. In participatory research the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this seems to make sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: choosing which studies are the most important, deciding the focus, choosing the methods and collecting and contributing to the data.

 

But then what? There seems to be a step missing there, what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often, this process is a 'black box' operated by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did the researchers come to these particular findings and conclusions from all the different issues that the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the calculation of the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Jo.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like Nvivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to try and use it, with all the training and licensing implications of this, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics, analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like, and the results can be combined, with the team knowing who did what (if desired).

 

It can also become a dynamic group exercise, where with a tablet, large touch surface or projector, everyone can be 'hands on' at once. Rather than doing analysis on flip-charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to sharing the findings, rather than long spreadsheets of results, you get a picture: the bubbles tell the story and the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.

 

 

A new Qualitative Research Blog

While hosted by Quirkos, the main aim for this blog is to promote the wider use of qualitative research in general. We will link to other blogs and articles (not just academic), have guest bloggers, and welcome comments and discussion.

Qualitative research is a very powerful way to understand and fix our world, and one of the main aims in developing Quirkos was to make it possible for a much wider range of people to use qualitative software to understand their data.

To do this, we need to make more people aware not just of how to do qualitative research, but of the reasons for and benefits of doing so. In the next few weeks, we'll cover a basic overview of qualitative research, and some of the common methods for finding strong narratives. We'll also highlight some great examples, not just from the academic literature but also from wider sources, to show the power of understanding people's stories.