Paper vs. computer-assisted qualitative analysis

I recently read a great paper by Rettie et al. (2008) which, although based on a small sample size, found that only 9% of UK market research organisations doing qualitative research were using software to help with qualitative analysis.

 

At first this sounds very low, but it rings true with my own limited experience of market research firms, and of academic researchers. The first formal training courses I attended on qualitative analysis were run by the excellent Health Experiences Research Group at Oxford University, a team I would go on to work with later in my career. As a self-confessed computer geek, it was surprising for me to hear Professor Sue Ziebland convincingly argue for a method they call the One Sheet of Paper technique, immortalised as OSOP. This is essentially a way to develop a grounded theory or analytical approach by reducing the key themes to a diagram that can be summarised on a single piece of paper, a method that is still widely cited to this day.

 

However, the day also contained a series of what felt like ‘confessions’ about how much of people’s qualitative analysis was paper-based: printing out whole transcripts of interviews, highlighting sections, physically cutting and gluing text onto flipcharts, and dozens and dozens of multi-coloured Post-it notes! Personally, I think this is a fine method of analysis, as it keeps researchers close to the data and, assuming you have a large enough workspace, it lets you keep dozens of interviews and themes to hand. It’s also very good for teamwork, as the physicality gets everyone involved in reviewing codes and extracts.

 

In the last project I worked on, looking at evidence use in health decision making, we did most of the analysis in Excel, which was actually easier for the whole team to work with than any of the dedicated qualitative analysis software packages. However, we still relied heavily on paper: printing out the interviews and Excel spreadsheets, and using flip-chart paper, Post-it notes and marker pens in group analysis sessions. Believe me, I felt a pang of guilt for all the paper we used in each of these sessions, rainforests be damned! But it kept us inspired, engaged and close to the data, and let us work together.

 

So I can quite understand why so many academics and market research organisations choose not to use software packages: at the moment they don’t offer the visual connection to the data that paper annotations allow, it’s often difficult to see the different stages of the coding process, and it’s hard to produce reports and outputs that communicate properly.

 

The problem with this approach is the literal paper trail – how you turn all these iterations of coding schemes and analysis sessions into something you can write up and share with others, to justify the decisions that led to your conclusions. So I had to file all these flip-charts and annotated sheets, often taking photos of them so they could be shared with colleagues at other universities. It was a slow and time-consuming process, but it kept us close to the data.

 

When designing Quirkos, I have tried in some ways to replicate the paper-based analysis process. There’s a touch interface, reports that show all the highlighting in a Word document, and views that keep you close to the data. But I also want to combine this with all the advantages you get from a software package, not least the ability to search, shuffle dozens of documents, have more colours than a whole rainbow of Post-it notes, and the invaluable Undo button!

 

Software can also help keep track of many more topics and sources than most people (especially myself) can remember, and if there are a lot of different themes you want to explore in the data, software is really good at keeping them all in one place and making them easy to find. Working as part of a team, especially if some researchers work remotely or in a different organisation, can also be much easier with software. E-mailing a file is much simpler than sending a huge folder of annotated paper, and analysis can be combined and compared at any stage of the project.

 

Qualitative analysis software also lets you take different slices through the data, so you can compare responses grouped by any of the characteristics recorded for your sources. It's easy, for example, to look at all the comments from people in one location, or within a certain age range. This is certainly possible to do with qualitative data on paper as well, but software removes the need for a lot of paper shuffling, especially when you have a large number of respondents.
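To make the idea of 'slicing' a little more concrete, here is a minimal sketch of how coded extracts and source characteristics might sit together in a simple table, using the Python pandas library. The column names and example data (location, age, theme) are invented for illustration, and this is not how Quirkos itself stores projects – just a way of picturing what the software is doing for you behind the scenes.

```python
# A toy illustration of slicing coded extracts by source characteristics.
# The columns and data are invented; a real project would have many more.
import pandas as pd

extracts = pd.DataFrame([
    {"source": "Interview 01", "location": "Leeds",     "age": 27, "theme": "access", "extract": "The clinic is too far away for me..."},
    {"source": "Interview 02", "location": "Edinburgh", "age": 52, "theme": "access", "extract": "I can walk there in ten minutes."},
    {"source": "Interview 03", "location": "Leeds",     "age": 34, "theme": "trust",  "extract": "I never see the same GP twice."},
])

# Everything coded under 'access' from sources in one location...
print(extracts[(extracts["theme"] == "access") & (extracts["location"] == "Leeds")])

# ...or everything said by sources within a certain age range.
print(extracts[extracts["age"].between(25, 35)])
```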

 

But most importantly, I think software can allow more experimentation - you can try different themes, easily combine or break them apart, or even start from scratch again, knowing that the old analysis approach you tried is just a few clicks away. I think that the magic undo button also gives researchers more confidence in trying something out, and makes it easier for people to change their mind.

 

Many people I’ve spoken to have asked what the ‘competition’ for Quirkos is like, meaning what the other software packages do. But for me the real competitor is the tangible approach, and the challenge is to offer something that is the best of both worlds: a tool that not only apes the paper realm in a virtual space, but acknowledges the need to print things out and connect with physical workflows. I often want to review a coded project on paper, printing it off and reading it in a café, and Quirkos makes sure that all your coding can be visually displayed and shared in this way.

 

Everyone has a workflow for qualitative analysis that works for them, their team, and the needs of their project. I think the key is flexibility, and to think about a set of tools that can include paper and software solutions, rather than one approach that is a jack of all trades, and master of none.

 

Analysing text using qualitative software

I'm really happy to see that the talks from the University of Surrey CAQDAS 2014 conference (that's 'Computer Assisted Qualitative Data Analysis Software' to you and me) are now online. It was a great conference about the current state of software for qualitative analysis, but for me the most interesting talks came from experienced software trainers, on how people are actually using the packages in practice.

There were many important findings being shared, but for me one of the most striking was that people spend most of their time coding, and most of what they are coding is text.

In a small survey of CAQDAS users from a qualitative research network in Poland, Haratyk and Kordasiewicz found that 97% of users were coding text, while only 28% were coding images, and 23% directly coding audio. In many ways the low numbers of people using images and audio are not surprising, but it is a shame. Text is a lot quicker to skip through to find passages compared to audio, and most people (especially researchers) can read a lot faster than people speak. At the moment, most of the software available for qualitative analysis struggles to match audio with meaning, either by syncing up transcripts, or through automatic transcription to help people understand what someone is saying.

Most qualitative researchers use audio as an intermediary stage, to create a recording of a research event, such as an interview or focus group, and have the text typed up word-for-word to analyse. But with this approach you risk losing all of the nuance that we are attuned to hear in the spoken word – emphasis, emotion, sarcasm – and these can subtly or completely transform the meaning of the text. However, since audio is usually much more laborious to work with, I can understand why 97% of people code text. Still, I always try to keep the audio of an interview close to hand when coding, so that I can listen to any interesting or ambiguous sections and make sure I am interpreting them fairly.

Since coding text is what most people spend most of their time doing, we spent a lot of time making the text coding process in Quirkos as good as it could be. We certainly plan to add audio capabilities in the future, but this needs to be done carefully, to make sure the audio connects closely with the text and can be coded and retrieved as easily as possible.

 

But the main focus of the talk was the gaps in users' theoretical knowledge that the survey revealed. For example, when asked which analytical framework they used, only 23% of researchers described their approach as Grounded Theory. However, when the Grounded Theory approach was described in more detail, 61% of respondents recognised this method as being how they worked. You may recall from the previous top-down, bottom-up blog article that Grounded Theory is essentially finding themes from the text as they appear, rather than having a pre-defined list of what a researcher is looking for. An excellent and detailed overview can be found here.

Did a third of people in this sample really not know what analytical approach they were using? Of course, it could simply be that they know it by another name, Emergent Coding for example, or, as Dey (1999) laments, there may be “as many versions of grounded theory as there were grounded theorists”.

 

Finally, the study noted users' comments on the advantages and disadvantages of current software packages. People found that CAQDAS software helped them analyse text faster and manage lots of different sources. But they also mentioned a difficult learning curve, and licence costs higher than the monthly salary of a PhD student in Poland. Hopefully Quirkos will be able to help on both of these points...

 

Quirkos Beta Update!

We are busy at Quirkos HQ putting the finishing touches on the Beta version of Quirkos.

The Alpha was released five months ago, and since then we've collected feedback from people who've used Quirkos in a variety of settings to do all kinds of different research. We've also been adding a lot of features that were requested, and quite a few bonus ones too! We've made search much more powerful, created new graphical reports, and given people tools for getting reports and articles written quickly.

There has been a lot of interest in the next version, and we are excited to share it with people. But we also want to make sure that when it goes out it gives the best possible impression of what Quirkos will be like to use. We are planning to make the Beta available at the end of June to people who have signed up, so we can collect more feedback and be ready for a September launch. We'll be detailing some of the new features on the blog over the next few months, so watch this space!

Evaluating feedback

We all know the score: you attend a conference, business event, or training workshop, and at the end of the day you get a little form asking you to evaluate your experience. You can rate the speakers, venue, lunch and parking on a scale of one to five, and tick a box to say whether you would recommend the event to a friend or colleague.

But what about the other part of the evaluation: the open comments box? What was your favourite part of the day? What could we improve for next time? Any other comments? Hopefully someone is going to spend time typing up all these comments, and see if there are some common themes or good suggestions they can use to improve the event next year. Even if you are using a nifty on-line survey system like SurveyMonkey, does someone read and act on the suggestions you spent all that time writing?

And what about feedback on a product, or on service in a hotel or restaurant? Does something actually happen to all those comments, or as one conference attendee once suggested to me, do they all end up on the floor?

In fact, this is a common problem in research. Even when written up, reports often just stay on the shelf, and don't influence practice or procedure. If you want decision makers to pay attention to participant feedback and evaluations, then you need to present them in a clear and engaging way.

 

For the numerical or discrete part of surveys, this is not usually too hard. You can put these values into Excel (or SPSS if you are statistically minded) and explore the data in pivot tables and bar graphs. Then you can see that the happiest attendees were the ones who ranked lunch as excellent, or that 76% of people would recommend the day to others.
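As a rough sketch of that quantitative side, here is how the discrete answers from a feedback form could be summarised in a couple of lines with the Python pandas library. The responses and column names below are made up purely for illustration.

```python
# A made-up set of feedback-form responses, summarised pivot-table style.
import pandas as pd

responses = pd.DataFrame([
    {"lunch": "excellent", "overall": 5, "recommend": True},
    {"lunch": "good",      "overall": 4, "recommend": True},
    {"lunch": "poor",      "overall": 2, "recommend": False},
    {"lunch": "excellent", "overall": 5, "recommend": True},
])

# Headline figure: what share of attendees would recommend the event?
print(f"{responses['recommend'].mean():.0%} would recommend the event")

# Pivot-table view: were the happiest attendees the ones who liked lunch?
print(responses.pivot_table(index="lunch", values="overall", aggfunc="mean"))
```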

Simple statistics and visualisations like this are a standard part of our language: we hear and see them in the news, at board meetings, even in football league tables. They communicate clearly and quickly.

But what about those written comments? In Excel you can't really see all the comments made by people who ranked the conference poorly, or see if the same suggestions are being made about workshop themes for next year.

That's what Quirkos aims to do: become the 'Excel of text'. It's software that everyone can use to explore, summarise and present text data in an intuitive way.

If you put all of your conference evaluations or customer feedback into Quirkos, you can quickly see all the comments made by people who didn't like your product, or everything that women aged 24-35 said about your service compared with men aged 45-64. By combining the numerical, discrete and text data, you have the power to explore the relationships between themes and the differences between respondents. Then you can share these findings as graphs, bubble maps or just the quotes themselves: quick and easy to understand.

This unlocks the power of comments from all your customers, because Quirkos allows you to see why they liked a particular product. And it gives you the chance to be a better listener: if your consumers have an idea for improving your product, you can make it pop out as clear as day.

Hopefully it also breaks a vicious circle: people don't bother leaving comments because they assume they aren't being read, and organisers stop asking for comments because those sections are ignored or only get generic responses.

 

So hopefully next time you fill out a customer feedback form or event evaluation, your comments will lead to direct improvements, rather than just being lost in translation.

Touching Text

Presenting Quirkos at the CAQDAS 2014 conference this month was the first major public demonstration of Quirkos, and what we are trying to do. It’s fair to say it made quite a splash! But getting to this stage has been part of a long process from an idea that came about many years ago.

Like many geeks on the internet, I’d been amazed by the work done by Jeff Han and colleagues at New York University on cheap, multi-touch interfaces. This was 2006, and the video went viral in a time before iPhones and tablets, when it looked like someone had finally worked out how to make the futuristic computer interface from Minority Report, which had come out in 2002. Others, such as Johnny Lee at Carnegie Mellon University, had worked out how the incredible technology in the controllers for the Wii could make touchscreen interactive whiteboards out of a £25 toy.

I’ve always been of the opinion that technology is only interesting when it is cheap: it can’t have an impact when it’s out of reach for a majority of people. Now, none of this stuff was particularly ground-breaking in itself, but these people were like heroes to me, for making something amazing out of bits and pieces that everyone could afford.

Meanwhile, I was trying to do qualitative analysis for my PhD [danfreak.net/thesis.html], and having an iBook that wouldn’t run any of the qualitative analysis packages, I cobbled together my own system: my first attempt at making a better qualitative research tool. It was based on a series of unique three-letter codes I’d insert into a sentence, and a Linux-based file search system called ‘Beagle’ which let me find any piece of text I’d tagged with a code across all of the files on my computer. Thus in one search I could see all the relevant bits of text from interviews, focus groups, diaries and notes. It was clunky, but it worked, and it was the beginning of something with potential.
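For the curious, something in the same spirit can be sketched in a few lines of Python. The [ANX]-style tag convention and the 'transcripts' folder below are hypothetical (my original system relied on Beagle's indexing rather than a script), but it shows the basic trick: type a short code straight into the text, then pull out every tagged sentence from every file in one search.

```python
# Sketch of the inline-code idea: find every sentence containing a tag like [ANX]
# across all the text files in a folder. The tag format and folder name are made up.
import pathlib
import re

def find_coded_sentences(folder: str, code: str):
    """Return (filename, sentence) pairs for every sentence containing [code]."""
    hits = []
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if f"[{code}]" in sentence:
                hits.append((path.name, sentence.strip()))
    return hits

for filename, sentence in find_coded_sentences("transcripts", "ANX"):
    print(f"{filename}: {sentence}")
```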

 

By 2009, I had my first proper research job in Oxford, and was spending my salary trying to make a touchscreen computer out of a £120 netbook and a touchscreen overlay I’d imported from China. In fact, I got through two of these laptops, after short-circuiting the motherboard of one while trying to cram the innards into a thin metal case. What excited me was the potential for a £150 touchscreen computer, with no keyboard, that you used like a ‘tablet’ from Star Trek. Then, while I was doing this, Apple came out with the long-anticipated iPad, which had the distinct advantage of being about ¼ of the thickness and weight!

But while all this was going on in my spare time, at work I was spending all day coding semi-structured interviews for a research project. The slow coding process was driving me mad, Nvivo was crashing frequently and corrupting the work when it did, and in the 21st century the interfaces were beginning to feel a whole generation behind.

And that’s where the idea came from: me speculating on what qualitative analysis would be like with a touch screen interface. What if you could do it on a giant tablet or digital whiteboard with a team of people? I drew sketches of bubbles (I’ve always liked playing with bubbles) that grew when you added text to them, integrating the interface and the visualisation, and showing relationships between the themes.

 

After this, the idea didn’t really progress until I was working in my next job, at Sheffield Hallam University. Again, qualitative analysis was giving me a headache, this time because we wanted to do analysis with participants and co-researchers, and most of the packages were too difficult to learn and too expensive for the whole team to get involved. A new set of colleagues shared my pain with the current CAQDAS software, and as no-one else seemed to be doing anything about it, I thought it was worth a try.

I took a course in programming user interfaces using cross-platform frameworks, and was able to knock up some barely functioning prototypes, at the time called ‘Qualia’. But again, things didn’t really progress until I left my job to focus on it full time, fleshing out the details and hiring the wonderful Adrian Lubik: a programmer who actually knows what he’s doing!

With the project gaining momentum, a better name was needed. Looking through classical Greek and Latin names, I came across ‘kirkos’, the Greek word that is the root of the word ‘circle’. Change the beginning to ‘Qu’ for qualitative, and voilà, Quirkos was born: Qualitative Circles. Something that very neatly summed up what I’d been working towards for nearly a decade.

In June we’ll be releasing the beta version to testers for the first time, and the final version will go on sale in September at a lower price point that means a lot more people can try qualitative research. It’s really exciting to be at this stage, with so much enthusiasm and anticipation building in the market. But it’s also just a beginning; we have a 5 year plan to keep adding unique features and develop Quirkos into something that is innovative at every stage of the research process. It’s been a long journey, but it’s great that so many people are now coming along!

Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or 'a priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding the researcher takes a more bottom-up approach, starting with the data and a blank sheet, noting themes as they read through the text.

 

Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an emergent approach to be completely naïve to the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.

 

I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is another sense in which qualitative coding can be top-down or bottom-up: the level of coding. A low 'level' of coding might be to create a set of simple themes, such as happy or sad, or apple, banana and orange. These are sometimes called manifest-level codes, and are purely descriptive. A higher level of coding might be something more like 'issues from childhood', fruit, or even 'things that can be juggled'. Here more meaning has been imposed, sometimes referred to as latent-level analysis.

 

 

Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different if using a top-down or bottom-up approach to building levels of coding. In one model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute that statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning from successive readings.
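As a toy way of picturing the two directions (with invented codes, and not tied to any particular software), a top-down coder starts from the broad theme and looks for the codes beneath it, while a bottom-up coder collects simple manifest codes first and only groups them into latent themes on a later pass:

```python
# Invented codes, just to picture the two directions of building coding levels.

# Top-down: start with a broad theme, then break it into supporting codes.
top_down = {
    "issues from childhood": ["school", "siblings", "moving house"],
}

# Bottom-up: start with simple, descriptive (manifest) codes...
manifest_codes = ["apple", "banana", "orange", "happy", "sad"]

# ...then group them into broader (latent) themes on a later reading.
bottom_up = {
    "fruit": ["apple", "banana", "orange"],
    "emotions": ["happy", "sad"],
}

print(top_down)
print(bottom_up)
```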

 

So which approach is best? Obviously, it depends: not just on how well the topic area is understood, but also on how engaged the particular researcher is with it. Yet complementary methods can be useful here: the PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (in both senses) to test their assumptions. Meanwhile, a researcher who is new to the project or field is in a good position to start from the bottom up, and see if they can find answers to the research questions from the basic observations as they emerge from the text. If the two routes independently converge on the same themes and conclusions, it is a good indication that the inferences are well supported by the text!

 


Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move to a model where they are conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, which attack the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people, as the researchers get to decide what questions to ask, how to interpret and present the answers, and even what topics are worthy of study in the first place. In participatory research the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this seems to make sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: choosing which studies are the most important, deciding the focus, choosing the methods and collecting and contributing to the data.

 

But then what? There seems to be a step missing here: what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often this process is a 'black box' conducted by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did the researchers arrive at their particular findings and conclusions from all the different issues that the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Jo.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like Nvivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to use it, with all the training and licensing implications that brings, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics, analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like, and the results can be combined, with the team knowing who did what (if desired).

 

It can also become a dynamic group exercise, where, with a tablet, large touch surface or projector, everyone can be 'hands on' at once. Rather than doing analysis on flip-charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to sharing the findings, rather than long spreadsheets of results you get a picture – the bubbles tell the story and show the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.

 

 

True cross-platform support

Another key aim for Quirkos was to have proper multi-platform support. By that, I mean that it doesn't matter if you are using a desktop or laptop running Windows, a Mac, Linux, or a tablet, Quirkos is the same across them all. You can swap files between different operating systems without needing to convert them, and the interface is the same for everyone. Magic!

This seems like such a simple goal, but Quirkos will be the first qualitative analysis package to achieve it, and support here has not been good enough for far too long. It has been a real pain when team members have different computers and people can't share their data and files.

While it's great that some of the big players are finally releasing Mac versions of their software, these have different interfaces to learn, have fewer features, and can't talk seamlessly with the Windows versions. Quirkos says: it shouldn't matter. You can pick up an Android tablet right now, send your Quirkos file to a colleague using a Mac or Windows computer, and explore it using the same interface: an interface that is visual and intuitive, where you don't need to learn any technical query languages or computer jargon.

Finally, qualitative data analysis shouldn't require the most powerful computer your department can afford, with as much RAM as you can fit in it. The header image above shows Quirkos purring away on an old 2008 netbook (!) running XP, and it still searches faster than certain other qualitative analysis software running on my quad-core desktop PC with 8GB of RAM.

This is becoming an embarrassingly geeky post, but the point is that with Quirkos these stats don't matter anymore. You don't need to worry about what platforms your colleagues are using; you can just share with them. And because it works so much faster, you can play with and explore your data in a new way.

Until now, many people I know have preferred to do their analysis on paper, and I don't blame them. But finally there is software that just gets out of the way and puts your data first and foremost, regardless of what you have to run it on.

10 tips for semi-structured qualitative interviewing

Many qualitative researchers spend a lot of time interviewing participants, so here are some quick tips to make interviews go as smoothly as possible: before, during and after!

 

1. Let your participants choose the location

If you want your interviewees to be comfortable sharing sometimes personal or sensitive information, make sure they can do it in a comfortable location. For some people this might be their own house, or neutral territory like a local cafe. Giving them the choice can help build trust, and gives the right impression: that you are accommodating them. However, make it clear that you need a relatively quiet location free from interruptions: a pub that plays loud music will not only stop you hearing each other, but usually makes recordings unusable!

 

2. Remember that they are helping you

Be polite and courteous, and be grateful to them for sharing their time and experiences. This always gets interviews off on the right foot. Also, try to think about participants' motivations for taking part. Do they want the research to help others? Are they looking for a therapeutic discussion? Do they just like a chat? Understanding this will help you guide the interview, and make sure you meet their expectations.

 

3. A conversation, not an interrogation!

Interviews work best when they are a friendly dialogue: don't be afraid to start with some small talk, even when the tape is running. It turns a weird situation into a much more normal human experience, and starting with some easy 'starter for 10' questions helps people open up. Even a chatty "How did you hear about the project?" can give you useful information.

 

4. Memorise the topic guide, but keep it to hand

Knowing all the questions in the topic guide can really help, so group them thematically and memorise them as much as you can. It will really help the flow of the conversation if you can segue seamlessly from one question to another relevant one. However, it's always useful to keep a print-out in front of you, not just in case you forget something, but also because it makes you seem more human, with a specific role. Joking about remembering all the questions is a great icebreaker, and it gives you something to look at other than the participant, which stops the session turning into a staring match!

 

5. Use open body language and encouraging cues

Face the participant in a friendly way, and nod or look sympathetic at the right times. It's sometimes tempting for the interviewer to keep quiet during the responses and leave out the normal encouraging noises like "Yeah", "Hmm" or "Right", knowing how odd these read in a transcript. But these are important cues that people use to know when to keep talking, so if you are going to drop them, make sure you make positive eye contact and nod at the right times instead!

 


6. Write notes, even if you don't use them

It always helps me to scribble down some one-word notes on the topic guide while doing an interview: first of all it helps focus my thoughts, and reminds me about interesting things the participant mentioned that I want to go back to. But it also helps show you are listening, and makes sure that if the recording goes wrong, there is something to fall back on.

 

7. Write up the interview as soon as you finish

Just take 15 minutes after each interview to reflect: the main points that came up, how open the respondent was, any context or distractions that might have impaired the flow. This helps you think about things to do better in the next interview, and will help you remember each interview later on.

 

8. Return to difficult issues

If a particular topic is clearly a difficult question (either emotionally, or just because someone can't remember) don't be afraid to leave the topic and come back to it later, asking in a different way. It can really help recall to have a break talking about something easier, and then approach the issue sideways later on.

 

9. Ask stupid questions

Don't assume you know anything. In these kinds of interviews, it's usually not about getting the right answer, but getting the respondent's view or opinion. Asking 'What do you mean by family?' is really useful if you discover someone has adopted children, step-sisters and a beloved family dog that all share the house. Don't make any assumptions, let people tell you what they mean. Even if you have to ask something that makes you sound ignorant on a specialist subject, you could discover that someone didn't know the difference between their chemotherapy and radiotherapy.

 

10. Say thank you

And follow up: send a nice card after the interview, don't be like a date they never hear from again! Also, try to make sure they get a summary of the findings of the study they took part in. It's not just about being nice, but about making sure people have a good experience as a research subject, and will want to be involved in the next project that comes along, which might be yours or mine!

 

I hope these tips have been helpful. Don't forget that Quirkos makes your transcribed interviews easy to analyse, as well as making the process visual and engaging. Find out more and download a free trial from our website. Our blog is updated with articles like this every week, and you can hear about it first by following our Twitter feed @quirkossoftware.

 

 

Quirkos is coming...


Quirkos is intended to be a big step forward for qualitative research. The central idea is to make text analysis so easy that anyone can do it.

That includes people who don't know what qualitative analysis is, or that it could help them to better understand their world. This could be a council or hospital trust wanting to better understand the needs of people that use their services, or a team developing a new product, wanting feedback from users and consumers.

And for experienced researchers too, the goal was to create software that helps people engage with their data, rather than being a barrier to it. Over the last decade I've used a variety of approaches to analysing qualitative research, and many colleagues and I felt that there had to be a better way.

Quirkos aims to make software that easily manages large projects, searches them quickly, and keeps them secure; that visualises data on the fly, so findings come alive and can be shared with a team of people; and that provides powerful tools to sort and understand the connections in the data.

After years of planning, these pieces are finally coming together, and the prototype is already something that I prefer using to any of the other qualitative software packages out there. In the next few weeks, the first version of Quirkos will be sent to intrepid researchers around the globe to test in their work. A few months later, we'll be ready to share a polished version with the world, and we're really excited that it will work for everyone: with any level of experience, and on pretty much any computer too.

There are a lot of big firsts in Quirkos, and it's going to be exciting sharing them here over the next few weeks!