Evaluating feedback

We all know the score: you attend a conference, business event, or training workshop, and at the end of the day you get a little form asking you to evaluate your experience. You can rate the speakers, venue, lunch and parking on a scale from one to five, and tick to say whether you would recommend the event to a friend or colleague.

But what about the other part of the evaluation: the open comments box? What was your favourite part of the day? What could we improve for next time? Any other comments? Hopefully someone is going to spend time typing up all these comments, and see if there are some common themes or good suggestions they can use to improve the event next year. Even if you are using a nifty online survey system like SurveyMonkey, does someone read and act on the suggestions you spent all that time writing?

And what about feedback on a product, or on service in a hotel or restaurant? Does something actually happen to all those comments, or as one conference attendee once suggested to me, do they all end up on the floor?

In fact, this is a common problem in research. Even when written up, reports often just stay on the shelf, and don't influence practice or procedure. If you want decision makers to pay attention to participant feedback and evaluations, then you need to present them in a clear and engaging way.

 

For the numerical or discrete part of surveys, this is not usually too hard. You can put these values in Excel (or SPSS if you are statistically minded) and explore the data in pivot tables and bar graphs. Then you can see that the happiest attendees were the ones who ranked lunch as excellent, or that 76% of people would recommend the day to others.
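
As a toy illustration, that kind of summary takes only a few lines of Python with pandas. This is just a sketch of the idea; the file name and column names are hypothetical, not from any real survey.

```python
# A minimal sketch of summarising the numerical survey answers, assuming a
# hypothetical CSV with 'lunch_rating', 'overall_rating' and 'would_recommend' columns.
import pandas as pd

responses = pd.read_csv("conference_feedback.csv")  # hypothetical file

# Cross-tabulate lunch ratings against overall satisfaction (a simple pivot table)
print(pd.crosstab(responses["lunch_rating"], responses["overall_rating"]))

# Percentage of attendees who would recommend the event
recommend_pct = (responses["would_recommend"] == "yes").mean() * 100
print(f"{recommend_pct:.0f}% of people would recommend the day to others")
```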

Simple statistics and visualisations like this are a standard part of our language: we hear and see them in the news, at board meetings, even in football league tables. They communicate clearly and quickly.

But what about those written comments? In Excel you can't really see all the comments made by people who ranked the conference poorly, or see if the same suggestions are being made about workshop themes for next year.

That's what Quirkos aims to do: become the 'Excel of text'. It's software that everyone can use to explore, summarise and present text data in an intuitive way.

If you put all of your conference evaluations or customer feedback in Quirkos, you can quickly see all the comments made by people who didn't like your product. Or everything that women aged 24-35 said about your service compared with men aged 45-64. By combining the numerical, discrete and text data, you have the power to explore the relationships between themes and the differences between respondents. Then you can share these findings as graphs, bubble maps or just the quotes themselves: quick and easy to understand.

This unlocks the power of comments from all your customers, because Quirkos allows you to see why they liked a particular product. And it gives you the chance to be a better listener: if your consumers have an idea for improving your product, you can make it pop out as clear as day.

Hopefully it also breaks a vicious circle: people don't bother leaving comments because they assume they aren't being read, and organisers stop asking for comments because those sections are ignored or only give generic responses.

 

So hopefully next time you fill out a customer feedback form or event evaluation, your comments will lead to direct improvements, rather than just being lost in translation.

Touching Text

Presenting at the CAQDAS 2014 conference this month was the first major public demonstration of Quirkos and what we are trying to do. It’s fair to say it made quite a splash! But getting to this stage has been a long process, starting from an idea that came about many years ago.

Like many geeks on the internet, I’d been amazed by the work done by Jeff Han and colleagues at New York University on cheap, multi-touch interfaces. This was 2006, and the video went viral in a time before iPhones and tablets, when it looked like someone had finally worked out how to make the futuristic computer interface from Minority Report, which had come out in 2002. Others, such as Johnny Lee at Carnegie Mellon University, had worked out how the incredible technology in the Wii’s controllers could make touchscreen interactive whiteboards out of a £25 toy.

I’ve always been of the opinion that technology is only interesting when it is cheap: it can’t have an impact when it’s out of reach for a majority of people. Now, none of this stuff was particularly ground-breaking in itself, but these people were like heroes to me, for making something amazing out of bits and pieces that everyone could afford.

Meanwhile, I was trying to do qualitative analysis for my PhD [danfreak.net/thesis.html], and having an iBook that wouldn’t run any of the qualitative analysis packages, I cobbled together my own system: my first attempt at making a better qualitative research tool. It was based on a series of unique three-letter codes I’d insert into a sentence, and a Linux-based file search system called ‘Beagle’ which let me see every piece of text I’d assigned a code across any of the files on my computer. Thus in one search I could see all the relevant bits of text from interviews, focus groups, diaries and notes. It was clunky, but it worked, and was the beginning of something with potential.
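
For the curious, the idea behind that hacked-together system can be sketched in a few lines of Python. This is a loose reconstruction rather than the original scripts; the folder, file pattern and the 'XQZ' code tag are invented for illustration.

```python
# A loose reconstruction of the three-letter-code idea: every coded passage
# carries a short tag typed into the text, and one search across all the files
# pulls out the sentences containing that tag. Paths and the 'XQZ' tag are
# purely illustrative.
import glob
import re

def find_coded_text(code, pattern="transcripts/*.txt"):
    """Return (file, sentence) pairs for every sentence containing the code tag."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # Crude sentence split, then keep sentences that contain the tag
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if code in sentence:
                hits.append((path, sentence.strip()))
    return hits

for path, sentence in find_coded_text("XQZ"):
    print(f"{path}: {sentence}")
```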

 

By 2009, I had my first proper research job in Oxford, and was spending my salary trying to make a touchscreen computer out of a £120 netbook and a touchscreen overlay I’d imported from China. In fact, I got through two of these laptops, after short-circuiting the motherboard of one while trying to cram the innards into a thin metal case. What excited me was the potential for a £150 touchscreen computer, with no keyboard, that you used like a ‘tablet’ from Star Trek. Then, while I was doing this, Apple came out with the long-anticipated iPad, which had the distinct advantage of being about ¼ of the thickness and weight!

But while all this was going on in my spare time, at work I was spending all day coding semi-structured interviews for a research project. The slow coding process was driving me mad, Nvivo was crashing frequently and corrupting all the work when it did, and the interfaces I was using in the 21st century were beginning to feel a whole generation behind.

And that’s where the idea came from: me speculating on what qualitative analysis would be like with a touch screen interface. What if you could do it on a giant tablet or digital whiteboard with a team of people? I drew sketches of bubbles (I’ve always liked playing with bubbles) that grew when you added text to them, integrating the interface and the visualisation, and showing relationships between the themes.

 

After this, the idea didn’t really progress until I was working in my next job, at Sheffield Hallam University. Again, qualitative analysis was giving me a headache, this time because we wanted to do analysis with participants and co-researchers, and most of the packages were too difficult to learn and too expensive for the whole team to get involved. A new set of colleagues shared my pain with the current CAQDAS software, and as no-one else seemed to be doing anything about it, I thought it was worth giving it a try.

I took a course in programming user interfaces using cross-platform frameworks, and was able to knock up some barely functioning prototypes, at the time called ‘Qualia’. But again, things didn’t really progress until I left my job to focus on it full time, fleshing out the details and hiring the wonderful Adrian Lubik: a programmer who actually knows what he’s doing!

With the project gaining momentum, a better name was needed. Looking through classical Greek and Latin names, I came across ‘kirkos’, the Greek root of the word ‘circle’. Change the beginning to ‘Qu’ for qualitative, and voilà, Quirkos was born: Qualitative Circles. Something that very neatly summed up what I’d been working towards for nearly a decade.

In June we’ll be releasing the beta version to testers for the first time, and the final version will go on sale in September at a lower price point that means a lot more people can try qualitative research. It’s really exciting to be at this stage, with so much enthusiasm and anticipation building in the market. But it’s also just a beginning; we have a five-year plan to keep adding unique features and develop Quirkos into something that is innovative at every stage of the research process. It’s been a long journey, but it’s great that so many people are now coming along!

Top-down or bottom-up qualitative coding?

In framework analysis, sometimes described as a top-down or 'a priori' approach, the researcher decides on the topics of interest they will look for before they start the analysis, usually based on a theory they are looking to test. In inductive coding the researcher takes a more bottom-up approach, starting with the data and a blank sheet, noting themes as they read through the text.

 

Obviously, many researchers take a pragmatic approach, integrating elements of both. For example, it is difficult for a researcher taking an inductive approach to be completely naïve to the topic before they start, and they will have some idea of what they expect to find. This may create bias in any emergent themes (see previous posts about reflexivity!). Conversely, it is common for researchers to discover additional themes while reading the text, illustrating an unconsidered factor and necessitating the addition of extra topics to an a priori framework.

 

I intend to go over these inductive and deductive approaches in more detail in a later post. However, there is another sense in which qualitative coding can be top-down or bottom-up: the level of coding. A low 'level' of coding might be to create a set of simple themes, such as happy or sad, or apple, banana and orange. These are sometimes called manifest level codes, and are purely descriptive. A higher level of coding might be something more like 'issues from childhood', fruit, or even 'things that can be juggled'. Here more meaning has been imposed, sometimes referred to as latent level analysis.

 

 

Usually, researchers use an iterative approach, going through the data and themes several times to refine them. But the procedure will be quite different depending on whether a top-down or bottom-up approach is used to build the levels of coding. In one model the researcher starts with broad statements or theories, and breaks them down into more basic observations that support or refute that statement. In the bottom-up approach, the researcher might create dozens of very simple codes, and eventually group them together, find patterns, and infer a higher level of meaning from successive readings.
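
As a toy sketch of that bottom-up grouping, using the fruit and emotion codes from the earlier example (the counts and the exact groupings are invented purely for illustration), the roll-up from manifest codes to latent themes might look something like this:

```python
# Rolling simple (manifest) codes up into broader (latent) themes.
# The codes, groupings and counts are invented purely for illustration.
from collections import Counter

# How often each simple descriptive code was applied while reading the text
manifest_counts = Counter({"apple": 12, "banana": 7, "orange": 5, "happy": 9, "sad": 4})

# Grouping decided after several readings of the data
latent_themes = {
    "apple": "fruit", "banana": "fruit", "orange": "fruit",
    "happy": "emotional state", "sad": "emotional state",
}

theme_counts = Counter()
for code, count in manifest_counts.items():
    theme_counts[latent_themes[code]] += count

print(theme_counts)  # Counter({'fruit': 24, 'emotional state': 13})
```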

 

So which approach is best? Obviously, it depends: not just on how well the topic area is understood, but also on the engagement of the particular researcher. Yet complementary methods can be useful here: the PI of the project, having a solid conceptual understanding of the research issue, can use a top-down approach (in both senses) to test their assumptions. Meanwhile, a researcher who is new to the project or field could be in a good position to start from the bottom up, and see if they can find answers to the research questions starting from basic observations as they emerge from the text. If the themes and conclusions from the two approaches then independently converge, it is a good indication that the inferences are well supported by the text!

 


Participatory analysis: closing the loop

In participatory research, we try to get away from the idea of researchers doing research on people, and move towards a model of conducting research with people.

 

The movement comes partly from feminist critiques of epistemology, attacking the pervasive notion that knowledge can only be created by experienced academics. The traditional way of doing research generally disempowers people, as the researchers get to decide what questions to ask, how to interpret and present the answers, and even what topics are worthy of study in the first place. In participatory research the people who are the focus of the research are seen as the experts, rather than the researchers. At face value, this seems to make sense. After all, who knows more about life on a council estate: someone who has lived there for 20 years, or a middle-class outside researcher?

 

In participatory research, the people who are the subject of the study are often encouraged to be a much greater part of the process, active participants rather than aliens observed from afar. They know they are taking part in the research process, and the research is designed to give them input into what the study should be focusing on. The project can also use research methods that allow people to have more power over what they share, for example by taking photos of their environment, having open group discussions in the community, or using diaries and narratives in lieu of short questionnaires. Groups focused on developing and championing this work include the Participatory Geographies working group of the RGS/IBG, and the Institute of Development Studies at the University of Sussex.

 

This approach is becoming increasingly accepted in mainstream academia, and many funding bodies, including the NIHR, now require all proposals for research projects to have had patient or 'lay-person' involvement in the planning process, to ensure the design of the project is asking the right questions in an appropriate way. Most government funded projects will also stipulate that a summary of findings should be written in a non-technical, freely available format so that everyone involved and affected by the research can access it.

 

Engaging with analysis

Sounds great, right? In a transparent way, non-academics are now involved in everything: choosing which studies are the most important, deciding the focus, choosing the methods and collecting and contributing to the data.

 

But then what? There seems to be a step missing: what about the analysis?

 

It could be argued that this is the most critical part of the whole process, where researchers summarise, piece together and extrapolate answers from the large mass of data that was collectively gathered. But far too often, this process is a 'black box' conducted by the researchers themselves, with little if any input from the research participants. It can be a mystery to outsiders: how did the researchers come to their particular findings and conclusions from all the different issues that the research revealed? What was discarded? Why was the data interpreted in this way?

 

This process is usually glossed over even in journal articles and final reports, and explaining it to participants is difficult. Often this is a technical limitation: if you are conducting a multi-factor longitudinal study, the statistical analysis is usually beyond all but the most mathematically minded academics, let alone the average Jo.

 

Yet this is also a problem in qualitative research, where participatory methods are often used. Between grounded theory, framework analysis and emergent coding, the approach is complicated and contested even within academia. Furthermore, qualitative analysis is a very lengthy process, with researchers reading and re-reading hundreds or thousands of pages of text: a prospect unappealing to often unpaid research participants.

 

Finally, the existing technical solutions don't seem to help. Software like Nvivo, often used for this type of analysis, is daunting for many researchers without training, and encouraging people from outside the field to try and use it, with all the training and licensing implications of this, makes for an effective brick wall. There are ways to make analysis engaging for everyone, but many research projects don't attempt participation at the analysis stage.

 

Intuitive software to the rescue?

By making qualitative analysis visual and engaging, Quirkos hopes to make participatory analysis a bit more feasible. Users don't require lengthy training, and everyone can have a go. They can make their own topics, analyse their own transcripts (or other people's), and individuals in a large community group can go away and do as little or as much as they like, and the results can be combined, with the team knowing who did what (if desired).

 

It can also become a dynamic group exercise, where with a tablet, large touch surface or projector, everyone can be 'hands on' at once. Rather than doing analysis on flip-charts that someone has to take away and process after the event, the real coding and analysis is done live, on the fly. Everyone can see how the analysis is building, and how the findings are emerging as the bubbles grow. Finally, when it comes to sharing the findings, rather than long spreadsheets of results you get a picture: the bubbles tell the story and show the issues.

 

Quirkos offers a way to practically and affordably facilitate proper end-to-end participatory research, and finally close the loop to make participation part of every stage in the research process.

 

 

True cross-platform support

Another key aim for Quirkos was to have proper multi-platform support. By that, I mean that it doesn't matter whether you are using a desktop or laptop running Windows, a Mac, Linux, or a tablet: Quirkos is the same across them all. You can swap files between different operating systems without needing to convert them, and the interface is the same for everyone. Magic!

This seems like such a simple goal, but Quirkos will be the first qualitative analysis package to achieve it, and cross-platform support is something that has not been good enough for far too long. It has been a real pain when team members have different computers and people can't share their data and files.

While it's great that some of the big players are finally releasing Mac versions of their software, these have different interfaces to learn, have fewer features, and can't talk seamlessly with the Windows versions. Quirkos says: it shouldn't matter. You can pick up an Android tablet right now, send your Quirkos file to a colleague using a Mac or Windows computer, and explore it using the same interface: an interface that is visual and intuitive, where you don't need to learn any technical query languages or computer jargon.

Finally, qualitative data analysis shouldn't require the most powerful computer your department can afford, with as much RAM as you can fit in it. The header image above shows Quirkos purring away on an old 2008 netbook (!) running XP, and it still searches faster than certain other qualitative analysis software running on my quad-core desktop PC with 8GB of RAM.

This is becoming an embarrassingly geeky post, but the point is that with Quirkos these stats don't matter anymore. You don't need to worry about what platforms your colleagues are using, you can just share with them. And because it works so much faster, you can play with and explore your data in a new way.

Until now, many people I know have preferred to do their analysis on paper, and I don't blame them. But finally there is software that just gets out of the way, and puts your data first and foremost, regardless of what you have to run it on.

10 tips for semi-structured qualitative interviewing

Many qualitative researchers spend a lot of time interviewing participants, so here are some quick tips to make interviews go as smoothly as possible: before, during and after!

 

1. Let your participants choose the location

If you want your interviewees to be comfortable sharing sometimes personal or sensitive information, make sure they can do it in a comfortable location. For some people, this might be their own house, or neutral territory like a local cafe. Giving them the choice can help build trust, and gives the right impression: that you are accommodating them. However, make sure you make it clear that you need a relatively quiet location free from interruptions: a pub that plays loud music will not only stop you hearing each other, but usually makes recordings unusable!

 

2. Remember that they are helping you

Be polite and courteous, and be grateful to them for sharing their time and experiences. This always gets interviews off on the right foot. Also, try and think about participants' motivations for taking part. Do they want the research to help others? Are they looking for a therapeutic discussion? Do they just like a chat? Understanding this will help you guide the interview, and make sure you meet their expectations.

 

3. A conversation, not an interrogation!

Interviews work best when they are a friendly dialogue: don't be afraid to start with some small talk, even when the tape is running. It turns a weird situation into a much more normal human experience, and starting with some easy 'starter for 10' questions helps people open up. Even a chatty "How did you hear about the project?" can give you useful information.

 

4. Memorise the topic guide, but keep it to hand

Knowing all the questions in the topic guide can really help, so group them thematically, and memorise them as much as you can. It will really help the flow of information if you can segue seamlessly from one question to another relevant one. However, it's always useful to keep a print-out in front of you, not just in case you forget something, but also to make you seem more human, with a specific role. Joking about remembering all the questions is a great icebreaker, and the print-out gives you something to look at other than the participant, to stop the session turning into a staring match!

 

5. Use open body language and encouraging cues

Face the participant in a friendly way, and nod or look sympathetic at the right times. Sometimes it's tempting for the interviewer to keep quiet during the responses, and not put in any normal encouraging noises like "Yeah", "Hmm" or "Right" knowing how odd these read in a transcript. But these are important cues that people use to know when to keep talking, so if you are going to drop them, make sure you make positive eye contact, and nod at the right times instead!

 


6. Write notes, even if you don't use them

It always helps me to scribble down some one-word notes on the topic guide while doing an interview: first of all, it helps focus my thoughts, and reminds me about interesting things the participant mentioned that I want to go back to. But it also helps show you are listening, and means that if the recording goes wrong, there is something to fall back on.

 

7. Write up the interview as soon as you finish

Just take 15 minutes after each interview to reflect: the main points that came up, how open the respondent was, any context or distractions that might have impaired the flow. This helps you think about things to do better in the next interview, and will help you later to remember each interview.

 

8. Return to difficult issues

If a particular topic is clearly a difficult question (either emotionally, or just because someone can't remember) don't be afraid to leave the topic and come back to it later, asking in a different way. It can really help recall to have a break talking about something easier, and then approach the issue sideways later on.

 

9. Ask stupid questions

Don't assume you know anything. In these kinds of interviews, it's usually not about getting the right answer, but getting the respondent's view or opinion. Asking 'What do you mean by family?' is really useful if you discover someone has adopted children, step-sisters and a beloved family dog that all share the house. Don't make any assumptions, let people tell you what they mean. Even if you have to ask something that makes you sound ignorant on a specialist subject, you could discover that someone didn't know the difference between their chemotherapy and radiotherapy.

 

10. Say thank you

And follow up: send a nice card after the interview, don't be like a date they never hear from again! Also, try and make sure they get a summary of the findings of the study they took part in. It's not just about being nice, but about making sure people have a good experience as a research subject, and will want to be involved in the next project that comes along, which might be yours or mine!

 

I hope these tips have been helpful. Don't forget Quirkos makes your transcribed interviews easy to analyse, as well as making the process visual and engaging. Find out more and download a free trial from our website. Our blog is updated with articles like this every week, and you can hear about it first by following our Twitter feed @quirkossoftware.

 

 

Quirkos is coming...


Quirkos is intended to be a big step forward for qualitative research. The central idea is to make text analysis so easy, that anyone can do it.

That includes people who don't know what qualitative analysis is, or that it could help them to better understand their world. This could be a council or hospital trust wanting to better understand the needs of people that use their services, or a team developing a new product, wanting feedback from users and consumers.

And for experienced researchers too, the goal was to create software that helps people engage with their data, rather than being a barrier to it. Over the last decade I've used a variety of approaches to analysing qualitative research, and many colleagues and I felt that there had to be a better way.

Quirkos aims to make software to easily manage large projects, search them quickly, and keep them secure. To visualise data on the fly, so findings come alive and are sharable with a team of people. And finally to make powerful tools to sort and understand the connections in the data.

After years of planning, these pieces are finally coming together, and the prototype is already something that I prefer using to any of the other qualitative software packages out there. In the next few weeks, the first version of Quirkos will be sent to intrepid researchers around the globe to test in their work. A few months later, we'll be ready to share a polished version with the world, and we're really excited that it will work for everyone: with any level of experience, and on pretty much any computer too.

There are a lot of big firsts in Quirkos, and it's going to be exciting sharing them here over the next few weeks!

An overview of qualitative methods

There are a lot of different ways to collect qualitative data, and this article just provides a brief summary of some of the main methods used in qualitative research. Each one is an art in its own right, with various different techniques, definitions, approaches and proponents.

More on each one will follow in later articles, and it’s worth remembering that these need to be paired with the right questions, sampling, and analysis to get good results.

Interviews

Possibly the richest, and most powerful tool: talking to someone directly. The classic definition is “conversations with a purpose”, the idea being that there is something you are interested in, you ask questions about it, and someone gives useful responses.

There are many different styles, for example in how structured your questions are (this paper has a wonderful and succinct overview in the introduction). These can range from a rigid script where you ask the same questions every time, to a completely open discussion, where the researcher and respondent have freedom to shape the conversation. A common middle ground is the semi-structured interview, which often has a topic guide listing particular issues to discuss, but allows questions for clarification, or to follow up on an interesting tangent.

Participant Observation

Often the remit of ethnography or sociology, participant observation usually involves watching, living alongside, or even participating in the daily life of research subjects. However, it can also involve just watching people in a certain setting, such as a work meeting, or shopping in a supermarket.

This is probably the most time intensive and potentially problematic method, as it can involve weeks or even years of placement for a researcher, often on their own. However, it does produce some of the richest data, as well as a level of depth that can really help explain complex issues. This chapter is a fine starting point.

Focus groups

A common method used in market research, where a researcher leads a group discussion on a particular topic. However, it is also a powerful tool for social researchers, especially when looking at group dynamics, or the reactions of particular groups of people. It’s obviously important to consider who is chosen for the group, and how the interactions of people in the group affect the outcome (although this might be what you are looking for).

It’s usually a quicker and cheaper way of gauging many reactions and opinions, but requires some skill in the facilitator to make sure everyone’s voice is being heard, and that people stay on track. Also a headache for any transcribers who have to identify different voices from muffled audio recordings!

Participant Diaries

Getting people to write a diary for a research project is a very useful method, and it is commonly used to look at taboo behaviours such as drug use or sexuality, not just because researchers don’t have to ask difficult questions face-to-face, but also because data can be collected over a long period of time. If you are trying to find out how often a particular behaviour occurs, a daily or weekly record is likely to be more accurate than asking someone in a single interview (as in the studies above).

There are other benefits to the diary method: not least that the participant is in control. They can share as much or as little as they like, and only on topics they wish to. It can also be therapeutic for some people, and is more flexible in terms of time. Diaries can be paper based, electronic, or even on a voice recorder if there are literacy concerns. However, researchers will probably need to talk to people at the beginning and end of the process, and give regular reminders.

Surveys

Probably one of the most common qualitative methods is the open-ended question on surveys, usually delivered by post, online, or ‘guided’ by someone with a clipboard. Common challenges here are:

  • Encouraging people to write more than one word, but less than an essay
  • Setting questions carefully so they are clear, but not leading
  • Getting a good response rate and
  • Knowing who has and hasn’t responded

The final challenge is to make sure the responses are useful, and to integrate them with the rest of the project, especially any quantitative data.

Field notes

Sometimes the most overlooked, but most valuable source of information can be the notes and field diaries of researchers themselves. These can include not just where and when people did interviews or observations, but crucial context, like the people who refused to take part, and whether an interviewee was nervous. It need not just be for ethnographers doing long fieldwork; it can also be very helpful in organising thoughts and work in smaller projects with multiple researchers.

As part of a reflexive method, it might contain comments and thoughts from the researcher, so there can be a risk of autobiographical overindulgence. It is also not easy to integrate ‘data’ from a research diary with other sources of information when writing up a project for a particular output.

 

This is just a whistle-stop introduction, but more on each of these to follow…

A new Qualitative Research Blog

While hosted by Quirkos, the main aim for this blog is to promote the wider use of qualitative research in general. We will link to other blogs and articles (not just academic), have guest bloggers, and welcome comments and discussion.

Qualitative research is a very powerful way to understand and fix our world, and one of the main aims in developing Quirkos was to make it possible for a much wider range of people to use qualitative software to understand their data.

To do this, we need to make more people aware of not just how to do qualitative research, but the reasons and benefits of doing so. In the next few weeks, we’ll cover a basic overview of qualitative research, and some of the common methods for finding strong narratives.  We’ll also highlight some great examples from the academic literature, but also from wider sources, to show the power of understanding people’s stories.

Why qualitative research?

There are lies, damn lies, and statistics

It’s easy to knock statistics for being misleading, or even misused to support spurious findings. In fact, there seems to be a growing backlash at the automatic way that significance tests in scientific papers are assumed to be the basis for proving findings (an article neatly rebutted here in the aptly named post “Give p a chance!”). However, I think most of the time statistics are actually undervalued. They are extremely good at conveying succinct summaries about large numbers of things. Not that there isn’t room for more public literacy about statistics, a charge that can be levied at many academic researchers too.

But there is a clear limit to how far statistics can take us, especially when dealing with complex and messy social issues. These are often the result of intricately entangled factors, decided by fickle and seemingly irrational human beings. Statistics can give you an overview of what is happening, but they can’t tell you why. To really understand the behaviour and decisions of an individual, or a group of actors, we need in-depth knowledge: one data point in a distribution isn’t going to give us enough power.

Sometimes, to understand a public health issue like obesity, we need to know about everything from the supermarket psychology that promotes unhealthy food, to how childhood depression can be linked with obesity. When done well, qualitative research allows us to look across societal and personal factors, integrating individuals’ stories into a social narrative that can explain important issues.

To do this, we can observe the behaviour of people in a supermarket, or interview people about their lives. But one of the key factors in some qualitative research, is that we don’t always know what we are looking for. If we explicitly go into a supermarket with the idea that watching shoppers will prove that supermarket two-for-one offers are causing obesity, we might miss other issues: the shelf placement of junk food, or the high cost of fresh vegetables. In the same way, if we interview someone with set questions about childhood depression, we might miss factors like time needed for food preparation, or cuts to welfare benefits.

This open-ended, sometimes called ‘semi-structured’, or inductive analytical approach is one of the most difficult, but most powerful methods of qualitative research. Collecting data first, and then using grounded theory in the analytic phase to discover underlying themes from which we can build hypotheses, sometimes seems like backward thinking. But when you don’t know what the right questions are, it’s difficult to find the right answers.

More on all this soon…