Computer-aided qualitative analysis software (CAQDAS) or analysis “in your head”?

This is the second time a university professor has asked me for a study or talk covering computer-assisted qualitative analysis programs (Computer Assisted/Aided Qualitative Data Analysis Software, CAQDAS), such as Atlas.ti or NVivo.

In both cases I replied that, although I know these tools, I do not use them. Like most of my colleagues, I suspect, I analyze “in my head”: interpreting, reviewing the transcripts and holding analysis meetings with the team involved in the study.

This led me to ask on LinkedIn whether anyone doing qualitative market research used CAQDAS, or whether it was more of a tool for academia.

The response was massive: the post received more than 9,000 views and some 75 comments from professionals in many countries. Clearly I am not the only one who has asked this question 😉

The result? With a few exceptions, my suspicions were confirmed: most professional colleagues (outside the academic field) did not use them either. Why?

1) Glyn Griffiths pointed out that these computer-assisted techniques require two stages: transcription and then analysis. And in the commercial environment, our clients want fast, cheap answers.

In contrast, academic (more “scientific”) methodologies require that your analysis technique be repeatable: if a different person follows your instructions on the same data set, they should reach almost the same conclusions. That is, the analysis must be objective and not dependent on the observer, which demands a much more methodical approach.

As an example, he said: «Imagine an ecological study of a swamp. A trained botanist could spend about twenty minutes looking across the swamp and tell you which areas are wet vs. dry, which are more acidic, which are more shaded, etc., just by looking at the plants, based on years of experience. The more scientific, academic approach would instead spend three days systematically measuring everything in a repeatable way that anyone could copy. Working from transcripts allows a more objective, repeatable approach that even an inexperienced analyst could be taught, but it is costly and time-consuming. As research experts, clients pay us for our expertise and our ability to interpret and analyze quickly.

There are companies that claim to be able to derive “positive or negative sentiment” from content automatically, but in my experience this is highly unreliable. Search engine optimization means that most content includes hundreds of tags, for just about anything a business does. So an article about, say, a new COPD drug from GSK will also mention pretty much every other therapy area they work in. It can be quite difficult for a machine to figure out what an article is really about.

And when it comes to sentiment, you have to interpret the meaning. Positive or negative for whom? A new license for a GSK COPD drug is good for them but bad for their competitors, so you have to know from what point of view that sentiment is being expressed.

At the moment, there is no substitute for a human mind when it comes to accurately interpreting meaning».
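Glyn's point about perspective can be illustrated with a toy sketch (purely hypothetical, not how any of the tools mentioned actually work): a naive keyword-based scorer assigns a single polarity to a sentence, with no notion of whose point of view the sentiment is measured from.

```python
# Toy illustration (not a real product): a naive keyword-based
# sentiment scorer returns one score per sentence, regardless of
# *whose* perspective that sentiment applies to.

POSITIVE = {"approved", "license", "growth", "wins"}
NEGATIVE = {"recall", "lawsuit", "loses", "decline"}

def naive_sentiment(text: str) -> int:
    """Count positive keywords minus negative keywords."""
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

headline = "GSK wins a new license for its COPD drug"
score = naive_sentiment(headline)
# The scorer rates this headline as positive, yet the same news is
# negative from a competitor's point of view -- a distinction the
# keyword count cannot make.
```

The same sentence gets the same score no matter who the client is, which is exactly the interpretation gap Glyn describes.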

2) Jorge Marrón Abascal pointed out that he has always remained faithful to “intellectual craftsmanship”, since: «I work at the micro and meso levels; the macro does not currently fall within my object of study and work.

Furthermore, I am dedicated to participatory research, sociopraxis, participatory action research (PAR) and/or socioanalysis, so there are things the tool cannot anticipate when collective creativity is involved.

I am a defender of craftsmanship against the tyranny of Big Data. What I put into practice is what these tools offer: perception and anticipation, discussion and proposal».

3) María del Carmen Vargas qualified that «if we are talking about something descriptive, it is logical that the machine can do it better. But research as a whole tries to delve into what is not seen, and even to seek out data/information beyond the fieldwork carried out in order to complement and support findings, and then propose clear courses of action. That is how we add value to our work as researchers, and so far I have not seen a machine work outside of what it is programmed to do».

4) Coral Hernández Fernández pointed out that: «in the professional field, I believe its usefulness depends above all on the size of the project. For a study of a couple of groups where you have to deliver results in three days, no way! A colleague taking analytical notes during the group is worth more than all the software in the world».

And why is it used so much in the academic field? What are its advantages?


1) Jorge Cruz Cárdenas explained that: «with software like NVivo or Atlas.ti you can code directly on the video.

For example, you point to various segments of the video and code them as “new product apathy.” Then, when discussing the results with the client, you automatically retrieve all segments from all videos under this code.

On the other hand, if you decide this category is part of a larger one, you simply merge it into the other, because the segments are already identified.

Additionally, if the client or a colleague of yours doesn’t agree with a label, it’s as easy as typing the new one and saving.

In addition, you have many options for generating themes or conceptual structures».
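The workflow Jorge describes — tag segments, retrieve everything under a code, merge a code into a larger one, relabel — can be sketched as a minimal data structure. The names here are illustrative only, not an actual NVivo or Atlas.ti API:

```python
from collections import defaultdict

class Codebook:
    """Minimal sketch of qualitative coding: segments of recordings
    are tagged with codes; codes can be retrieved, merged or renamed
    without touching the segments themselves."""

    def __init__(self):
        self.codes = defaultdict(list)  # code -> [(video, start, end), ...]

    def tag(self, code, video, start, end):
        self.codes[code].append((video, start, end))

    def segments(self, code):
        # Retrieve all segments from all videos under this code.
        return list(self.codes[code])

    def merge(self, child, parent):
        # "Collapse" a category into a larger one.
        self.codes[parent].extend(self.codes.pop(child, []))

    def rename(self, old, new):
        # Relabeling is just moving the segment list to a new key.
        self.codes[new] = self.codes.pop(old)

cb = Codebook()
cb.tag("new product apathy", "interview1.mp4", 120, 145)
cb.tag("new product apathy", "interview2.mp4", 310, 330)
cb.tag("price concerns", "interview1.mp4", 400, 420)
cb.merge("price concerns", "new product apathy")   # collapse into larger code
cb.rename("new product apathy", "product resistance")  # relabel and save
```

After the merge and rename, `cb.segments("product resistance")` returns all three tagged segments — the point being that labels are cheap to change precisely because segments are identified once.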

2) Nicolás Sáiz Bravo shared some deliverables that can come from platforms like NVivo.

3) Jorge Andrade Ríos told us that: «without ignoring that the main thing is the analyst's knowledge and sharpness in analyzing the data, I believe natural language processing will sooner or later displace the vast majority of qualitative researchers.

For example, in our case, over the last year it is rare that we have not included text networks, semantic analysis, quanteda-style analysis or similar in the qualitative studies we have done.

For example, in workshops or group sessions with around 30 participants, you can generate natural semantic networks or text clusters. The truth is that it is hard for one person to master all of this, but not the concept. The arrival of this technology has been abrupt, and it takes a lot of effort to catch up».
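The text networks Jorge mentions can be sketched in a few lines, assuming the simplest possible definition: words that co-occur in the same response become linked nodes, and the strongest links suggest themes. Real tools (quanteda, NVivo) go much further, with stemming, weighting and visualization; this is only the core idea, with made-up example responses:

```python
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "is", "and", "to", "of", "it", "was"}

def cooccurrence_edges(responses):
    """Build a word co-occurrence network: each pair of distinct
    non-stopwords appearing in the same response adds one edge."""
    edges = Counter()
    for text in responses:
        words = sorted({w.lower().strip(".,") for w in text.split()}
                       - STOPWORDS)
        for pair in combinations(words, 2):
            edges[pair] += 1
    return edges

# Illustrative responses, as might come from a group of participants.
responses = [
    "The price is too high for the quality",
    "Good quality but the price is high",
    "Delivery slow and the price high",
]
edges = cooccurrence_edges(responses)
# The strongest edge links "high" and "price" -- a candidate theme
# that the analyst would then interpret and name.
```

Even this toy version makes the division of labor clear: the machine counts the links; deciding what a cluster of linked words *means* remains the analyst's job.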

4) Coral Hernández Fernández told us: «in large or complex projects, in accompanied fieldwork or in ethnographic studies where you will be reviewing recordings, or where participants have done prior tasks, label-based coding systems are usually the most useful.

For example, to instantly find different materials that deal with the same topic (a point from a recording, a collage or a photo of a task, etc.).

Also, the ability to categorize and recategorize, aggregate or disaggregate categories, etc., makes analysis much easier, even when you end up with ideas far from your initial ones.

And we have the software at the university for free!»

Will this software remain outside the professional world of qualitative market research?

Well, it may arrive sooner than we think.

Lately I have attended several innovation events (within Tech Spirit Barcelona) and been in contact with several companies developing artificial-intelligence and machine-learning software to analyze the more qualitative kinds of information (without much success so far, it seems).

I wonder if AI will be the definitive leap that CAQDAS needed.