Guest blog: Melisa Stevanovic and Elina Weiste on impossible content analysis

Two of Finland’s most active and productive young Conversation Analysis researchers, Melisa Stevanovic and Elina Weiste, tried their hand at an intriguing experiment: analysing what people said about doing CA. The result was a thoughtful article (not in ROLSI), but clearly there was more to it than that, so I was delighted when they agreed to do a guest blog here.

The title they suggested was “On the impossibility of conducting content analysis: Back story of our data-session paper”, which sets the scene tantalisingly…


Dr Melisa Stevanovic, Helsinki University


Dr Elina Weiste, Helsinki University

We wanted to do something entirely new to us. Since both of us had started our academic careers more or less straightaway with conversation analysis (CA), neither of us had ever conducted an ordinary content-analytic interview study.

This was certainly a gap in our research records—given that for some people this method appeared to be the only real way of conducting qualitative (social interaction) research.

Furthermore, in recent years we had been pursuing studies in university pedagogy and, in that context, had been exposed to the general idea of studying teaching and learning. We were thinking of the various ways in which CA was generally taught in our university and how, despite the many CA courses offered, it was the CA data sessions that ultimately worked to socialize newcomers into CA research practice. All of this led us to a decision: we wanted to conduct a study on what CA folks generally think about CA data sessions from a pedagogical point of view. What a perfect opportunity to fill the content-analysis gap in our lives!

A simple two-group design

Since we thought that the opinions of CA experts and novices could be essentially different from one another, we decided to conduct separate focus-group interviews for the experts and the novices. We also decided to use a succession of stimulus materials as interview prompts. To generate the stimulus materials, we audio-recorded one real CA data session. From this recording we selected five (in our opinion) particularly interesting segments, hoping that they would inspire the focus-group participants to talk.

For instance, we selected a segment where someone’s CA observation was met with total silence, and a segment where someone’s observations focused solely on the personal traits and other non-visible qualities of the participants in the data. We had much fun anonymizing the stimulus materials by altering the speakers’ pitch levels in Audacity. It struck us that the same analytic observation generated a very different impression depending on whether the chosen pitch level represented a female or male voice. Thus, as a prompt to generate talk about possible status hierarchies among CA data-session participants, we created two different audio clips from one speech segment to be played for the focus groups.
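For the technically curious: the kind of pitch-based anonymization we did by hand in Audacity can also be scripted. Below is a minimal sketch in Python using the librosa and soundfile libraries (not what we actually used); the file names and semitone shifts are hypothetical, chosen only to show how one might produce a higher- and a lower-pitched version of the same clip.

```python
# A minimal sketch of pitch-based voice anonymization (assumed libraries:
# librosa and soundfile; file names and shift amounts are hypothetical).
import librosa
import soundfile as sf

def pitch_anonymize(in_path: str, out_path: str, n_steps: float) -> None:
    """Load a clip, shift its pitch by n_steps semitones, and save the result."""
    y, sr = librosa.load(in_path, sr=None)  # keep the original sample rate
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, y_shifted, sr)

# Two versions of the same segment: one shifted up (toward a higher-sounding
# register) and one shifted down (toward a lower-sounding one).
pitch_anonymize("segment_3.wav", "segment_3_higher.wav", n_steps=4.0)
pitch_anonymize("segment_3.wav", "segment_3_lower.wav", n_steps=-4.0)
```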

How do you actually do interviewing?

Shortly before the first interview, it actually struck us that we did not necessarily know how to carry out an interview. Melisa, in particular, became anxious when she realized that she could sabotage the whole interview simply by talking too much. Of course, for the most part our interview protocol consisted of the presentation of stimuli and related open questions such as: “What kinds of thoughts does the segment you just heard elicit in you?”. However, the question of whether and when follow-up questions would be needed seemed trickier, and riskier, since it provided an opportunity for the interviewer to slip into the conversation. So we decided that Elina, who is used to the long silences associated with psychotherapeutic interaction, should always be the one to ask the first follow-up question, while Melisa could ask the second and third if needed.

A rogue anecdote about Paul Drew

The interviews went very well. The stimulus materials functioned as we had hoped: after listening to a data-session clip, the participants started to talk, first somewhat hesitantly but then becoming more and more relaxed and engaged in a free-flowing conversation. They also generated talk about the very topics we had hoped they would. This was the case, for instance, for the female-male voice prompt described above. After the prompt, the participants discussed the issue of gender only briefly but then, without us asking anything, moved on to consider the possibility of other types of status hierarchies that might prevail in CA data sessions. Our division of labor regarding the follow-up questions also worked pretty well. For our part, we managed to keep our own talk to a minimum (except for one anecdote about Paul Drew told by Melisa).

And how do you actually do content analysis?

Then, finally, it was time to engage in proper content analysis! What, then, would be the themes that CA folks talked about? In our data, these appeared to be: the structural organization of the data session, the determination of the focus line, the making of notes, the “round” of first analytic observations, and – ultimately – also something about the pedagogical function of the data sessions. Interesting themes, certainly! Curiously, though, we observed a remarkably high correlation between these “themes” and our interview protocol. Somehow, it did not feel quite right to write a research paper reporting the (equal) occurrence of all these themes in both the expert and novice groups.


Paul Drew, Loughborough University, seemed to crop up a lot

So, we thought that maybe it would be more worthwhile to consider those less interviewer-led parts of the group members’ talk that came across as spontaneous. What, then, were the topics voluntarily raised by the CA folks? In our data, these included: interjections, intersexuality, California, medical consultations, PhD students, speculations about intentionality, and – evidently – Paul Drew. Again, we got a weird feeling that a paper reporting the occurrence of these themes would raise more questions than it answered. At this point, at the latest, we began to realize that the idea of sticking to the mere content of the participants’ speech might not be sensible. In the end, we could not come up with any sane way of doing it.

After this realization, we “loosened up” our approach a little. We decided that the following four categories, apparent in the participants’ tellings, would also count as “content”: ostensibly neutral descriptions of practices, stories with an affective stance, personal judgments of practices, and generic normative evaluations of practices. We realized that the CA novices were quite eager to tell narratives of their first data-session experiences, while the tellings of the CA experts were more focused on describing the adventures of the international gurus in the field. However, we did not really know what to make of this observation. During the subsequent rounds of analysis, we also started to consider the accessibility of the experiences told and the valence of the tellings, and – at the point at which we finally gave in to our old instincts – the reception of the tellings by other participants in the group. This led us to a compromise, which has now been reported in our Learning, Culture and Social Interaction article, “Conversation-analytic data session as a pedagogical institution”.

So, in the end, we were really happy that we had managed to find a way to combine content analysis with CA, but we had to admit that the leopard doesn’t change its spots.