Author Archives: charlesantaki

Guest blog: doing a data-session ‘remotely’

Some researchers are lucky enough to work in a community of like-minded scholars, with whom they can easily chat, meet up and collaborate; when that’s not the case, the isolation can be damaging. That’s why it’s so heartening to see a group of UK postgraduates inaugurate a regular “remote” data session, bringing people together who would otherwise be apart. This lively blog by Marina Cantarutti, Jack Joyce and Tilly Flint gives the story.


Marina Cantarutti


Jack Joyce


Tilly Flint






The Remote Data Sessions (RDS) is a monthly online data session coordinated by Marina (University of York), Jack (Loughborough University) and Tilly (Ulster University). The initiative came about at the EMCA Doctoral Network meeting in September 2017: as the network grows increasingly popular, meetings need more parallel sessions, which means missing out on seeing some of our peers’ data – something that seems incongruous with the ethos of the network. In this guest blog post, we wanted to share some of the reasoning behind the RDS, explain the practicalities of how it works and the technical hitches we have encountered, and take a brief look at the future of the sessions.

Great data sessions – if you’re in the right place

Doing Conversation Analysis (CA) – and being a conversation analyst, whatever your discipline – means that at one time or another you’ll find yourself in a data session. In CA, a data session involves poring over and scrutinizing audio and/or video recordings with colleagues to discuss some of the interactional features occurring within them. Data sessions are central to CA, and they “constitute a central locus of socializing new members into the CA community” (Stevanovic & Weiste, 2017: 2). We (the coordinators) are very fortunate to come from universities with regular and well-attended data sessions (DARG and CASLC; see elsewhere on this site for a list of other sessions); but for some, attending a data session means travelling to another institution – which isn’t usually feasible on a regular basis.

The solution, and next logical step for collaboration, was to utilise EMCA’s strong online presence to plan for an online synchronous experience for (not exclusively) EMCA PGRs. The first step was deciding on a platform which could facilitate at least 15 participants, suited CA data sessions, allowed us to share and play data to all participants at the same time, and was easily accessible to anyone across the world. This ruled out Skype, Google Hangouts and Adobe Connect; the closest platform to our criteria was BigMarker.

Learning how to wrangle BigMarker

BigMarker is a web-based platform, so it’s accessible to anyone with a reliable, unfiltered internet connection. It has features such as ‘hands up’, so the coordinators and/or the presenter can organise turn-taking to reduce the echo or overlap that can happen in multiparty video calls. It also includes a chat box, so if someone’s audio fails we can send instant messages, and BigMarker enables the simultaneous playing of data to all participants, which means the presenter can play and pause the recording at specific moments if they so choose. It is, however, not without its problems, and we have encountered most of them.


Our first session was marred by technical hitches: it took us around 45 minutes just to play the data to the participants, and longer to organise turn-taking. Despite the problems, when the platform worked as we expected it to, the session went really well. Over the following sessions we gradually ironed out the problems and established a step-by-step ‘manual’ for presenters and ourselves to prevent and remedy any potential hitches. One remaining problem, and the biggest drawback, is that it is a paid-for platform: we are currently hosting the sessions on the personal account of one of the coordinators, which means we run the risk of losing access to the platform in the future, as free features tend to become premium features over time. This will eventually become something to consider for the continuity of the project.

Straightforward for the participants

For the time being, we are continuing with this account and platform. For attenders and presenters, the process is really straightforward. Once the presenter has applied to be session leader and a date has been settled, all they need to do is test the webinar room in their own time. They can then focus on providing a transcript, and on editing and anonymising their video file. The coordinators take care of the virtual room booking, set up the registration process, and manage all communication and technical issues. Sessions last 1 hour and 45 minutes, with the first 15 minutes devoted to technical testing and familiarisation with the platform, as well as to introductions. After the introduction by the leader and the (re)playing of the data, all participants are given a slot to make their own contribution, and immediately after that the floor is open for further comments. And no session is complete without a group photo!

Recent webinar group photo: Magnus Hamann, Jack Joyce, Tilly Flint, Melissa Bliss, Hongmei Zhu, Marc Alexander, Emma Greenhalgh and Marina Cantarutti

Most of our sessions have been fully booked so far; we capped the number of ‘participants’* at 15 so that everyone has an opportunity to contribute to the data during the session. Almost all of our participants have been PhD students, representing 23 institutions from across the globe; our only requirements are Google Chrome, a reliable internet connection, and at least a basic understanding of CA / Discourse Analysis.

If you’ve never participated in a data session before and would like to attend or share data, you can read Saul Albert’s blog post on what a data session entails. The next session will be held in May; if you’d like to offer some data for that session then please do get in touch here.


*A “participant” can be an individual with their own mic and/or camera, or a group of individuals in the same location, participating with one camera.

Stevanovic, M. & Weiste, E. (2017). Conversation-analytic data session as a pedagogical institution. Learning, Culture and Social Interaction, 15, 1-17. (Also available as a guest blog on this site.)


Guest blog: Leelo Keevallik on making grammar real

As a featured debate article in ROLSI vol 51(1), I invited Leelo Keevallik to showcase her argument that traditional conceptions of grammar needed to change: to take the body, and its deployment in the unfolding of turns, seriously. I’m delighted that she also accepted an invitation to write a guest blog, reflecting on how she came to this challenging, and tantalising, new conception.

Leelo Keevallik

Leelo Keevallik, Linköping University

My training as a linguist started behind the Iron Curtain, following a very traditional philological curriculum with no course literature in English. But I was fascinated by the neat grammatical paradigms, the prudent morphology tables, and the precise categorizations of parts of speech.

Getting real 

The first revelation that language had something to do with real life came with a thick green book called Directions in Sociolinguistics (ed. by John Gumperz & Dell Hymes) that was brought to Tartu by a fellow student after a term abroad. It read like a good collection of short stories. William Labov’s variationist method made a special impression on me, and my first grant was for a project to record every inhabitant of a small island.


Fieldwork and farm work

The speakers had to sit down close to a tape recorder and talk for as long as possible. We would even help them to get some urgent farm work done so that they would have spare time for us. It was exciting to study language variation in relation to social parameters. Indeed, young boys hoping to become fishermen sounded more like their parents than the young girls planning to educate themselves on the mainland.

Twenty years later I returned to the island, this time not to force the people to sit down with me but to video record their working together, going about their everyday business. I am still interested in language, but only as it makes sense in relation to action, collaboration, and embodied behavior. The entire backdrop of my research has changed dramatically since I learned how to deal with the complexities of real-time face-to-face interaction and discover social order in it. Many of my favorite professional moments occur at data sessions with close colleagues in Linköping, while scrutinizing a few seconds of video-recorded conversation.

Syntax and the body

Once in a while, though, I miss explicit discussions of the difference that a prefix vs. a suffix, or ergative vs. nominative-accusative syntax, makes. The recent paper in ROLSI reconciles my interests in both the routinized and the occasioned, the verbal and the embodied, and is an attempt to talk to a broader circle of linguists about real-life interaction. If you take the time to record actual behaviour, and scrutinise it carefully, leaving preconceptions about grammar to one side, you will find that people actually complete their clauses with embodied demonstrations and continue building syntactic structures after them. We can in fact choose to add a relative clause, or maybe even a comparative suffix, as an immediate reaction to what the others are currently doing – or not doing, such as not yet straightening their spines enough in a Pilates class. The figures below show the progression of the utterance.

[Two figures from the paper showing the progression of the utterance]

Estonian, French, German, Italian, Japanese, English, Finnish and Swedish

While targeting embodied interaction, the paper still treasures linguistic diversity. Some of the patterns are illustrated in Estonian, one of the smallest languages ever analyzed on the pages of this journal [1].

Realising the range of languages that we have collectively worked on without ignoring the body was one of the most gratifying aspects of taking stock of the current research situation. Another one was to be able to pull together threads from such a huge number of studies, even though the final text had to be a balancing act between them. And it was a particular pleasure and honour to have my ideas worked through, and taken yet further by two eminent scholars, Elizabeth Couper-Kuhlen and Jürgen Streeck. I hope readers of this blog will have the time and inclination to follow up the whole debate in ROLSI.

[1] And very welcome (Ed.)

Guest blog: Talking with Alexa at home

I imagine that many interaction researchers will have been curious about how a voice-activated, internet-connected device might be integrated (or not) into conversations at home. Martin Porcheron, along with Stuart Reeves, Joel Fischer and Sarah Sharples (all at the University of Nottingham), took the next step and did the research. Here Martin and Stuart explain how the research was done…


Martin Porcheron


Stuart Reeves

Voice-based ‘smartspeaker’ products, such as the Amazon Echo, Google Home, or Apple HomePod have become popular consumer items in the last year or two. These devices are designed for use in the home, and offer a kind of interaction where users may talk to an anthropomorphised ‘intelligent personal assistant’ which responds to things like questions and instructions. The widespread adoption of this new kind of interaction modality (i.e. voice) provided us with a great opportunity to consider how we could bring ethnomethodology and conversation analysis to bear on talk with and around such devices.

In this guest blog post, we wanted to give some background to our study of these devices and to discuss something we think might interest the ROLSI community. We recently published our findings as a paper to be presented at CHI 2018, the ACM conference for Human-Computer Interaction. We also posted a couple of more easily digestible elaborations of our findings and data.

There are many different ways one could study how interactions with such a device unfold. Often people do lab studies or observational studies. For us, however, the key considerations were: (1) getting the most ‘naturalistic’ interactions of people with such a device, and (2) recording the conversational context in which those interactions take place – before someone says ‘Alexa’ or ‘OK Google’ to wake up the device, as well as capturing what unfolds thereafter.

To achieve this, we provided a number of households with an Amazon Echo for about a month and also gave them a custom device (built by Martin) to record interactions with the Echo.


Martin’s bespoke device to record interactions with the Echo

This device, the ‘Conditional Voice Recorder’ (CVR), is essentially a Raspberry Pi (a credit-card-sized, but functionally complete, computer) with a conference microphone stuck on the top. It is always listening — much like the Amazon Echo — but differs in a range of ways. Firstly, it has lights to show when it is listening and when it is recording. Secondly, we added a button to enable or disable recording, as we wanted participant households to feel comfortable with the study and in control of the data being collected.

What it records 

To collect contextual information of how an interaction was occasioned, the CVR listens continuously for an interaction with the Amazon Echo (the Amazon Echo interaction always starts with the word ‘Alexa’) and keeps the last minute of audio in the memory of the device. When an interaction with the Echo starts, it saves the last minute of audio to an internal memory card, and also records for one further minute. If people use the Echo again in that minute, recording is extended.
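The buffering scheme above can be sketched in a few lines of Python. This is a toy model of the logic only — the class and variable names are ours, not the actual CVR code, which handles real audio capture on the Raspberry Pi. A ring buffer holds the last minute of audio; hearing the wake word flushes that buffered context to storage and opens a one-minute recording window; a further wake word re-extends the window.

```python
from collections import deque

CHUNK_SECONDS = 1    # granularity of the simulated audio chunks
BUFFER_SECONDS = 60  # keep the last minute of audio in memory
POST_SECONDS = 60    # keep recording for a minute after the wake word


class ConditionalRecorder:
    """Toy model of the CVR's conditional buffering (not the real implementation)."""

    def __init__(self):
        # ring buffer: old chunks fall off the front automatically
        self.ring = deque(maxlen=BUFFER_SECONDS // CHUNK_SECONDS)
        self.saved = []     # chunks written to the "memory card"
        self.remaining = 0  # post-wake-word seconds still to record

    def feed(self, chunk, wake_word_heard=False):
        if wake_word_heard:
            if self.remaining == 0:
                # new interaction: flush the buffered minute of context
                self.saved.extend(self.ring)
                self.ring.clear()
            # a (repeated) wake word (re)opens the recording window
            self.remaining = POST_SECONDS
        if self.remaining > 0:
            self.saved.append(chunk)
            self.remaining -= CHUNK_SECONDS
        else:
            self.ring.append(chunk)
```

Feeding in simulated one-second chunks shows the behaviour: after two minutes of background audio, a wake word saves exactly the preceding minute of context plus everything that follows until the window lapses or is extended.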


We encountered many challenges with the CVR, not least detecting the wake word across different accents. In much the same way that commercial systems struggle with people’s accents, so did ours. Fortunately, we had built in a way to update the CVR remotely once it was deployed in people’s homes, and we could also adjust settings remotely to make the device more – or sometimes less – sensitive to detecting ‘Alexa’. Another concern was what would happen if the recorder crashed, or if people left it turned off without realising (something we found was easy to do during development) — the solution here was to turn the device off and on again automatically overnight.
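The post doesn’t say how the nightly restart was implemented, but on a Raspberry Pi the conventional way would be a single scheduled job; a hypothetical crontab entry might look like this:

```shell
# Hypothetical crontab entry (not the authors' actual configuration):
# reboot the Pi at 4 a.m. every night, so a crashed or hung recorder
# comes back up in a known-good state before the household wakes.
0 4 * * * /sbin/shutdown -r now
```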

How to analyse it?

As any CA researcher who works with audio-only recordings will know well, one of the challenges during our research was working out situational relevance, particularly of interactions that seemed not to involve the Echo but were nevertheless occurring alongside it (a video camera would have been useful!). A mitigating factor here, we found, was that participants routinely produced accounts of their interactions with the Echo and of how these related to the current course of their activities in the home. Participants tended to audibly orient to (as well as embed) their interactional work with the Amazon Echo both alongside and ‘inside’ various other typical tasks of home life: cooking, watching TV, or eating dinner together.

To give a flavour of the sort of data we have collected, consider the short (but very rich) fragment below of a family using the Amazon Echo, taken from our paper. Here, Susan (the mother; all names changed) announces to the group her desire to play a particular game (called ‘Beat the Intro’) with the Amazon Echo, with a negative assessment from Liam (the son) overlapping Susan’s almost immediate instigation of the “wake word” to the Amazon Echo. Carl (the father) approves, inserting a quick ‘yeah’ in the pause between the wake word and the subsequent request by Susan:

[Transcript fragment from the paper]

The request fails, however, but not before Susan turns to Liam and instructs him to keep eating his food, with Carl providing support. Susan then repeats the request, displaying her assessment that the prior request has failed…

By adopting a CA approach, we were very quickly able to draw out some of the nuanced ways in which interaction with the Amazon Echo gets sequentially interleaved with other ongoing activities. Naturally, however, we also learned much about the character of our participants’ home life just by listening to these interactions with and around the Echo. Through the audio recordings of Amazon Echo use alone, one begins to understand habits such as meal times, music tastes, TV interests, shopping habits, and so on. We have plans to release some of our audio for others to use in research, but ensuring that we can maintain the confidentiality and anonymity of participants makes this a significant challenge.

In summary, we started out with the rather exciting challenge of trying to understand how interaction with new voice-based devices is practically achieved amidst the complex multiactivity setting of the home, and we faced a number of challenges in the process. Some of these we overcame with technology, such as running collected data through further automatic speech recognition software; others we addressed by analysing the audio recordings and drawing on EMCA.

Hopefully this guest blog provides some of the background to our work. If you’re interested in the outcomes of our analyses, we encourage you to check out the other posts linked to above, or the paper itself.


Guest blog: Melisa Stevanovic and Elina Weiste on impossible content analysis

Two of Finland’s most active and productive young Conversation Analysis researchers, Melisa Stevanovic and Elina Weiste, tried their hand at an intriguing experiment: analysing what people said about doing CA. The result was a thoughtful article (not in ROLSI), but clearly there was more to it than that, so I was delighted when they agreed to do a guest blog here.

The title they suggested was “On the impossibility of conducting content analysis: Back story of our data-session paper”, which sets the scene tantalisingly…


Dr Melisa Stevanovic, Helsinki University


Dr Elina Weiste, Helsinki University


Guest blog: Jason Turowetz on “I just thought…”

“I just thought… ” is one of those phrases whose meaning we think we know, but there are intriguing subtleties in what people do with it in conversation. In a recent article for the journal, Jason Turowetz delved into some of its main uses. Here he gives the background to the story. 

Jason Turowetz

My article on ‘I just thought’ formulations has its origins in a study of speed dating I conducted with a colleague, Matthew Hollander, in 2009, when we were graduate students at the University of Wisconsin-Madison. It seems a long way back, but that shows how a phenomenon can lodge in your head and inspire a continuing thread of research.

Guest blog: Gareth Walker on how acoustic data are represented

Quite often a ROLSI article touches on a matter that will interest a very wide range of readers, and Gareth Walker‘s account of how acoustic data are represented is a very good example. The range of representations is wide, and not all are equally good for the same things; some may even be misleading. I’m delighted that Gareth has agreed to go into some of the thinking that prompted him to write the piece.

Guest Blog: The 8th biannual EM/CA Doctoral Network meeting

Twice a year, UK postgraduates meet to thrash out issues in ethnomethodology and Conversation Analysis, generously hosted by staff at a university. The second meeting this year was held at Newcastle. Jack Joyce tells the story, and Marc Alexander muses on the pros and cons of parallel sessions.

Jack Joyce, Loughborough DARG

The 8th biannual EMCA Doctoral Network event was hosted at Newcastle University. It brought the marvellous event to the land of Applied Linguistics, and gave us EMCA researchers a further opportunity to explore the different ways in which EMCA is employed around the UK. The collegial and supportive spirit highlighted at past EMCA Doctoral Network meetings was again present, giving us the chance to meet old friends and make new connections.