Research problem

What are the grounds and implications of the use of co-speech gestures in consecutive dialogue interpreting? Does gestural production help decrease the interpreter's cognitive load?


Consecutive dialogue interpreting consists of bidirectional oral translation provided for users who do not speak the same language. It is the most frequent form of linguistic mediation in public service interactions with migrant users who have not yet mastered the language of the host country. Although interpreter-mediated encounters may seem language-centered, the act of interpreting is described as a multimodal and embodied cognitive activity in which gestures facilitate participatory meaning-making and help coordinate turn-taking.

The study aims to investigate how interpreters' gestural production influences their cognitive load and the users' satisfaction with the interpreting performance. The research design is based on non-invasive experiments in which students of interpreting departments perform mock dialogue interpreting tasks resembling interactions in medical, police and administrative settings.

Study 1, conducted onsite in Warsaw, focuses on collecting psychophysiological data from the interpreters, such as electroencephalogram (EEG) and heart rate variability (HRV) recordings, parameters known to indicate cognitive load, and cross-examining them with behavioral data gathered from video recordings. Study 2, conducted abroad (France, Spain), is based on remote interpreter-mediated interactions which are video-recorded and followed by questionnaires and self-reports designed to examine subjective indicators of cognitive load (e.g. level of fatigue, perceived difficulty of the task) and the appraisal of the interpreting performance delivered with or without spontaneous co-speech gestures.