
Anno IV - Numero 1


Anno IV/2019

Direction
Monika Pelz (Scuola Superiore per Mediatori Linguistici)

Vice-Direction
Carlo Eugeni, Eleonora Romanò

Scientific Committee
Daniel Dejica-Cartis, Politehnica University of Timisoara - Romania
Olga Egorova, Astrakhan State University - Russia
Najwa Hamaoui, Université de Mons - Belgium
John Milton, Universidade de São Paulo - Brazil
Pilar Orero, Universitat Autònoma de Barcelona - Spain
Franca Orletti, Università degli Studi “Roma Tre” - Italy
Luca Serianni, Università degli Studi “La Sapienza” di Roma - Italy

Editorial Committee
Beth De Felici, Alessandra Carbone, Fabiana Mirella Garbosa, Giulia Kado, Alina Kunusova, Shijie Mao, Issam Marjani, Fiammetta Papi, Paolo Tomei, Silvia Velardi, Yukako Yoshida

In collaboration with the Scuola Superiore per Mediatori Linguistici di Pisa

[email protected]

Copyright © 2019 CoMe Studi. All rights reserved.


CONTENTS

Curricula-wise similitudes and discrepancies between translation competences (TC) across the European Union and in Romania. A case study on the cultural and linguistic sub-competences
MARIA CRISTINA MIUȚESCU, p. 4

Curriculum Design in Diamesic Translation: For the Didactics of Real-Time Intralingual Subtitling
CARLO EUGENI, p. 17

Multimodal corpora input to translation training
DIANA OŢĂT, p. 29

Identifying parameters for creating Easy to Read subtitles
ROCÍO BERNABÉ, ÓSCAR GARCÍA, p. 43

La traduction audiovisuelle adaptée aux vidéos du web : sous-titrage vs voice-over
FLORIAN LEONI, p. 60

Subtitling of The Star movie into Romanian: between the literal and the non-literal
PAULA-ANDREEA GHERCĂ, p. 84

Neural aspects in simultaneous interpreting: The role of music in intralingual and interlingual activities
MARIA LAURA MANCA, MARTINA COSCI, MICHELANGELO MAESTRI, GIULIA RICCI, GABRIELE SICILIANO, ENRICA BONANNI, p. 91


Curricula-wise similitudes and discrepancies between translation competences (TC) across the European Union and in Romania. A case study on the cultural and linguistic sub-competences

MARIA CRISTINA MIUȚESCU
West University of Timisoara – [email protected]

Abstract

In the translation industry, technological change has had an ever-increasing impact on how translation services are offered. However, human intelligence, knowledge, and skills are still key factors in delivering high-quality translations. Translation competences (TC) that can be displayed only by professional translators, and not by machine translation applications, are still extremely important nowadays and have a major impact on the production of an accurate translation. Translation competence (TC) is a superordinate concept, or macro-competence/super-competence, that integrates a wide range of sub-competences within its spectrum. The following sub-competences, listed in both the EMT Competence Framework and the PETRA-E Competence Framework, can be regarded as belonging to the academic training of a professional translator: the cultural competence (being able to identify and deal with lexis-based difficulties and differences between the source culture and the target culture, to troubleshoot cultural gaps, and to find the best strategies and procedures for rendering them in the target language in a way that is easily accessible to the target readership); the personal and interpersonal competence, known as “soft skills” (the ability to perform the job of a professional translator in a multitude of working formats, to comply with deadlines, and to adapt effectively to the working schedule); and the evaluative competence (being able to justify the core strategy, translation methods and procedures applied in order to render the text in the target language, and to assess one’s own translations by contrasting them with other people’s solutions).

Considering that these competences are key factors in producing an effective and accurate translation, it is essential to see to what extent specialised Translation Studies MA programmes taught at the West University of Timisoara, such as the Theory and Practice of Translation MA Programme, equip their students with these sub-competences both theoretically and practically. The present article limits its scope to the cultural and linguistic sub-competences.

Keywords: acquisition of translation competence (ATC), ATC-based curriculum, cultural sub-competence, EMT Competence Framework, linguistic sub-competence, PETRA-E Competence Framework, Romanian Translation Studies MA Programmes

1. Introduction

This study explores the range of similitudes and discrepancies between the level of translation competence (TC) formulated in the ATC-based curricula of specialised EU Translation Studies MA programmes and the range of translation competences (TC) targeted by the ATC-based curricula of specialised Romanian Translation Studies MA programmes, with primary emphasis laid upon the cultural and linguistic sub-competences. This case study is grounded in one primary data-mining methodological tool, i.e. a questionnaire disseminated among students enrolled in the Theory and Practice of Translation MA programme (Faculty of Letters, History, and Theology, West University of Timisoara).

It builds on previous research concerning the evolution of translation competence (TC) and the design of ATC-based curricula, attempting to identify the level of translation competence (TC) required at a European level (considering the European Master’s in Translation Competence Framework 2017 and the PETRA-E Competence Framework, both issued by EU bodies) as compared to the one attained by students enrolled in Romanian Translation Studies MA programmes (Timisoara, Romania).

2. Competence vs. skill in Translation Studies

According to the EMT group, the concept of competence can be defined as the “proven ability to use knowledge, skills and personal, social and/or methodological abilities, in work or study situations and in professional and personal development” (EMT FRAMEWORK 2017:3), while Lasnier (2000) describes it as “a complex know-how to act resulting from integration, mobilization and organization of a combination of capabilities and skills […] and knowledge […] used efficiently in situations with common characteristics” (LASNIER 2000, cited in REZA ESFANDIARI et al. 2015:45), and González and Wagenaar (2003) define it as “a combination of set skills, knowledge, aptitudes and attitudes and […] disposition to learn as well as know-how” (GONZÁLEZ, WAGENAAR 2003:10, cited in REZA ESFANDIARI et al. 2015:45). Edwards and Csizer (2004) point out that competences are “a type of knowledge that learners possess, develop, acquire, use or lose” (EDWARDS, CSIZER 2004, cited in KAMINSKIENE, KAVALIAUSKIENE 2012:139), while Kasper (1997) states that competence “cannot be taught, but students should be provided with opportunities to develop their pragmatic competence” (KASPER 1997, cited in KAMINSKIENE, KAVALIAUSKIENE 2012:139). Furthermore, by drawing parallels to other concepts denoting the same reality, some scholars have also used competence “as a (near) synonym to expertise” (SCHWIETER, FERREIRA 2014:6), which has been defined as “the bulk of cognitive resources and skills leading to […] superior performance” (SCHWIETER, FERREIRA 2014:3). This analogy has been pointed out by authors such as Ehrensberger-Dow and Massey, Göpferich (2013) and PACTE (2003) (SCHWIETER, FERREIRA 2014:6).

On a more particular note, several authors have attempted to define the concept of translation competence (TC) over the years, most notably the members of the PACTE research group (2000, 2003) – “the underlying system of knowledge and skills needed to be able to translate” (PACTE 2000:100, cited in REZA ESFANDIARI et al. 2015:45), Kelly (2005) – “the macro-competence that comprises the different capacities, skills, knowledge and even attitudes that professional translators possess and which are involved in translation as an expert activity” (KELLY 2005:14-15, cited in REZA ESFANDIARI et al. 2015:46) and Schaffner (2000) – “a complex notion which involves an awareness of and conscious reflection on all the relevant factors for the production of a target text (TT) that appropriately fulfils its specified function for its target addressees” (SCHAFFNER 2000:146, cited in ZOU 2015:788).


Broadly speaking, it must be noted that almost all the existing definitions of competence feature the term skill. It can hence be inferred that, at least in a translational context, a skill is an integrated part of a competence or, to put it another way, a concept subordinate to the superordinate concept of competence.

3. Translation competence (TC) – macro-competence vs. sub-competence

Generally speaking, translation competence (TC) may be analysed on a dual level: on a macro-level, as a superordinate concept, macro-competence (KELLY 2005:14, cited in REZA ESFANDIARI et al. 2015:46) or super-competence, comprising many other sub-competences within its spectrum – since the beginning of the 1990s, translation competence (TC) has been described as “a multi-componential competence which comprises of sets of technological, cultural or linguistic skills” (REZA ESFANDIARI et al. 2015:45) – and, on a micro-level, as a subordinate concept, overlapping with the concept of sub-competence, component or skill.

Two translation sub-competences of paramount importance – the cultural and linguistic sub-competences

Starting with the 1970s, many translation scholars have investigated the components that lie behind the concept of translation competence (TC). The 1970s, for instance, feature scholars pointing out three major competences that they consider mandatory in the professional translator’s training schemata: source language receptive competence, target language reproductive competence and super-competence (HURTADO ALBIR 2017:19). While the SL receptive competence refers to the ability to decode and understand source texts, the TL reproductive competence refers to the ability to use linguistic and textual resources in the target language. The super-competence can, however, be considered the result of combining these two competences with the ability to transfer messages between the linguistic and textual systems of the source and target cultures (HURTADO ALBIR 2017:19).

The 1980s and the 1990s bring along new reflections on the sub-competences that make up the superordinate notion of translation competence. According to Wilss (1982:58, cited in REZA ESFANDIARI et al. 2015:45), translation competence requires “an interlingual supercompetence”, while Bell claims that translation competence “includes the set of knowledge and skills possessed by the translator so as to perform a translation” (BELL 1991:43, cited in REZA ESFANDIARI et al. 2015:45). Delisle, on the other hand, suggests four competences required in order to translate – linguistic competence, encyclopaedic competence, comprehension competence and reformulation competence (DELISLE 1980:235, cited in HURTADO ALBIR 2017:19). Roberts (1984) expands this number to five, listing the following competences as mandatory when translating – linguistic competence, transfer competence, methodological competence, thematic competence and technical competence (HURTADO ALBIR 2017:19).


Nord (1988/1991, 1992) emphasizes three primary components in translation competence – transfer competence, linguistic competence and cultural competence – to be mastered on a bilingual level, i.e. in the source and target language (NORD 1992:47, cited in HURTADO ALBIR 2017:19). In his turn, Neubert also mentions linguistic and transfer competences, adding subject competence to these two (NEUBERT 1994:412, cited in HURTADO ALBIR 2017:20). Hurtado Albir highlights five key competences in her studies – linguistic competence, extralinguistic competence, transfer competence, professional competence and strategic competence (HURTADO ALBIR 1996, cited in HURTADO ALBIR 2017:20). Risku (1998) suggests four sub-competences that are required in the translation process – “setting a macrostrategy, integrating information, planning and decision-making and self-organisation” (HURTADO ALBIR 2017:21).

It can hence be concluded that almost all translation scholars have agreed upon two primary sub-competences or components required when translating: linguistic competence, understood as the ability to comprehend the SL and to produce an equivalent of the original message in the TL (HURTADO ALBIR 2017:20), and transfer competence, defined as the overall understanding of the original text and its reformulation in the target language based on the “purpose of the translation and characteristics of the target reader” (HURTADO ALBIR 2017:20).

4. Research methodology

Sampling method

The research detailed in this article is a case study based on the design, administration, and analysis of one questionnaire distributed among second-year students enrolled in the Theory and Practice of Translation MA Programme (Faculty of Letters, History, and Theology, West University of Timisoara) during the 2018/2019 academic year, seeking to get an overview of the level of translation competence acquired (ATC) during the two-year span of the master’s programme. It must be reiterated that the questions based on the two translation sub-competences under scrutiny, i.e. the cultural and linguistic ones, are exclusively based on written translation and not interpreting, primarily because the latter might involve additional skill-specific tasks that do not fall within the scope of this study.

It must be noted that a similar study was carried out three years ago (2016/2017) by two professors delivering lectures in the same master’s programme; it attempted to explore the applicability of the PETRA-E Framework (PUNGĂ, PERCEC 2017:146) through a wide range of translation tasks targeting several sub-competences, most notably the cultural, heuristic, linguistic, textual, and transfer sub-competences. Their study focused largely on the PETRA-E Framework (my research also considered the EMT Framework) and resorted to a text as the primary data-mining tool (the tasks included in my questionnaire were generalised), namely The Gift of the Magi by O. Henry, on which first-year MA students were invited to work. The results of their research concerning the cultural and linguistic sub-competences will be briefly mentioned when analysing the ones based on this questionnaire.


Method-based frameworks

As the purpose of this study is to investigate whether or not the ATC-based curricula of Romanian Translation Studies MA Programmes such as the Theory and Practice of Translation Programme comply with the EU standardised levels of translation competence (TC), I will primarily look at two recently launched translation competence framework projects designed by European Union bodies – the European Master’s in Translation Competence Framework (2017), commonly referred to as the EMT Framework, and the PETRA-E Competence Framework. The main reason for resorting to these two frameworks is the exhaustive set of skill-specific items that surround the cultural and linguistic sub-competences.

The European Master’s in Translation Competence Framework (2017) is undoubtedly one of the most ambitious and comprehensive outlines of its kind in the field of Translation Studies, targeting and analysing the range of translation competences (TC) that a professional translator-to-be should possess upon completing his/her MA programme. It is extremely useful for MA programmes focusing on competence-based training (CBT) as it provides competences related to five main areas: the core area, translation, as well as four related areas – language and culture, technology, personal and interpersonal skills, and service provision.

As discussed before, translation competence (TC) must be observed on a dual level: on a micro-level, as a task-specific competence or sub-competence, and on a macro-level, as a macro-competence or super-competence, comprising all the task-specific competences or sub-competences, skills and knowledge required when rendering the message from the source language into the target language. The EMT Framework follows this principle and lists competences that are further subdivided into sub-competences.

The first field emerging from the translational process and developed within the EMT Framework is language and culture, described as “all the general or language-specific linguistic, sociolinguistic, cultural and transcultural knowledge and skills that constitute the basis for advanced translation competence”, and it could be considered “the driving force behind all the other competences described in this reference framework” (EMT FRAMEWORK 2017:6). Language generates the linguistic or bilingual sub-competence, while culture embodies the field in which the extra-linguistic or cultural sub-competence is grounded. These two areas, mandatory when constituting the translation macro-competence spectrum, have been adjoined within a single section in the EMT Framework, which can naturally be interpreted as an emphatic way of highlighting the key connection between the two fields when employed in the translation process. However, the EMT Framework stresses the paramount importance of the linguistic sub-competence when stating that an applicant should possess a certificate testifying to his/her level in the working foreign languages, which must unveil a “CEFR level C1 and above or an equivalent level in comparable reference systems”, hence constituting a “prerequisite for access to any EMT Master’s degree course in translation” (EMT FRAMEWORK 2017:6).

The second framework considered when designing the questionnaire that constitutes the core of this study is the PETRA-E Competence Framework, available in nine languages – Dutch, English, French, German, Hungarian, Italian, Portuguese, Spanish, and Bulgarian. Similarly to the European Master’s in Translation Competence Framework, it has been created by the European Commission, thus unveiling all the translation competences (TC) and the derived task-specific skills that a successful professional translator must possess.


It is to be mentioned, however, that the PETRA-E Competence Framework is the specialised counterpart of the EMT Framework, in the sense that it refers to task-specific skills connected to literary translation and much less to the translation of pragmatic texts. The main reason for resorting to this framework, although oriented towards literary texts, is the fact that its listed translation competences (TC) may be applied to all domains of activity and have been included in most TC models. In order to make it suitable for all text types, I preserved most of the range of translation competences and only adapted its task-specific skills schemata so as to fit both literary and specialised/pragmatic text types.

The PETRA-E Competence Framework brings along a more minutely organised level-wise layout, highlighting several levels ranging from beginner (LT1), advanced learner (LT2), early career professional (LT3) and advanced professional (LT4) to the top-qualified expert (LT5). Each of these levels displays several descriptors or skill-specific tasks characteristic of each translation competence (TC) listed within the framework. Considering that the questionnaire used as the research instrument in this case study is aimed at second-year MA students, on the point of graduating and looking for jobs, the most realistic option is to target the early career professional (LT3) level criteria. The set of translation competences (TC) is extremely comprehensive, encompassing almost all the sub-competences suggested by scholars when attempting a definition of the superordinate concept of translation macro-competence or super-competence, i.e. transfer competence, language competence, textual competence, heuristic competence, literary-cultural competence, professional competence, evaluative competence and research competence.

The second translation competence displayed in the framework is the language competence, defined as the “grammatical, stylistic and pragmatic mastering of the source language and the target language especially in the domains of reading and writing” (PETRA-E FRAMEWORK). Its task-specific skills or descriptors particular to the early career professional (LT3) level involve the adoption of an appropriate style and language variety – features of the bilingual or linguistic sub-competence.

The fifth translation competence (TC) listed in this framework is the literary-cultural competence or, broadly speaking, the cultural competence, defined as the “ability to apply knowledge about the source and target […] culture while making a […] translation; […] the ability to handle cultural differences” (PETRA-E FRAMEWORK). Its task-specific skills or descriptors characteristic of the early career professional (LT3) level involve situating the translation in the target culture and effectively dealing with culture-specific elements, differences between source and target culture, and intertextual references.

This case study is mainly qualitative in nature, i.e. a data-driven investigation presupposing a worm’s eye view approach, seeking to explore, name, and define the strengths and shortcomings of the curricula of specialised Translation Studies MA programmes, particularly those connected to the cultural and linguistic sub-competences. Nevertheless, quantitative data analyses – theory-driven and presupposing a bird’s eye view – will also be employed when dealing with certain sections of the questionnaire, such as testing the students’ ability to deal with different translation tasks meant to test the use of specific translation competences.


5. Data collection and analysis

The questionnaire comprises three main components: the personal component, i.e. personal details of the participants related to their academic background (Section I); the testing component, i.e. purposefully designed questions meant to observe the efficiency of the MA programme’s theory-based and practice-based curricula by considering the participants’ answers (Section II); and the evaluative component, i.e. the participants’ wide-ranging review of the classes provided during the Theory and Practice of Translation MA Programme (Section III). The emphasis will be laid primarily upon the cultural and linguistic sub-competences belonging to the testing component.

Section II, the body of the questionnaire and the main area of data mining, focuses on the range of translation competences (TC) and fields mentioned in the European Master’s in Translation Competence Framework (EMT) and the PETRA-E Competence Framework. It contains questions based on task-specific translation sub-competences and skills required at the level of early career professional (LT3), as referenced in the PETRA-E Framework.

Of the range of translation sub-competences, two of the most extensive in terms of question variety are analysed in what follows, i.e. the cultural and linguistic/language sub-competences. They comprise issues arising from the differences between the source language culture and the target language culture. Among these questions are: How often do you summarise/rephrase source texts in order to make sure that you have understood all the main ideas?; Choose the way in which you deal with the language variety (e.g. a translation task involving a literary source text, written in the 19th century or earlier), in order to render the source text in the target language; Choose the way in which you deal with the language variety (e.g. a source text featuring dialectal lexis - e.g. Scottish, Welsh, Irish, etc.) in order to render the source text in the target language; Did you have to deal with texts involving cultural differences between the source language and target language when translating?

As the bilingual/linguistic and the cultural sub-competences are the two components that primarily fall within the scope of this study, the analysis will particularly target them and less their remaining counterparts. By analysing the data collected, a correlation between the tasks given during the practical classes of the MA programme and the students’ level of confidence and self-assessment when dealing with certain text types and translation directions was explored. Task-specific questions comprise issues arising from the differences between the source language culture and the target language culture and are inspired by the descriptors emerging from the LT3 level of competence within the PETRA-E Framework and by the “transcultural and sociolinguistic awareness and communicative skills” pointed out in the EMT Framework 2017. It must be noted, however, that all translation sub-competences are interconnected and cannot function properly without resorting to their counterparts. For instance, the first question analysed below simultaneously targets the linguistic and textual sub-competences, as pre-translational stages, while preparing the ground for the transfer counterpart.


There were 21 participants who answered the close-ended question How often do you summarise/rephrase source texts in order to make sure that you have understood all the main ideas?, representing 100% of the total number of respondents. Twelve of them (57.1%) opted for “sometimes”, 4 (19%) said “often”, 3 (14.3%) answered “rarely” and 2 (9.5%) opted for “never”.

Constituting major pre-translational phases, rephrasing and summarizing source texts are undoubtedly two key actions when it comes to textual comprehension and must be paired with the textual sub-competence for a proper performance of the translator. They enhance a systematic analysis of the source text and prepare the ground for the next translation sub-competence to be activated – the transfer competence. While rephrasing and summarizing, students can easily detect vocabulary-connected issues that might impede the proper functioning of the transfer competence later on. The percentages mentioned above display generally positive levels of the students’ pre-translational awareness, as a substantial majority of them pointed out that they use these techniques “often” or “sometimes”, i.e. they summarize and rephrase, at least some of the time, prior to rendering the message in the target language text.
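For readers who wish to verify the arithmetic, the shares reported above follow directly from the raw counts out of 21 respondents. The short Python sketch below simply reproduces that calculation; the rounding to one decimal place is an assumption of the sketch, not a statement about how the original data were processed.

```python
# Minimal sketch: reproducing the response shares reported above from the raw counts.
# The counts are taken from the questionnaire results described in this section.

counts = {"sometimes": 12, "often": 4, "rarely": 3, "never": 2}
total = sum(counts.values())  # 21 respondents

for answer, n in counts.items():
    share = round(100 * n / total, 1)
    print(f"{answer}: {n}/{total} = {share}%")

# Expected output: sometimes 57.1%, often 19.0%, rarely 14.3%, never 9.5%
```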


Transitioning to the cultural sub-competence and the testing component of this sub-section, all participants (21) answered the semi-closed question Choose the way in which you deal with the language variety (e.g. a translation task involving a literary source text written in the 19th century or earlier), in order to render the source text in the target language. Tick only one option. Fifteen (71.4%) of the respondents opted for “I try to preserve the meaning as much as possible, searching for archaic equivalents in the target language”, 5 (23.8%) for “I try to preserve the meaning as much as possible, adapting all the archaic style to a more standardised version”, while 1 participant (4.8%) answered “I do a good translation”.

According to Meshalkina (2008:206, cited in ANDRIENKO 2016), there are three types of archaic texts: “archaic texts that were created in the contemporary language but aged over time; their translation is termed as diachronic”; “modern archaized texts which have been deliberately stylized to depict situations remote in time (artistic past), where synchronic translation is required”; and “archaized archaic texts that were created with the illusion of the past artistic time but are also remote in real time; the translation of such texts is described as diachronic translation of archaized texts”. The question whose answer distribution is represented in Figure 2 can be categorised as relating to the first type of archaic text, “created in the contemporary language but aged over time”. Lexis, one of the key cultural components, is exploited in this particular context, i.e. in the case of a literary-text translation, the students having been asked to choose the linguistic and culture-wise solution they considered the best. The two options they were offered were either to preserve the archaic source vocabulary (archaisation/archaising) or to adapt it to a more contemporary version in the target text (modernisation/modernising).

An overwhelming majority went for the option of preserving the literary archaic lexis in the target text, choosing to pay attention to the source style instead of adapting it to a contemporary version, which may be interpreted in two ways: a) possibly dealing with many classes of literary translation during the MA programme, students could have been advised by teachers to pay particular attention to style, and to do their best to stick to it in case they are set a translation task involving style problems; or b) they intuitively declared themselves in favour of preserving the archaic style – a less plausible scenario, however, as the percentages representing the answers would not have been so uneven. The percentages showed an overwhelming majority opting for preserving archaisation in the production of the target text, which indicates a strong probability of skill achievement relying heavily on the teachers’ theoretical guidance, and hence on correlations with previous cognitive experiences rather than on intuitive assumptions.


Going on with the cultural sub-competence skill-related tasks, there were 21 participants answering the semi-closed question Choose the way in which you deal with the language variety (e.g. a source text featuring dialectal lexis - e.g. Scottish, Welsh, Irish, etc.) in order to render the source text in the target language. Tick only one option, representing 100% of the total number of respondents. Out of the 21 participants, 13 (61.9%) opted for the answer “I try to preserve the meaning as much as possible, however I neutralize the source text because a region from a certain country cannot be a proper equivalent for a region within another country”, 7 (33.3%) for the answer “I try to find a dialect within my country (e.g. Moldavian) and translate the whole source text using specific dialectal words”, while 1 participant (4.8%) originally responded “It depends if the context needs to be translated or localised”.

By far one of the most complex and challenging tasks for a professional translator is to faithfully render in the target language the message of a source text written in a specific dialect pertaining to a certain country’s region, while preserving the dialect-specific style. As with the previous question, depicted in Figure 2, whose two primary style-troubleshooting solutions were either preserving the archaic style or modernising it, this question, depicted in Figure 3, unveils two major possibilities: either searching for an adequate dialectal counterpart within the country of origin of the target language, or neutralising the dialectal lexis with standardised counterparts – the solution preferred by many scholars, including Peter Newmark (1988) – faithfully rendering the message in the target language yet missing out on the stylistic component. Notwithstanding, both options bring along pros and cons. The first scenario, although preferable in terms of stylistic features, raises the question of the best dialectal target-version equivalent: theoretically speaking, one cannot be 100% sure of the complete overlap of two regions from different countries such as, for instance, the Welsh dialect pertaining to the UK and the Moldavian dialect pertaining to Romania – “there is no need to replace a coalminer’s dialect in Zola with, say, a Welsh coalminer’s dialect, and this would only be appropriate, if you yourself were completely at home in Welsh dialect” (NEWMARK 1988:195). While both illustrate dialect-specific features, they cannot be culturally overlapped, and the translator, although gaining on the overall stylistic effect of the target text, would probably lose in terms of the transmission of the original message. Cultural allusions implicitly or explicitly present in the source text are in danger of being lost in the target text – e.g. idioms, proverbs, puns, etc. Conversely, the second scenario contrasts with the first one in that it gains in terms of faithfulness of the message rendered, by using neutral lexis that is more easily accessible to the target readership, yet it compromises on the stylistic component, not rendering the shades of meaning that the author of the source text applied to his/her text.

Most second-year MA students opted for gaining in terms of communicative purpose – attempting to “render the […] meaning of the original in such a way that both content and language are readily acceptable and comprehensible to the readership” (NEWMARK 1988:47) – and therefore losing in the stylistic department, choosing neutralisation over dialectisation. As with the previous question depicted in Figure 2, this result can be interpreted in two ways, i.e. as a correlation with previous cognitive experiences of this particular task, when students could have been assisted by teachers who might have encouraged them to perform this methodological action, or as an intuitive approach.


However, this latter assumption is less credible than the first one, considering that a vast majority of the participants come from Letters-related BA programmes and have dealt at least tangentially with the process of translation in terms of practical activities.

Staying within the cultural spectrum, there were 21 participants answering the close-ended question Did you have to deal with texts involving cultural differences between the source language and target language when translating?, representing 100% of the total number of respondents. Out of the 21 participants, 19 (90.5%) opted for the answer “yes, and we were encouraged by the teachers to argue our translation solutions and strategies”, while 2 participants (9.5%) chose the answer “yes, but we did not have to justify our translation solutions, just to check if our choices were correct or not”. The unanimous positive answer shows a significant provision of classes involving the translation of documents featuring cultural issues, resulting in the students’ familiarisation with such issues and with possible ways to troubleshoot them. Moreover, an overwhelming majority of 19 students stated that they had been asked by their teachers to justify their options, indicating an optimal format of the class: on the one hand, inaccurate answers are corrected and, on the other hand, the cognitive processes that led to the students’ translation solutions when troubleshooting cultural issues such as cultural gaps are reactivated.


6. Conclusion

Matters surrounding the cultural and linguistic sub-competences – two major prerequisites when rendering a message from the source language (SL) into the target language (TL) – have been included in the questionnaire on both a theoretical and a practical level (e.g. dealing with language variety in terms of dialectal and diachronic issues). Similarly to the pilot study conducted by professors L. Pungă and D. Percec (Faculty of Letters, West University of Timisoara), which indicated that, in terms of linguistic elements, first-year (2016/2017) Translation Studies MA students “had the ability to choose equivalents appropriately” (PUNGĂ, PERCEC 2017:149), the second-year (2018/2019) MA students completing this questionnaire also managed extraordinarily well in troubleshooting cultural issues in the most accurate way possible according to various scholars’ criteria.

Although the number of questions and participants is by no means exhaustive, it can be concluded that the existing ATC-based curriculum of the Theory and Practice of Translation MA Programme (Faculty of Letters, West University of Timisoara, Romania) fulfils to a satisfactory extent the level of translation competence (TC) targeted in the two frameworks issued by EU bodies, at least on a cultural and linguistic level. This study could be significantly expanded to analyse other translation sub-competences, while the sampling method could be redesigned so as to be applicable to several generations of alumni of the Translation Studies MA programme (Faculty of Letters, West University of Timisoara).


References

ANDRIENKO, T. (2016) “Translation across Time: Natural and Strategic Archaization of Translation”, in Translation Journal, https://translationjournal.net/October-2016/translation-across-time-natural-and-strategic-archaization-of-translation.html [last accessed 30.01.2020].

HURTADO ALBIR, A. (2017) Researching Translation Competence by PACTE Group, Amsterdam/Philadelphia: John Benjamins Publishing Company.

KAMINSKIENE, L. & G. KAVALIAUSKIENE (2012) “Competences in Translation and Interpreting”, in Studies about Languages (20/2012), https://www.researchgate.net/publication/265281593_Competences_in_Translation_and_Interpreting [last accessed 30.01.2020].

n.d. EUROPEAN MASTER’S IN TRANSLATION COMPETENCE FRAMEWORK, https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf [last accessed 30.01.2020].

n.d. PETRA-E COMPETENCE FRAMEWORK, https://PETRA-educationframework.eu/ [last accessed 30.01.2020].

NEWMARK, P. (1988) A Textbook of Translation, Hemel Hempstead: Prentice Hall.

PUNGĂ, L. & D. PERCEC (2017) “An inquiry into challenges of literary translation for future professionals”, in Professional Communication and Translation Studies (10/2017), pp. 145-149.

REZA ESFANDIARI, M., T. SEPORA & T. MAHADI (2015) “Translation Competence: Aging Towards Modern Views”, in Procedia – Social and Behavioural Sciences, https://www.sciencedirect.com/science/article/pii/S1877042815034783 [last accessed 30.01.2020].

SCHWIETER, J.W. & A. FERREIRA (2014) The Development of Translation Competence: Theories and Methodologies from Psycholinguistics and Cognitive Science, Newcastle upon Tyne: Cambridge Scholars Publishing.

ZOU, Y. (2015) “The Constitution of Translation Competence and Its Implications on Translator Education”, in International Conference on Arts, Design and Contemporary Education (ICADCE 2015), https://pdfs.semanticscholar.org/72d1/c3fff2a30e4a5e561c5fcf8d268e03230b2f.pdf [last accessed 30.01.2020].


Curriculum Design in Diamesic Translation: For the Didactics of Real-Time Intralingual Subtitling

CARLO EUGENI
Scuola Superiore per Mediatori Linguistici di Pisa - [email protected]

Abstract

Real-time intralingual subtitles enable access to live audiovisual products. However, the provision and the quality of such services across Europe is uneven and sometimes insufficient, because live subtitlers are untrained or partially trained and lack a recognised professional status. To bridge this gap, the EU-funded project Live Text Access (LTA) aims to create ad-hoc training materials and proposes the recognition of certified professionals. This article first concentrates on the multifaceted and heterogeneous terminology adopted in the field. Then it gives an overview of the current situation of the training of live subtitlers in Europe, with a focus on the LTA rationale to create open-source training materials based on certification, subtitling standards, and a user-oriented approach. Finally, it reports on the progress of the project in defining the professional profile as well as the skills and competences of the intralingual real-time subtitler.

Keywords: live subtitling, real-time intralingual subtitles, diamesic translation, respeaking, velotyping

Introduction

Languages and cultures are intimately related, especially in the age of the information society, where technology continuously gives rise to new levels of interaction. In this context, the traditional training of language professionals in Translation Studies is no longer in line with current social and industry requirements. In particular, professional translators complain that translator training programmes are “inefficient, misleading, too theoretical, and irremediably out of touch with market developments” (PYM 2011: 6). Moreover, the disruption of automatic mediation processes clearly demands a fresh look at the training of future professionals, as already highlighted by the EMT EXPERT GROUP (2009: 7). Last but not least, the profession has been evolving so much that traditional translation is no longer the only practice. On the contrary, we are witnessing an important and evident differentiation in terms of method (crowd-sourcing, relay, and live), working possibilities (in-person and remote), distribution opportunities (from massive to individual) and roles (translator, interpreter, and linguistic and cultural mediator) (ONCINS, EUGENI & BERNABÉ 2019). There is, then, a gap to bridge in the fields of academic and vocational training, which requires training skills to be defined for professionals of language and cultural mediation (live reporters or live subtitlers), whilst taking into account technical possibilities and industry requirements, without losing sight of the most recent contributions in the field of academic training. To try and start bridging this gap in the field of real-time intralingual subtitling, this paper deals with the effort of the EU-funded LTA project in designing a curriculum for the training of professional respeakers and velotypists1. In particular, a theoretical and operational framework for academic and vocational application pathways will be proposed. To do so, the paper provides an overview of the training of real-time subtitlers by focusing on academic training experiences in the field across Europe (section 2). Section 3 will analytically go into the “ingredients” of the proposed curriculum for the real-time intralingual subtitler through respeaking and velotyping. Finally, section 4 will propose a graphic representation of the LTA curriculum to explain its characteristics, with some suggested vocational and academic implementations.


1. Teaching real-time intralingual subtitling

1.1 Terminological remarks

As EUGENI & BERNABÉ show, terminology in the field of real-time intralingual subtitling is multifaceted and can be categorised on the basis of several criteria, each affecting training (2019: 91): the context (e.g. conference, TV, court trial, parliamentary session, meeting), the target text (e.g. report, subtitles, transcription), the production system (e.g. pre-recorded, live, semi-live), the technique (keyboard, respeaking, velotyping, stenotyping), and others (e.g. verbatim subtitling, live editing, remote interpreting). If some of these criteria are ignored in training, future professionals will lack necessary skills, as the academic and vocational courses available today show. For instance, students are trained mainly for a specific context (e.g. semi-live TV subtitling through velotyping1, live speech-to-text interpreting through respeaking, pre-recorded court reporting through stenotyping) or mode (e.g. in presence, by relay, or from remote), thus limiting the scope of all the potential activities someone who is trained in diamesic translation can perform.

Moreover, such courses mainly concentrate on respeaking, thus limiting training to one technique and to the languages for which Automatic Speech Recognition (ASR) technology is available (ROMERO-FRESCO & EUGENI forth.). Not to mention that the training material used in such courses is culture-specific – which is not a disadvantage per se, but limits the scope to one single culture – and is not open source (ONCINS, EUGENI & BERNABÉ 2019).

Finally, training is mainly limited to students who can afford a training course in terms of costs and time, since they might have to move to another city or country. Conversely to what happens in in-house training, these students are trained for the profession in very general terms and do not experience the real world until they undergo an apprenticeship, are employed by a service provider or find clients as freelancers. Concerning vocational training, trainees are usually focused on one specific job, thus acquiring concrete insight, but also a particularly narrow view of the profession. So, training today is either too exclusive in terms of time, money, or place; too focused on a technique, a language, an application, or a context; or too generic (IBIDEM). On top of this, the quality of this training is neither well-established – it depends on trainers and is not evidence-based – nor certified (ONCINS, EUGENI & BERNABÉ 2019).


1 Live Text Access, or LTA (Reference Number: 2018-1-DE01-KA203-004218), is a project co-funded by the ERASMUS+ Programme of the EU. This article is part of the project dissemination activities required by the Erasmus+ programme. More information at http://ltaproject.eu



1.2 Training practices across Europe

Real-time intralingual subtitles were first produced on TV using standard QWERTY keyboards (LAMBOURNE 2006), but QWERTY typists were then replaced by more speed-efficient stenographers (DEN BOER 2001). Due to a lack of professionals, many broadcasters have more recently opted to train their own professionals internally in respeaking, as is still the case today (ROMERO-FRESCO 2018). Formal training in real-time intralingual subtitling only came in 2005 at the then SSLMIT (Scuola Superiore di Lingue Moderne per Interpreti e Traduttori) of the University of Bologna (EUGENI 2008). After that, some universities have tried to organise courses on live subtitling, especially through respeaking, but only for a limited period of time. Currently, only a few European universities regularly offer training on respeaking, such as the University of Bologna itself; the University of Antwerp, which was the first to offer curricular training in respeaking; the University of Leeds, providing introductory sessions on respeaking as part of its courses on Audiovisual Translation (AVT); the Universitat Autònoma de Barcelona, providing a three-month online module and a one-month face-to-face module in Spanish as part of a Master’s degree in AVT; the University of Roehampton, providing a three-month face-to-face module in English, Spanish, French, Italian, and German; the International University of Rome; and the Universidade de Vigo, offering a three-month online module on intralingual respeaking in English, Spanish, and Galician, and a three-month online module on interlingual respeaking in the same languages (ROMERO-FRESCO 2018). Also worth a mention are the School of Applied Linguistics of the Zurich University of Applied Sciences (DUTKA & SZARKOWSKA 2017); the three-week online module on respeaking within the online Master of Audiovisual Translation (MTAV) of the University of Parma; the course on audiovisual translation, including respeaking, at the University of Mons; and the one-week face-to-face module on respeaking within the summer school in AVT of the University of Salento, in Lecce. In Germany, the SDI München offers a nine-month course, which trains in both respeaking and QWERTY typing. The course is practice-oriented and combines formal learning with short internships with partners in the industry (ROMERO-FRESCO & EUGENI forth.).

2. Towards a curriculum for training real-time intralingual subtitlers

2.1 The Pedagogical and Methodological Curriculum

In the framework of what we have seen above, training materials and their structure normally depend on the single trainer and not on an international reference framework, which could more easily bridge the many identified gaps. Among these gaps is certification: though university students get a diploma, this is not a certification of their real-time intralingual subtitling competences, and this affects the status of a profession which is more and more widespread but not yet internationally recognised. To reduce such gaps, the Bologna Process has been trying to redesign teaching by moving its focus from students’ needs and expectations to the competences to be mastered and acquired, thus reducing the distance between academia and the world of work.


However, learning single skills one after the other does not automatically enable trainees or students to start working as professionals. The LTA project was funded in order to try and provide a larger scheme for the training of real-time intralingual subtitling in its broadest sense, because being able to subtitle in real time is not synonymous with being able to use ASR software or having typing skills. It means much more: it means knowing where, when, how and for whom to subtitle, and knowing the sociolinguistic environment of real-time intralingual subtitling in its widest sense (BERNABÉ 2019). To try and move a step further in the direction of an all-encompassing curriculum, LTA has investigated SAFAR’s (1992) and HAMAOUI’s (2010) proposals for the training of university students in audiovisual translation. They base their work on the proposal made in 1975 by the Belgian pedagogue Louis D’Hainaut and propose the Pedagogical and Methodological Curriculum (PMC), which structures a curriculum on 3 levels further subdivided into 14 subcategories. By adapting the PMC to the purposes of the LTA project, the proposed curriculum resulted in the following structure, which is illustrated in Section 4:

1. Aims and objectives
   a. “Defining and analysing educational policy”
   b. “Implementing aims and objectives”
   c. “Understanding trainees’ background”
   d. “Determining and analysing contents”
   e. “Processing learning outcomes”

2. Teaching methods and tools
   a. “Determining resources and limits”
   b. “Tools and methods”
   c. “Teaching and learning conditions”
   d. “Determining feasibility of tasks”
   e. “Creation and implementation of missing tools”

3. Evaluation methods and tools
   a. “Designing assessment plan”
   b. “Selection and creation of assessment tools”
   c. “Implementation of assessment methods and tools”

2.2. Rationale of the LTA curriculum

The rationale of the LTA course runs parallel to the PMC structure and is divided into 3 main areas:



• Aims and Objectives: real-time intralingual subtitling can be considered a discipline per se, whose best collocation is a BA devoted solely to diamesic translation – intended as “the practices used to translate speech into a written form” in many public contexts (ORLETTI 2017: 13) – and its applications. Regardless of its implementation, a course in real-time intralingual subtitling should be structured into types of competence (general for every technique and specific to a given technique), with assessments along the course to guarantee progressive learning (see Assessment below). As to contents, they are to be selected according to a progressive principle determined by the number of learning outcomes to be acquired, and they have to be homogeneous across all languages. Hence, first-level contents (beginner) are general, second-level contents (intermediate) are specific, and third-level contents (advanced) are all the more varied and specific. The modular competence-based structure of the LTA curriculum allows for customising the course and assessing the achievement of learning outcomes (acquired knowledge, competences and skills) and professionalism.2

• Tools and Teaching: the LTA curriculum envisages ad hoc tools and teaching. As to teaching, trainers should be professionals, so as to add professional value to training. Logistically, financially and administratively, training should guarantee real-life conditions. To do so, it is recommended that trainers are professionals or, in the future, certified, while hardware and software tools can be either essential to training or only recommended for training. LTA teaching and learning have been designed so that they adapt to both vocational and academic training. Moreover, the curriculum is modular and personalisable, meaning that trainers can choose the kind of materials they want depending on teaching constraints and students’ needs. Finally, general modules are organised in a progressive way, while specific modules are transversal, because they start simultaneously with the course.3

• Assessment: the course is divided into 3 levels of competence: beginner, intermediate, and advanced. LTA material will allow trainees to self-monitor not just the achievement of every single Learning Outcome but also their overall expertise before, during and after the course. To do so, an assessment system has been designed, divided into three steps: pre-assessment, peri-assessment, and post-assessment. In particular, a preliminary assessment will tell which skills and competences a trainee already possesses and whether he or she has an aptitude for respeaking or velotyping. Intermediate (peri-) tests will guarantee that progression is in line with the aims of the course. Finally, tests included in the post-assessment will establish whether a trainee is ready for the profession, in line with international professional practices and in view of internationally recognised certification.4

2 For more detailed information about Aims and Objectives, see the LTA intellectual output report devoted to this at https://ltaproject.eu/wp-content/uploads/2020/02/LTA_Report-SSML-IO2_LTA-Curriculum-FINAL.pdf [last accessed 20/11/2019].

3 For more detailed information about Tools and Teaching, see the LTA intellectual output report devoted to this at https://ltaproject.eu/wp-content/uploads/2020/02/LTA_Report-SSML-IO2_LTA-Curriculum-FINAL.pdf [last accessed 20/11/2019].

4 For more detailed information about Assessment, see the LTA intellectual output report devoted to this at https://ltaproject.eu/wp-content/uploads/2020/02/LTA_Report-SSML-IO2_LTA-Curriculum-FINAL.pdf [last accessed 20/11/2019].


2.3. Criteria of the LTA curriculum design

On the basis of the PMC described above, the LTA curriculum for the training of the real-time intralingual subtitler through respeaking and/or velotyping has been designed according to the following criteria (FAME):

• Feasibility: the LTA curriculum is to be progressive so as not to discourage trainees. To do so, LTA has capitalised on existing literature, best practices and interviews with trainers, and adapted the PMC to real-time intralingual subtitling.

• Adaptability: thanks to LTA advisory board members and the surveys and interviews carried out during the first part of the project, LTA has arrived at a curriculum that is adaptable to changing teaching and learning needs by means of an assessment system that monitors progress all along the course.

• Modularity: the LTA course is characterised by self-contained Modules: 4 general modules composed of 3 Units each and 2 technique-specific modules composed of 5 Units each. Each general and specific Unit is aimed at the acquisition of 3 well-defined Learning Outcomes.

• Effectiveness: the LTA curriculum has been designed to fit the needs of the world of work, thus bridging an existing gap in the training world. To do so, LTA has envisaged real-life, ECQA-certified materials, in English for general modules and language-specific for specific modules.

3. The LTA curriculum

3.1. Materials

Before coming to the visual representation of the curriculum (3.2) and its detailed explanation (3.3), it is to be reminded that LTA training materials have been organised to be adapted to training and learning needs, be they vocational or academic. Moreover, training materials have been conceived to be as self-contained as possible, in order to allow trainers to use them at their ease, depending on the course level (beginners, intermediate, advanced). Finally, materials comply with the needs of trainees with sight loss and with ECQA guidelines, so as to be compliant with international requirements and possible certification. Additionally, training materials may vary in nature and in number according to their role in the implementation pathway; the curriculum will be translated into:

- Class-work material: core material to be used “in the class” (be it physical or virtual) by trainers to achieve one specific learning outcome (LO);

- Self-study material: material to be used outside classes by the trainees, either to deepen some aspects of an LO or because self-study is considered possible;


- Accompanying material: subtitles in .srt format, presentations in .ppt format, transcripts in .pdf format and other materials to serve the purposes of trainees with special needs;

- Suggested readings: websites, academic papers, laws, etc. providing information that can be useful in diverse settings and contexts, though not essential to acquire a LO;

- Tasks: material to apply knowledge and material to support real acquisition of a technique-specific or general LO;

- Tests: material used to assess one’s background before the course and acquired skills after a technique-specific Unit or a general Module, during and after the course.

On the basis of the abovementioned Pedagogical and Methodological Curriculum (PMC), we have designed a curriculum that allows for meeting the goal of the LTA project, meaning bridging the gap between labour market and societal needs through open education and social inclusion in the field of real-time intralingual subtitling, where by "real-time intralingual subtitling" it is meant the production of both verbatim and sensatim subtitles. The proposed design has a modular structure and can be implemented in several educational pathways according to the learning and training needs (Figure 1).

3.2 The basic temple structure

Figure 1 visually represents the LTA curriculum. It simplifies the structure of a Doric Temple entrance, thus paying tribute to the Greek civilisation, which first introduced the notion of culture in Europe in many fields, including that of education. Understanding it is quite simple: the stairs represent the prerequisites to training. They are the basis on which the training stands.


Each of the four pillars is a general module, while the architrave represents the specific modules, running parallel to them. Modules are the core structure of the training. The tympanum, with its post-assessment and certification, makes all materials and modules a curriculum and not a simple sum of elements.

By adding details to this basic visual representation, we come to a more structured representation of the LTA curriculum for the training of real-time intralingual subtitlers.

3.3 The detailed temple structure

3.3.1 Prerequisites

The stairs are, in fact, a three-stepped crepidoma representing the prerequisites a trainee should possess in order to be able to successfully undergo training. Though they should not be understood as limiting access to training, which is open to everybody, they should be considered in order to reduce frustration and drop-out:

- Excellent command of written and spoken language, in line with the C2 level of the Common European Framework of Reference for Languages. This implies that the trainee knows the working language well enough to avoid being taught grammar (morphology and syntax), spelling (orthography), meaning (semantics), or text types and genres (pragmatics);

- Extensive general knowledge of as many topics as possible, the g factor required to cope with the multi-tasking skills a subtitler needs to possess, and awareness of the many applications of such a job. As in IQ tests, these elements are interrelated, because in many contexts a professional is required to have an aptitude (the g factor) for the job on top of a sound background and training, and this holds for the real-time intralingual subtitler too;

- Openness to experience in order to be capable of adapting to changing scenarios, contexts, text types and people. This prerequisite is related to the previous one but is more focused on the real life of a real-time intralingual subtitler, who has to be able to adapt to many varying contexts and scenarios, especially when working as a freelancer.

3.3.2 Modules

Pillars are normally made of a basis (plinths), the main structure (columns) and a top (capitals).

Plinths are the first elements of the LTA curriculum, the basis of the course. They contain the training materials called pre-assessments (assessments of the preliminary theoretical, linguistic, managerial, IT, and technical skills/knowledge/competencies of a trainee at the beginning of training). This feature makes LTA training highly flexible and almost unique, because these assessments mirror the structure and content of the curriculum. Depending on the results, the course can be customised for the single trainee, who will only need to be trained in some modules/units or LOs. Given that training is to be as flexible as possible, pre-assessments have been designed for different prospective trainees: beginners, who possess none or just a small amount of skills and knowledge; intermediate, who know at least the content of one or more general modules; and advanced, who already know the technique and want to acquire the professional skills of verbatim and/or sensatim subtitling, or who know the profession and want to acquire a new technique. Of course, the number of possible trainee profiles is much larger.

Columns stand for the four general modules in which the curriculum is structured: Understanding Accessibility, Linguistic Competence, IT Competence, and Entrepreneurial and Service Competence. Every single module is conceived as a 3-layer module: beginners, intermediate, and advanced. In the curriculum design, general modules have been thought of as propaedeutic and as complementing each other. The training materials composing every single module are divided into three units, each aimed at the acquisition of a specific LO:

- Understanding Accessibility
  o Concepts of accessibility, disability, multimodality and Universal Design
  o Knowledge of target groups and their needs and expectations
  o Knowledge of how accessibility is embedded in the environment

- Linguistic Competence
  o Functionality: Accuracy, readability, and legibility
  o How to cope with speech-related challenges
  o Strategies to acquire and develop specific thematic knowledge

- IT Competence
  o How to set up the working environment
  o Input tools
  o Output tools

- Entrepreneurial and Service Competence
  o Management and Interpersonal skills
  o Personal and Stress management skills
  o Business strategies

Training materials have already been described above and need no further explanation. What is important to specify here is that they structure the module like Lego bricks: a trainee approaches training with a quantity of Lego-like bricks (his/her previously possessed skills/competences/knowledge) and a capacity to use them (prerequisites), as verified by pre-assessments (plinths). When he/she starts training, some bricks may be useful and will allow him/her to shorten the time needed for constructing the temple, in other words, to complete a Unit or finish training; some others may be redundant and used to reinforce or retrieve an LO; some others are useless and will not be used in the temple construction.

Capitals represent general modules’ peri-assessments. These allow trainees to understand if their learning progression is the expected one.

On top of the pillars there is a horizontal epistyle, which stands for the specific module (the technique used to produce subtitles in real time). It is the architrave of the curriculum design, running in parallel with every single general module. Tasks will guarantee that the technique is mastered well enough to meet the single LOs for each level (beginners, intermediate, advanced). Units and peri-assessments of the specific module are not designed as the others: the number of Units is 5 and not 3 as in general modules. Each unit is aimed at training one's command of a fast-writing technique, beating one's records, and/or passing real-life tests, particularly useful for certification purposes. Peri-assessments are envisaged at the end of every single unit, and not at the end of every single module, as in general modules. These Units, designed for respeaking and velotyping, are:

- Psycho-cognitive skills: How to listen and speak simultaneously
- Metalinguistic skills: How to turn non-verbal elements into verbal input
- Dictation/typing skills: how to write fluently, quickly and accurately
- Editing skills: When and how to correct oneself and another respeaker/velotypist
- How to develop factors for high performance such as flexibility and self-motivation

On top of the temple is the triangular pediment composed of the cornice and the tympanum. The outer cornice represents post-assessments, which can also be used by trainers to let future trainees understand what training is about and what the final result is expected to be, so as to motivate them from the beginning of the training process. The inner tympanum is the completion of the course (a diploma, certification, etc.), telling people what trainees have become: "real-time intralingual subtitler through respeaking and/or velotyping".


4. Conclusions

Visually, the LTA curriculum for the training of the real-time intralingual subtitler looks like a solid building. This represents the consistency of its construction, in the light of the most recent contributions in the field of curriculum design, with input from the world of professional training, which has fed the curriculum structure with specific learning content (Learning Outcomes). Despite this, the LTA curriculum is not meant as a unique block of materials to take or leave. On the contrary, its main features are feasibility, adaptability, modularity, and effectiveness.

Building blocks of the curriculum are 6 modules (4 general modules and 2 technique-specific modules), aimed at training future real-time intralingual subtitlers through respeaking (the 4 general modules + the module specific to respeaking) or velotyping (the 4 general modules + the module specific to velotyping). General modules are composed of 3 Units, while specific modules of 5 Units, each including 3 LOs. Each module is further subdivided into 3 levels of expertise: beginners, intermediate, and advanced.

This allows for many implementations of the LTA curriculum, from vocational training to Higher Education teaching5. Such flexibility is extended to both the level of general modules and specific units (macro-level) and that of training materials (micro-level), thanks to a three-fold assessment system which allows for bridging the gap between trainees' needs and the job market.

5 For examples of application of this curriculum to formal and informal education, see https://ltaproject.eu/wp-content/uploads/2020/02/LTA_Report-SSML-IO2_LTA-Curriculum-FINAL.pdf [last accessed 18 November 2019].

Bibliography

BERNABÉ, R. (2019) "Identifying parameters for creating Easy to Read subtitles", in CoMe IV (1), available at http://comejournal.com/journal/issues/ [last accessed 20 December 2019].

D'HAINAUT, L. (1975) Concepts et méthodes de la statistique, Bruxelles: Editions Labor.

DEN BOER, C. (2001) "Live interlingual subtitling", in Gambier, Y. & H. Gottlieb (eds.) (Multi)media translation. Concepts, practices and research. Amsterdam and Philadelphia: John Benjamins.

DUTKA, L. & A. SZARKOWSKA (2016) "Respeaking as a part of translation and interpreting curriculum?", Slideshare, available at https://www.slideshare.net/agnieszkaszarkowska/respeaking-as-a-part-of-translation-and-interpreting-curriculum.

EMT (2009) "Competences for professional translators, experts in multilingual and multimedia communication", http://ec.europa.eu/dgs/translation/programmes/emt/key_documents/emt_competences_translators_en.pdf [last accessed 21 November 2019].



EUGENI, C. (2008) Le sous-titrage en direct pour sourds et malentendants: aspects théoriques, professionnels et didactiques du respeakerage télévisuel, Macerata: CEUM.

EUGENI, C. & R. BERNABÉ (2019) "The LTA project: Bridging the gap between training and the profession in real-time intralingual subtitling", in Mazur, I. & G. Vercauteren (eds.) Linguistica Antverpiensia, New Series: Themes in Translation Studies, 18, pp. 87-100, available at https://lans-tts.uantwerpen.be/index.php/LANS-TTS/article/view/512/453 [last accessed 20 November 2019].

HAMAOUI, N. (2010) "Training and Methodology in audiovisual translation", in Interpreting and Translation Studies Journal, 14 (1), pp. 343-369.

LAMBOURNE, A. (2006) "Subtitle respeaking: A new skill for a new age", in EUGENI, C. & G. MACK (eds.), inTRAlinea special issue: Respeaking, available at http://www.intralinea.org/specials/article/Subtitle_respeaking [last accessed 20 November 2019].

ONCINS, E., EUGENI, C. & R. BERNABÉ (2019) "The Future of Mediators for Live Events: LTA project - Academic and Vocational Training", in KATAN, D. & C. SPINZI (eds.) CULTUS 12, Training mediators: the future, pp. 129-153, available at https://www.cultusjournal.com/files/Archives/Cultus_2019_12_008_Oncins_et-al.pdf [last accessed 20 November 2019].

PYM, A. (2011) "Training translators", in MALMKJAER, K. & K. WINDLE (eds.), The Oxford handbook of translation studies. Oxford: Oxford University Press, pp. 475-489.

ROMERO-FRESCO, P. (2018) Subtitling through speech recognition: Respeaking. Manchester/Kinderhook: St Jerome.

ROMERO-FRESCO, P. & C. EUGENI (FORTH.) “Live subtitling through respeaking”, in BOGUCKI L. & M. DECKERT (eds.), Handbook of Audiovisual Translation and Media Accessibility, Palgrave, London.

SAFAR, H. (1992) Curriculum d’éducation et projet pédagogique. Paris : Éditions du Cercle.


Multimodal corpora input to translation training

DIANA OŢĂT

University of Craiova - [email protected]

Abstract

Subscribing to current views that ignoring the image(s) means ignoring significant elements of a message's potential, we set out to highlight some main features of multimodal texts that modern translators must decode so as to deliver high-quality translation products. Featuring the diachronic transformations multimodal texts underwent within different time frames, the present paper aims at mapping out the contemporary dynamics of multimodal texts with a view to the challenges and resources multimodal corpus-based investigations bring to the field of translation. Zooming in, we embark on the design of a translation training research trial in an attempt to test the applicability of dedicated software to multimodal corpus analysis. Raising students' awareness of the vaster and richer field of Corpus-Based Translation Studies, special attention is paid to the development of trainees' technological competence within the contemporary framework of the multifaceted translator's competence.

Keywords: multimodal texts, Corpus-Based Translation Studies, dedicated software, translation competence

1. Introduction

Before adhering to the latest vistas on the development of virtual translation environments and products, as well as their harmonisation with different sociocultural prerequisites, we highlight retrospectively the fundamental input of the Representational Function of Language as formulated in Bühler's (1934/1990) salient Sprachtheorie. The tripartite model developed by the German psycholinguist methodises the three main functions of the linguistic sign underpinning each communicative event, i.e. the informative (Darstellung), expressive (Ausdruck) and vocative (Appell) function. In the same vein, Jakobson (1959/1966) tackles the functions of language and establishes six socio-cultural variables as main regulatory parameters to achieve verbal communication. According to Jakobson (ibid.), a theory of language is based on a theory of translation, and each of the six parameters triggers a different language function. Thus, the communication context determines the referential function (VÎLCEANU 2003), while the addressor stands in direct relation to the emotive function, reaching the addressee (the conative function) via a connection channel, i.e. the phatic function, towards a shared communication code, respectively the metalingual function, and the linguistic expression of the message, labeled as the poetic function (see MAYBIN & SWANN 2010: 45). Premised on the early 1980s functionalist approach to map out a "framework for a general theory of translation" (NORD 2012: 27) that materialised under the collaborative effort
carried out by Reiss and Vermeer (1984) in what prominent translation scholars would consider the paradigm shift towards modern Translation Studies, we share the re-interpreting views on Skopostheorie (VERMEER 1989) to meet contemporary technology-driven societal needs. Setting the extra-linguistic context and the purpose (scope) of the target language text (TT) as the key parameters to deliver successful product- and client-oriented translation services, current translation strategies tend to replace the target-reader objective with the realization of an integrative response, generated by a multimodal perception of the target culture audience. Based on Massaro's (1987) interpretation of the communicative event, we approach modern translation perspectives as joint interdisciplinary endeavours to code, decode and recode multimodal perceptions, since the target audience tends instinctively to process not only written messages, but also images, sounds and/or performances.

Modern translators have been constantly challenged not only to observe the transference of the source language text (ST) form, content and function into the target language (TL), but also to successfully render the interplay of the modes, the message and the function of a source-language multimodal text, in order to secure at least as high an impact on the target-culture audience as on the SL audience. Subsequently, cognition is activated through the auditory and visual interface, while experiencing stimuli sent via various modalities. Under the circumstances, Massaro (1987) argues that an accurate interpretation of any communicative event would fail if the nonverbal components of interaction (auditory and visual perception) were not taken into account.

2. Translating multimodal texts: a modern approach to earlier norms

Although programmatic initiatives to establish clear-cut frontiers among long-established and young disciplines have been judiciously carried out throughout past and more recent decades, inter- and transdisciplinary views have been reshaping traditional subjects' identities, fostering an integrative research approach that incorporates cooperation between scholars and practitioners within different fields of research. Functioning as a main representative of this perspective, multimodality has been addressed as a shared concept, "omnipresent in most of the communicative contexts in which humans engage" (VENTOLA ET AL. 2004: 1) ever since ancient times, if it were only to mention the Egyptian hieroglyphs that embodied visual, spatial and textual shapes transmitted via papyrus, clay, metal or leather support, depending on the content and the context in which a certain message was communicated (LUTKEWITTE 2020: 1), the Medieval manuscripts that incorporated multimodal messages via calligraphic and illustrative elements (JONES 2013: 3), or the more recent websites, audiovisual products and videogames.

Given this background, a translation-oriented approach to the concept of multimodality calls upon translation theorists and practitioners to cross disciplinary boundaries in search of integrated research methods to meet the complex requirements of multimodal translation. In terms of translation, multimodality challenges scholars and professionals to simultaneously decode the written message and transfer into verbal information the visually perceived stimuli. We share Littau's (2011) perspective that, with the introduction of state-of-the-art communication modes (from papyrus and manuscripts to web-based texts and hyperlinking), theories of translation will undergo reconfiguration to upgrade critical and analytical toolkits and test novel translation policies.

Defined by Gibbons (2012: 8) as "the coexistence of multiple modes" manifested in a certain background, or as the reflex process of a target audience to decode "the coexisting modes", contemporary translation-oriented views on the dynamics of multimodal texts enfold the functionalist approach and link the multiplication of modern multimodal texts to the systematic classification of the four main text types as featured by Reiss (1971/2000) and adopted later on by translation scholars such as Newmark (1981) or Munday (2001). According to their communicative intention, texts are divided into informative texts - as "plain communication of facts" (MUNDAY ibid: 73-75) - aimed at transmitting factual information via a logical and referential language dimension; expressive texts, which rely on the aesthetic dimension of language to communicate the author's intention to the readership; and operative/vocative texts (parallel terminology - see NEWMARK 1981: 15), characterized by "the inducing of behavioural responses" to persuade the readership/receiver of the message to act in a certain way. The fourth type of text, as defined by Reiss (1971/2000), comes to supplement the above-mentioned functions with visual images, sounds and pictures, labeled by the author as audiomedial texts.

Departing from Katharina Reiss's (1971/2000: 164-165) initial classification of text types to meet the communicative function of the target text in the target language, audiomedial texts have subsequently been featured by the same author as hyper texts that can either inform, instruct, persuade or enchant the target readership. However, further approaches to text type and language function in translation would emphasise the role of hybrid texts, since authors such as Reiss (1971/2000), Newmark (1981), Bassnett (1997), Munday (2001) and Snell-Hornby (2006) argue that each text implies the coaction of at least two functions. Although the functions of the ST and the TT may change, multimodal aspects need to be detected and transferred into the target culture to meet the sociocultural values, norms and expectancies of the audience. The target cultural dimension will give rise to further challenges, particularly with respect to what would not seem to get translated. Hence, beyond intralingual and intersemiotic translation (see JAKOBSON 1959/1966) of multimodal texts, Zanettin (2011) advocates that the visual elements of such multimodal texts are subject to alteration, editing and even removal, most often to comply with the target socio-cultural landscape.

In other words, we grow aware that the selection of the most appropriate translation strategy when commissioned with the translation of a multimodal text is rather challenging. Equipped with what until recently scholars would define as a text-based modus operandi, translators are now facing interdisciplinary requirements to accommodate a multimodal message into a target culture. Accordingly, Snell-Hornby (2009: 44) features multimodal texts as complex messages aimed to circulate via different media, channels and modes among a target audience, taking into account that the design and development of such texts involve the fusion of visual and sound elements, alongside different graphic sign systems.


2.1 Multimodal shared objectives: TS and CTS

To put it in a nutshell, multimodality, and multimodal texts correspondingly, would lack any impact at all if the coexisting modes did not effectively interact while being decoded by the target audience, since multimodality has been defined as the receiver's cognition-based perception of the interaction of modes (RODRÍGUEZ-INÉS 2017).

Engaged in the constant upgrading of the communication framework, which tends to be relocated within virtual environments via digitized modes, crowdsourcing and cloud-based platforms, translators need to keep up with both challenges and opportunities. Under the circumstances, the translation of multimodal texts does not only complement the service provision agenda of the translation market, but multimodality itself becomes a resource for translation. In this respect, López Rodríguez et al. (2013) highlight the crucial role played by images in the development of current thematic maps and word banks. According to the authors, the dynamics of contemporary transdisciplinary specialised and highly specialised terminology would not be properly understood if both visual and written components were not taken into consideration. Similarly, the translation of audiovisual texts becomes, beyond the challenges commissioned, a resource for the translating team (translators, sound and image technicians, editors, etc.) to design and further develop specialised thematic maps, dedicated software and computer-assisted tools.

The challenge and the potential generated by the translation of multimodal texts involve the active participation of Translation Studies researchers, who set out to develop tailor-cut strategies and tools best applied to transfer the interplay of text, image and sound from the source setting into the target cultural landscape.

Concurrently, challenging tasks have been carried out within the relatively recent field of Corpus-Based Translation Studies (CTS), geared towards message intertraffic from and into different sociocultural contexts "as a mediated communicative event" (BAKER 1996: 243). Under the new "conceptual paradigm", CTS sets out to investigate and develop "complementary theoretical approaches and methodologies grown out of the cross-fertilisation with new fields of studies as varied as pragmatics, critical linguistics, postcolonialism, gender studies and globalisation", while observing "well-established areas of enquiry" such as the polysystem, skopos and poststructuralist perspectives (LAVIOSA 2004: 29). These seem to be more receptive to the stimuli conveyed by multimodal texts, although CTS experts still cannot answer all the questions regarding multimodality.

Endorsing that "multimodality cannot be described as a monolithic concept" as it "covers a wide variety of genres, forms of communication, and combinations of modes and semiotic resources", Tuominen et al. (2018: 4) subscribe to Tymoczko's (2007: 83-90) tagging of translation as a "cluster concept" and share the high applicability of this concept, since translation "cannot simply be defined in terms of necessary and sufficient features" (TYMOCZKO 2007: 85), particularly due to the diversity and pluriformity translation products manifest.


Transplanting the cluster concept from the field of Translation Studies to Corpus-Based Translation Studies, we grow aware that the mission of a corpus designer and/or researcher involves the development of a multifaceted competence to meet the requirements of a wide range of interdisciplinary tasks with overlapping responsibilities. However, such an approach would contribute to the constantly updating nature of CTS, faced with the investigation of most varied contexts, genres and multimodal messages, as well as their most appropriate transfer into the target language communicative framework. Moreover, such perspectives open the doors to adaptive investigation models to cover the broad spectrum of multimodal texts. Within this framework, contemporary corpus linguists and translation researchers plead for the design and testing of integrative analysis models that incorporate both verbal and non-verbal characteristics of multimodal messages (BAKER 1996; LAVIOSA 2004; PÉREZ GONZÁLEZ 2014; FANTINUOLI & ZANETTIN 2011; O'SULLIVAN 2013; etc.). Translation scholars and corpora researchers argue therefore that the translation of multimodal texts involves first a close investigation of the connection between the visual and written components. Adopting this procedure, further expert recommendations envisage a hybrid-type analysis that relies on internal and cross-disciplinary methods. Thus, for example, Baumgarten (2008) resorts to linguistics, visual analysis and cinematic narrative as she sets out to investigate the interconnection between visual and verbal elements in film translation. However, multimodal corpus design and investigation are still facing a series of conceptual and technological challenges with a view to the incorporation of multimodal analysis models within a corpus linguistics methodology.

3. Featuring multimodal corpora: design and investigation issues

3.1 Design and analysis issues

In terms of corpus design, multimodal texts are still raising some critical issues when it comes, for example, to multimodal comparable corpora, since corpus designers claim that it is almost impossible to align an ST automatically with a TT, given the variations that may occur between ST and TT modes.

As far as corpus analysis is concerned, CTS researchers such as Olohan (2002), Laviosa (2010), Jones (2013), Rodríguez-Inés (2017) and others highlight the technological advances in translation-oriented corpus analysis and the reliable outputs generated via computer-assisted investigation of monomodal corpora. The achievement of valid results was secured by structuring texts at different language levels (morphological, lexical, syntactic, semantic, pragmatic, etc.) or according to structural aspects of a text (paragraphs, lines, etc.). It has been argued that even though cutting-edge technology has considerably contributed to the investigation of multimodal texts, enabling the user to establish some basic relations between image and text or image and sound, it seems that even the latest generation of dedicated software may still lack options. A central issue in this respect is text partition into analysis segments, as such segments do not render, for example, the semantic annotation. Moreover, integrated research studies reported that current computer-assisted tools would need further upgrading in order to generate detailed semiotics-oriented results, particularly when a multimodal text involves the simultaneous analysis of wordings, images and sounds, i.e. the establishing of the semiotic codes before rendering them into the target language.

Also, previous investigations of multimodal corpora signaled the need for complex tagging algorithms. Although the annotation and tagging systems of monomodal corpora have become completely automatic, generating resourceful information with a click of a button, multimodal text tagging remains a challenging task to accomplish. The complexity of such processes is mainly triggered by the socio-cultural dimension of the translation competence, and respectively by the identification of the word-image-sound culture-related interconnections and their accommodation into the target language. At text level, Valdés (2008) claims that localization could be a highly effective strategy to transfer multimodal messages and semiotic codes from one cultural background into another cultural setting. The author argues that in the case of advertisements, for example, we could keep the modes, i.e. images, but the TT would be far more impacting if we replaced the ST images with images more familiar to the TT culture. Again, we grow aware that a close study of the word-image and/or word-sound interaction specific to both ST and TT socio-cultural realities secures high-quality outcomes, even if, sometimes, we needed either to cut or to add images or symbols that do not exist in the ST. However, even in such cases corpus researchers still need to perform the tagging stage manually, hence a rather cumbersome and error-prone process that does not secure coherent results.

Sharing this perspective, Hovy and Lavid (2010: 30) claim that a lack of automatic tagging systems in multimodal corpus analysis may lead to controversy due to a lower degree of accuracy and, implicitly, to less reliable theoretical frameworks. In the same spirit, Evison (2010) argues that such research approaches are applicable only to small and specialised multimodal corpora. On the other hand, some insist on this particular feature of multimodal corpora, i.e. their smaller size, to highlight the efficiency of comparison-based research methods that generated more filtered and accurate results to activate bottom-up analysis approaches towards a wider conceptual vision of the research question.

4. An integrated multimodal corpus analysis

With a view to raising our students' awareness of the constant need to develop and update a "harmonised translator's multilayered competence" (VÎLCEANU 2016: 96) to meet current translation market needs, we set out to run a task-based project work that builds up future translators' technological proficiency. In line with the 2017 EMT Competence Framework, we aim at developing our students' "knowledge and skills used to implement present and future translation technologies", guiding them to use the most relevant IT applications and to adapt rapidly to new tools and IT resources, so as to become independent and self-reliant users of search engines, corpus-based tools, text analysis tools and CAT tools that enable them to "pre-process, process and manage files and other media/sources as part of the translation, e.g. video and multimedia files, handle web technologies" (https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf).

Our project envisages the design and investigation of a small multimodal corpus, in order to test, on the one hand, students' degree of involvement and, on the other, the degree of applicability of dedicated analysis software to small multimodal corpora.

The project implementation was achieved via MAXQDA 2020. As raw research resources, we decided on a 2016 VPRO documentary - in English, with Romanian subtitling - available at https://www.youtube.com/watch?v=69JXP4tnBMo. After importing both the source language video (EN) and the target language transcript (RO) into the software (see Figure 1 below), students were asked to segment and encode both the video and the transcript, observing the research parameters illustrated in Table 1 below.


The features provided by the software enabled students to establish, for each of the parameters set, a degree of synchronization between the source language video and the target language transcript by analysing the translation strategy/strategies applied.

To achieve their objectives, students first had to segment the video and the transcript by selecting the Code System feature. At this project stage, students were able to allocate a colour-based code for each of the parameters set, and hence to segment the multimodal corpus almost automatically. Thus, a specific colour was assigned to each research parameter: localisation, exoticisation, neutralisation. Furthermore, the Code System feature also assisted users in tagging, by adding a Code memo for each segment established (see Figure 2 and Figure 3 below).
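Purely by way of illustration, the following Python sketch shows one possible way of representing such colour-coded, memo-tagged segments outside the tool, for instance when exporting the coding for further processing. It is a minimal sketch under stated assumptions: the data structure, colour mapping and example values are hypothetical and do not reproduce MAXQDA's internal data model or API.

from dataclasses import dataclass
from collections import Counter

# Hypothetical colour mapping for the three research parameters
# (the actual colours chosen in MAXQDA may have differed).
CODE_COLOURS = {
    "localisation": "green",
    "exoticisation": "red",
    "neutralisation": "blue",
}

@dataclass
class CodedSegment:
    """A time-aligned segment of the video/transcript pair with its code and memo."""
    start: float        # start time in seconds (invented values below)
    end: float          # end time in seconds
    transcript_ro: str  # target-language (RO) transcript portion
    code: str           # one of the research parameters above
    memo: str = ""      # free-text note, mirroring MAXQDA's Code memo

    @property
    def colour(self) -> str:
        return CODE_COLOURS[self.code]

# Invented example segments
segments = [
    CodedSegment(12.0, 17.5, "...", "localisation", "lexical adaptation for the RO audience"),
    CodedSegment(17.5, 22.0, "...", "exoticisation", "source-culture term kept as such"),
]

# Simple distribution of codes, of the kind reported in the results below
distribution = Counter(s.code for s in segments)
total = sum(distribution.values())
for code, count in distribution.items():
    print(f"{code}: {count / total:.0%}")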


After the segmenting and encoding stage, the students could select one of the Visual Tools features to automatically generate a Document Portrait, a Code Relation Map or a Document Comparison Chart, thus visualizing the multimodal synchronization degrees as related to the parameters set (see Figure 4 below). It is worth mentioning that the software enables users to work simultaneously on the same project from different devices via the Teamwork feature. Thus, each project member can work on a part of the project, segmenting, encoding and tagging, and keep up real-time communication with the other team members.


Interpreting the results obtained, the students could establish that the highest degree of correlation between the source language multimodal corpus and the target corpus is achieved at lexical level, most frequently via localisation, while facial expressions and gestures are synchronized with the transcript text. Syntactically, correlation was achieved by means of neutralisation, while synchronization was obtained in terms of image sequence vs. subtitling. In terms of cultural accommodation, no replacement, cut or image addition was encountered. However, the cultural inputs were transferred at language level, via localisation (54%) and exoticisation (46%).

In terms of students' active involvement in the research project, we could record higher interest and cooperation compared to more classical training methods. Beyond the novelty of the research project, we highlight that students' interest increased almost proportionally as the project progressed, motivated by the user-friendly interface and the factual data generated by the software.

5. Conclusion

Profiled as an intrinsically versatile field of research, Corpus-Based Translation Studies brings into the spotlight contemporary design and development needs for cross-functional methods and tools to secure "the growing range of language services" (EMT COMPETENCE FRAMEWORK 2017: 4) and to comply with contemporary technological and societal upgrading trends.



Moreover, the integrative approach fostered by CTS towards niche fields of research underpins a solid translator education and training to meet contemporary individual, societal or institutional requirements, enhancing at the same time the nature of translation as a target of research oriented towards innovative and ambitious goals.

The theoretical framework and the development of our trial research project focused on the translation-oriented particularities of multimodal texts as revealed by corpus-based analysis. Although corpus-based investigations of multimodal texts and, subsequently, of comparable multimodal corpora still require further technological developments in terms of mode alignment, segmentation and mode tagging, we share the perspective that corpus-based investigations have grown into reliable research strategies for sustainable multimodal translation research.

References

BAKER, M. (1996) "Corpus-based translation studies: the challenges that lie ahead", in SOMERS, H. (ed.). Terminology, LSP and translation studies in language engineering: in honour of Juan C. Sager. Amsterdam and Philadelphia: John Benjamins, pp. 175-186.

BASSNETT, S. (1997) "Text types and power relations", in A. TROSBORG (ed.). Text Typology and Translation. Amsterdam/Philadelphia: John Benjamins Publishing.

BÜHLER, K. (1934/1991) Theory of Language: The Representational Function of Language. Amster-dam: John Benjamins Publishing.

EMT COMPETENCE FRAMEWORK - 2017: https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf, [accessed Jan 16 2020].

EVISON, J. (2010) "What are basics of analyzing a corpus?", in A. O'KEEFFE & M. MCCARTHY (eds.), The Routledge handbook of corpus linguistics. London: Routledge, pp. 122-135.

FANTINUOLI, C. & F. ZANETTIN (2015) New directions in corpus-based translation studies. Berlin: Langua-ge Science Press.

GIBBONS, A. (2012) Multimodality, cognition and experimental literature, New York, NY: Routle-dge.

HOVY, E. & J. LAVID (2010) “Towards a ‘science’ of corpus annotation: A new methodological chal-lenge for corpus linguistics”, in International Journal of Translation, 22(1), pp. 13–36.

JAKOBSON, R. (1959/1966) "On linguistic aspects of translation", in R. A. BROWER (ed.). On Translation. New York: Oxford University Press, pp. 232-239.



JONES, R. H. (2013) “Multimodal Discourse Analysis”, in C. A. CHAPELLE (ed.). Wiley Encyclopedia of Applied Linguistics. London/New York: Wiley Blackwell.

LAVIOSA, S. (2004) “Corpus-based translation studies: Where does it come from? Where is it going?”, in Language Matters 35 (1), pp. 6-27.

LAVIOSA, S. (2010) “Corpora.”, in Y. GAMBIER & L. VAN DOORSLAER (eds.). Handbook of Translation Studies, pp. 80-86.

LÓPEZ RODRÍGUEZ, C.I., VELASCO, J.A. & M. TERCEDOR SÁNCHEZ (2013) “Multimodal representation of specialised knowledge in ontology-based terminological databases: the case of EcoLexicon”, in The Journal of Specialised Translation, issue 20, pp. 49-67,https://www.researchgate.net/publication/256413834 [accessed Jan 26 2020].

LUTKEWITTE, C. (2020). Writing in a Technological World. New York: Routledge.

MASSARO, D.W. (1987) Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Hillsda-le, New Jersey: Lawrence Erlbaum.

MAXQDA Data Analysis Software: https://www.maxqda.com/what-is-maxqda

MAYBIN, J. & J. SWAN (2010) The Routledge Companion to English Language Studies. London. New York: Routledge.

MUNDAY, J. (2001) Introducing Translation Studies: Theories and Applications. Oxon: Routledge.

NEWMARK, P. (1981) A Textbook of Translation. New York. London: Prentice Hall.

NORD, C. (2012) “Quo vadis, functional translatology?”. In Target, 24(1), pp. 26- 42.

O’SULLIVAN, C, (2013) “Introduction: Multimodality as challenge and resource for Translation”, in The Journal of Specialised Translation, Issue 20, pp.2-14.

OLOHAN, M. (2002) “Corpus Linguistics and Translation Studies: Interaction and Reaction” in Lin-guistica Antverpiensia, pp. 419-429.

PÉREZ GONZÁLEZ, L. (2014) “Multimodality in Translation and Interpreting Studies”, in S. BERMANN & C. PORTER (eds.). A Companion to Translation Studies. Chichester: Wiley-Blackwell, 119-131.

REISS, K. (1971/2000) “Type, kind and individuality of text: decision making in translation”, in L. VENUTI (ed.). The Translation Studies Reader. London. New York: Routledge, pp.160-171.


REISS, K., VERMEER H. (1984) Groundwork for a General Theory of Translation. Tubingen: Niemeyer.

RODRÍGUEZ-INÉS, P. (2017) "Corpus-based insights into cognition", in SCHWIETER, J. W. & A. FERREIRA (eds.). The handbook of translation and cognition. Hoboken, NJ: John Wiley & Sons, pp. 265-289.

SNELL-HORNBY, M. (2006) The Turns of Translation Studies: New Paradigms Or Shifting Viewpoints?. Amsterdam/Philadelphia: John Benjamins Publishing.

SNELL-HORNBY, M. (2009) “What’s in a turn? On fits, starts and writhings in recent translation studies”, in Translation Studies, 2(1), pp. 41-51.

TUOMINEN, T., JIMÉNEZ HURTADO, C., KETOLA, A. (2018) "Why methods matter: Approaching multimodality in translation research", in Linguistica Antverpiensia, New Series: Themes in Translation Studies, 17, pp. 1-21.

TYMOCZKO, M. (2007) Enlarging translation, empowering translators. Manchester: St. Jerome.

VALDÉS, C. (2008) "The localization of Promotional Discourse on the Internet", in CHIARO, D., HEISS, C. & C. BUCARIA (eds.). Between Text and Image. Updating Research in Screen Translation. Amsterdam/Philadelphia: John Benjamins, pp. 227-240.

VENTOLA, E., CHARLES, C., KÄLTENBACHER, M. (eds) (2004) Perspectives on Multimodality. Amster-dam: John Benjamins Publishing.

VERMEER, H. J. (1989/2000) “Skopos and Commission in Translational Action”, in L. VENUTI (ed.). The Translation Studies Reader. London. New York: Routledge, pp. 221–32.

VÎLCEANU, T. (2003) Translation. The Land of the Bilingual. Craiova: Universitaria.

VÎLCEANU, T. (2016) "Evaluating Online Resources for Terminology Management in Legal Translation", in DEJICA, D., HANSEN, G., SANDRINI, P. & I. PARA (eds.), Language in the Digital Era. Challenges and Perspectives. Warsaw: De Gruyter Open, pp. 96-108. https://doi.org/10.1515/9783110472059-010.

ZANETTIN, F. (2011) "Comics", in M. BAKER & G. SALDANHA (eds.). Routledge Encyclopedia of Translation Studies (2nd ed.). London/New York: Routledge, pp. 37-40.


Identifying parameters for creating Easy to Read subtitles1

ROCÍO BERNABÉ – ÓSCAR GARCÍA

Internationale Hochschule SDI München - [email protected]
Inclusión Madrid - [email protected]

Abstract

Access services that provide audiences with cognitively accessible audiovisual content are less studied than those which target sensory barriers (e.g., intralingual subtitles, audio descriptions). One factor that limits said development is the lack of evidence-based parameters for production. This exploratory study aims to establish parameters for Easy to Read subtitles by comparing the Easy to Read (E2R) guidelines by Inclusion Europe and the Spanish standard for subtitling for the Deaf and Hard-of-Hearing (SDH). The comparison yielded a set of 16 parameters for production that are mentioned in both guidelines as well as 3 parameters that emerged from the E2R guidelines.

Keywords: cognitive accessibility, easy access services, easy-to-read audiovisual content, Easy to Read subtitles.

1 The authors would like to thank Pilar Orero. This work has been carried out within the framework of the Doctoral Programme of the Autonomous University of Barcelona.

1. Introduction

Access services such as audio descriptions and intralingual subtitles provide accessible audiovisual content to audiences with sensory disabilities (ORERO 2004; MATAMALA and ORERO 2013; GRECO 2016). The applied branch of Translation Studies demands that translators use evidence-based tools for creation (RABADÁN 2010; TOURY 1995, 2012). An example of such, taken from the field of subtitling for the Deaf and Hard-of-Hearing (SDH), are the guidelines proposed by NEVES (2005) in her descriptive research. The parameters studied with eye-tracking technology by ARNÁIZ-UZQUIZA (2012b) also fall into this category, as do the quantitative and qualitative data about viewers' preferences provided by ROMERO-FRESCO (2015) in the volume dedicated to the quality of subtitles. Lastly, another evidence-based tool is the Spanish Standard for SDH, UNE 153010:2012 (AENOR 2012). The lack of empirically-based tools for producing Easy to Read subtitles requires that translators resort to experience-based ones such as the guidelines published by Inclusion Europe in 2009. Entitled Information for All, the guidelines are an output of the European project Pathways, which aimed to foster life-long learning for people with intellectual disabilities. The resulting European guidelines are in English with translations into 15 other languages, and are available at https://easy-to-read.eu/european-standards/. This exploratory research draws
upon the proposal by BERNABÉ and ORERO (2019) that 'easy'2 access services can be developed by merging guidelines from the world of Audiovisual Translation and Easy to Read. The aim is to describe to what extent Easy to Read and SDH parameters overlap and can interbreed. Though such "marriage(s) of convenience" (MATAMALA and ORERO 2013: 1) already exist, there are also constraints, as identified by scholars in the fields of interlingual and intralingual real-time subtitles (DÍAZ CINTAS and REMAEL 2007; EUGENI 2008; ROMERO-FRESCO 2009; SZARKOWSKA 2013). The authors point out that each modality needs its own applied parameters to be able to satisfy the needs of a specific targeted audience within specific contexts. The next section describes the compared documents and the methodology followed.

2 Derived from the use of Easy to Read.

2 Methodology

The structure set out in this study is based on a two-stage workflow to produce E2R subtitles proposed by the authors. The first stage focuses on creation by using parameters that consider end-users' needs, while the second focuses on validation by involving end-users, as recommended by scholars and current professional practice in E2R (SHARDLOW 2014; SAGGION 2017; PLENA INCLUSIÓN MADRID 2018; INCLUSION EUROPE 2009; IFLA 2010). The figure below illustrates the two stages.

This study focused on the first stage and, more specifically, on the identification of subtitling parameters. The next sections provide an overview of the identified parameters as well as recommendations from the comparison of the Easy to Read guidelines Information for All and the Spanish standard for Subtitling for the Deaf and Hard-of-Hearing.



The order of comparison followed the classification used in the Spanish SDH standard (UNE 153010:2012):

• visual (section 4)
• temporal (section 5)
• speaker identification (section 6)
• sound effects (section 7)
• contextual information and off-screen voice (section 8)
• music and songs (section 9)
• editorial criteria (section 10)

For each section, comparison data revealed parameters and recommendations that were:
• shared, which expressed shared recommendations by E2R and SDH,
• non-shared, which brought to light contrary recommendations,
• only E2R, which derive from E2R and are not included in the SDH standard, and
• only SDH, which lack a corresponding E2R recommendation.
This four-way classification is illustrated in the sketch below.
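Purely as an illustration, the 'shared', 'only E2R' and 'only SDH' categories amount to simple set operations over the two parameter inventories, whereas the 'non-shared' category additionally requires comparing the content of the recommendations for parameters present in both documents, not just their presence. The short Python sketch below uses an invented, simplified subset of parameter names drawn from the sections that follow; it is not part of either guideline.

# Illustrative sketch only: the sets below contain a simplified subset of parameter
# names taken from the sections that follow; they are not the full inventories of
# the UNE 153010:2012 standard or the Inclusion Europe guidelines.
sdh_parameters = {"on-screen placement", "number of lines", "font size", "characters per line"}
e2r_parameters = {"on-screen placement", "number of lines", "font size", "text alignment"}

shared = sdh_parameters & e2r_parameters    # mentioned in both documents
only_sdh = sdh_parameters - e2r_parameters  # lack a corresponding E2R recommendation
only_e2r = e2r_parameters - sdh_parameters  # derive from E2R only

print(sorted(shared))
print(sorted(only_sdh))
print(sorted(only_e2r))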

3 Results

The comparison yielded a total of 53 parameters: 16 were found in both documents, while 34 were exclusively in the SDH standard and only 3 in the Easy to Read guidelines. Table 1 provides an overview of the distribution.

The overview shows that 18 parameters are found in both documents, which accounts for a 34.5% overlap. However, a closer look reveals that only 20% of the recommendations are shared. The following sections present the results ordered by section.



3.1 Visual presentation

Section 4 of the SDH standard includes parameters regarding how subtitles should be presented visually on-screen. A total of 13 parameters were identified: 10 from the SDH standard and 3 from the E2R guidelines. Out of the 10 parameters from the SDH standard, 6 are also found in the E2R guidelines.

Table 2: Comparison of visual aspects



The six parameters found in both documents share recommendations regarding on-screen placement, number of subtitle lines, minimum and maximum font size, font type, and contrast. Concerning on-screen placement, recommendations agree on a lower-bottom position, which should be maintained throughout the show. The SDH standard specifically advises to use a centred position and to change it only if a subtitle line is covering relevant information. As for the number of subtitle lines, E2R advises not to use too many layers of subtitles, while SDH sets a limit of 2 lines, or a maximum of 3 to be used in exceptional cases.

With regard to font type and contrast, the reviewed documents recommend fonts that support legibility. E2R recommendations are specific and warn about the use of special designs, different font types and serif or condensed fonts. Both sets of recommendations also agree on the need for good contrast. While SDH refers to the 4.5 minimum as recommended by the WCAG guidelines (W3C 2016), E2R provides guidelines for implementation.

With regard to font size, the recommendations agree that subtitles should adapt to the size of the screen. However, a closer look shows that E2R recommends using a large font of at least Arial 14 and larger than usual writing in movie subtitles. The fulfilment of this requirement may contradict the abovementioned recommendation of avoiding many layers of subtitles.

The four parameters classified as Only SDH were: (a) on-screen placement of sound information; (b) static lines; (c) line per speaker; and (d) characters per line. The absence of E2R recommendations may be explained by the fact that E2R guidelines have been less studied in audiovisual contexts, as already mentioned above.


Lastly, the comparison brought to light three parameters deriving from E2R: (a) ‘Customisation’, (b) ‘Text alignment’ and (c) ‘Sentences per line’. As for the first, E2R recommendations outline the need for personalisation of the service and ask for customisable subtitles that can be turned off/on at any time during viewing.

The parameter ‘Text alignment’ calls for left alignment to support readability and advises never to justify text. While this recommendation is not included in the Spanish standard, empirical data collected by ARNÁIZ-UZQUIZA (2012b) showed that the reading speed of all groups, and especially of SDH participants, was greater with left-aligned texts than with centred texts. Lastly, the E2R recommendation ‘New line per sentence’ is partially shared with other SDH recommendations concerning how to present utterances from dialogues (KARAMITROGLOU 1998; BBC 2019).

3.2 Presentation of subtitles: temporal aspects

Section 5 of the SDH standard includes three parameters pertaining to the temporal display of subtitles: on-screen time of subtitles, synchrony, and latency in the case of real-time subtitling. The comparison yielded 2 parameters found in both documents, 1 new parameter deriving from E2R, and 1 mentioned exclusively in the SDH standard. Table 3 shows the results.

Temporal aspects are closely related to how a person reads and how he or she performs in terms of comprehension. SDH research in this field is extensive and has produced evidence-based rules such as the use of 35-37 characters per line and on-screen times from 1 to 6 seconds (DÍAZ CINTAS 2003, ROMERO-FRESCO 2010, ARNÁIZ-UZQUIZA 2012a). Comparative data show that the E2R recommendations are vague in this regard, which points to a lack of knowledge about how persons with reading difficulties read subtitles and how they perform in terms of comprehension. SHANAHAN (2019: 1) explains that the study of habits and skills in struggling readers should take into consideration key factors beyond speed rates, such as the ability “to decode easily and continuously and to maintain their concentration” during reading.
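To make the relationship between subtitle length, reading speed and on-screen time concrete, the following sketch derives a display duration from a character count and an assumed reading rate, clamped to the 1-6 second window cited above. The reading rates used are illustrative assumptions, not figures taken from the reviewed documents.

```python
# Minimal sketch: derive a subtitle's on-screen time from its length and an
# assumed reading speed, clamped to the 1-6 second window cited above.
def display_time(text: str, chars_per_second: float,
                 min_s: float = 1.0, max_s: float = 6.0) -> float:
    raw = len(text) / chars_per_second
    return max(min_s, min(max_s, raw))

line = "The train to Madrid leaves at nine."      # 35 characters, within 35-37 CPL
print(f"{display_time(line, 15):.1f} s")          # ~2.3 s at an assumed standard rate
print(f"{display_time(line, 8):.1f} s")           # ~4.4 s at a slower, assumed E2R rate
```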



Lastly, recommendations seem to disagree with regard to synchrony. While SDH advocates synchrony with the spoken word, E2R advises that subtitles should be on screen as long as possible, which could affect synchrony and rhythm as defined in SDH.

3.3 Parameters for speaker identification

Section 6 of the standard includes nine parameters regarding how to identify speakers on and off the screen. The comparison did not yield any parameters from E2R. While three parameters were found to overlap, only one recommendation was shared.


The comparison shows that both documents agree on the need for a parameter to signal a voice speaking in the background (ID 25: Off-screen voice). However, the underlying motivations differ. While SDH recommendations focus on providing a visual mark for an off-screen voice, E2R focuses on providing viewers with information about what a background voice is and what type of information a background voice should provide.

Lastly, SDH recommendations in this section advise using colours and abbreviations for identification purposes, neither of which is recommended in the E2R guidelines used in this study. However, validation practice in E2R has shown that the use of colours in headings and subheadings supports E2R readers (REAL PATRONATO SOBRE LA DISCAPACIDAD 2015). The Spanish standard on Easy to Read (UNE 153101 EX) also supports this view and, in section 7.1, includes the use of colour as a technique to visually separate headings from the content.

3.4 Sound effects

Section 7 of the SDH standard lists seven parameters pertaining to the description of sound effects in subtitles. The comparison did not yield parameters arising from the E2R guidelines. Table 5 shows that the E2R guidelines do not consider such parameters and that only general recommendations may be linked to them.


These results open up a discussion about whether rendering this type of information is necessary: whether it supports understanding or, on the contrary, leads to overload. The only references found in the E2R guidelines are general and warn about the risks of providing too much or too little information: “Do not give people more information than they need to understand your point”, “Always make sure you give people all the information they need”, and “Only give them the important information” (INCLUSION EUROPE 2009: 17).

3.5 Contextual information and off-screen voice

Section 8 of the SDH standard includes six parameters. Contextual information is provided in SDH subtitles in order to render non-verbal elements conveying linguistic and paralinguistic information. Non-verbal linguistic information is part of the linguistic information communicated in a situation and includes, for instance, pitch, accent, and intonation. In turn, non-verbal paralinguistic information refers, for instance, to speakers’ attitudes and emotions (LLISTERRI 2019).

The comparison shows that the E2R guidelines do not consider these parameters. As in the previous section, only general recommendations apply.


The E2R guidelines do not mention parameters to convey contextual information. Only general E2R recommendations seem to apply, which outline the need to explore what information needs to be made explicit, when, and how. The use of capital letters and italics (ID 35 and 38) is not shared by the E2R recommendations, which warn specifically about their use.

3.6. Music and songs

Section 9 of the SDH standard lists five parameters regarding how to subtitle music and songs. As in sections 7 and 8, Sound effects and Contextual information, no parameters were found in the E2R guidelines.


As in the previous sections, only general E2R recommendations may apply. In addition, in this section the recommended use of special characters to tag songs (ID 43) goes against the E2R guidelines, which warn about the use of special characters.

3.7. Editorial criteria

Section 10 of the SDH standard covers ten parameters concerning language usage, grammar, punctuation, and style guidelines. The comparison shows that seven parameters overlap, but that recommendations are not always shared.


The reviewed documents share a high number of editorial parameters. With regard to E2R, its recommendations are specific enough for creation and validation. Furthermore, the comparison identified a lack of parameters and recommendations for real-time contexts.
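By way of illustration, editorial recommendations of this kind lend themselves to simple automated checks. The toy checker below flags a few of the points discussed in this study (italics markup, special characters, long all-caps words); the exact rule set and patterns are the editor’s assumptions, not the E2R or SDH specification.

```python
import re

# Toy checker for a few editorial points discussed above; the rule set is
# only an illustrative assumption, not the E2R or SDH specification.
RULES = {
    "italics markup":     re.compile(r"<i>|</i>"),
    "special characters": re.compile(r"[#&%~§]"),
    "all-caps word":      re.compile(r"\b[A-Z]{4,}\b"),
}

def check_subtitle(text: str):
    return [name for name, pattern in RULES.items() if pattern.search(text)]

print(check_subtitle("He said <i>NEVER</i> again & left"))
# ['italics markup', 'special characters', 'all-caps word']
```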

3.8 Parameters for Easy to Read subtitles

The table presents the 16 shared parameters and the 3 from the E2R guidelines. Only the E2R recommendations have been included. This table is for informative purposes only.


4. Conclusions

The comparison has shown that the reviewed documents refer to similar parameters with regard to visual and temporal aspects, editorial criteria, and speaker identification. The classification of the parameters also brought to light that the E2R guidelines report less on how to convey music, sound, and contextual information. In addition, specific E2R parameters were found.

Overall, the outcome supports the initial statement that access services can benefit from knowledge from related services but will still inevitably retain their own characteristics. This is especially evident when comparing specific recommendations. While the reviewed guidelines often agree about the type of parameter, the exact recommendations within differ so as to meet the needs of the targeted audience, in this case, persons with reading and learning difficulties.

The study has also highlighted the need for further research in order to clarify several remaining problem areas. One of these is, for instance, to what extent the need for bigger fonts may lead to more than two subtitling lines. Regarding sound, music and contextual information, it would be useful to study how redundant information is received by E2R audiences, who usually perceive information iso-semiotically, that is, through the same channels as the original. With regard to synchrony with images, there is a need to understand to what extent the E2R recommendation “Subtitles should be on the screen as long as possible” differs from current subtitling practices. Another unresolved question concerns reading speeds. Additionally, E2R editorial recommendations for written documents, such as avoiding italics, writing out numbers and dates, and avoiding special characters and colours, should be explicitly studied for subtitles.

Lastly, the E2R recommendation to “Always make sure you give people all the information they need” raises the question as to whether E2R subtitles and access services should have a more informative function. One example is the recommendation […] “to present the background voice before they start talking on the background”.

References

AENOR (2012) “Norma UNE 153010: Subtitulado para personas sordas y personas con discapacidad auditiva. Subtitulado a través del teletexto”, Madrid: AENOR.

AENOR (2018) “Norma UNE 153101:2018 EX. Easy to read. Guidelines and recommendations for the elaboration of documents”, Madrid: AENOR.

Arnáiz-Uzquiza, V. (2012a) “Los parámetros que identifican el Subtitulado para Sordos. Análisis y clasificación” [A taxonomy on the parameters of subtitles for the Deaf and Hard-of-Hearing. Analysis and classification], in Multidisciplinarietat en traducció audiovisual, Agost Canós, R., Orero, P. & E. Di Giovanni (eds.), Alacant: Universitat d’Alacant, pp. 103-132.

Arnáiz-Uzquiza, V. (2012b) Subtitling for the Deaf and Hard-of-Hearing: some parameters and their evaluation, PhD diss., Universidad Autónoma de Barcelona, Spain.

BBC (2019) “Subtitling guidelines”, available at: http://bbc.github.io/subtitle-guidelines/ [last access: 12/02/2020].

Bernabé, R. (2020) “New taxonomy of easy-to-understand access services”, in Traducción y Accesibilidad en los medios de comunicación: de la teoría a la práctica, MonTI 12, Richart-Marset, M. & F. Calamita (eds.).

Bernabé, R. & P. Orero (2019) “Easy to Read as Multimode Accessibility Service”, Hermeneus 21, available at: https://revistas.uva.es/index.php/hermeneus/issue/view/236 [last access: 12/02/2020].

Díaz Cintas, J. & A. Remael (2007) Audiovisual translation: subtitling, London and New York: Routledge.

Eugeni, C. (2008) Les sous-titrages en direct. Aspects théoriques, professionnels et didactiques, Macerata: EUM.

Greco, G. M. (2016) “On Accessibility as a human right, with an application to media accessibility”, in Matamala, A. & P. Orero (eds.) Researching audio description. New approaches, London: Palgrave Macmillan, pp. 11-33.

IFLA (2010) “Guidelines for easy-to-read materials”, available at: https://www.ifla.org [last access: 12/02/2020].

Inclusion Europe (2009) “Information for All. European standards for making information easy to read and understand”, available at: http://sid.usal.es/libros/discapacidad/23131/8-4-1/information-for-all-european-standards-for-making-information-easy-to-read-and-understand.aspx [last access: 12/02/2020].

Karamitroglou, F. (1998) “A Proposed Set of Subtitling Standards in Europe”, Translation Journal 2(2), available at: https://translationjournal.net/journal/04stndrd.htm [last access: 12/02/2020].

Llisterri, J. (2019) “L’aspecte comunicatiu del llenguatge”, available at: http://liceu.uab.es/~joaquim/general_linguistics/gen_ling/estructura_linguistica/sistema_signes/Aspecte_comunicatiu_llenguatge.html [last access: 12/02/2020].

Matamala, A. & P. Orero (2013) “When modalities merge”, Perspectives 21(1), available at: https://doi.org/10.1080/0907676X.2012.722656 [last access: 12/02/2020].

Neves, J. (2005) Audiovisual Translation: Subtitling for the Deaf and Hard of Hearing, unpublished PhD thesis, London: University of Surrey Roehampton, available at: https://www.academia.edu/1589609/Audiovisual_translation_Subtitling_for_the_deaf_and_hardof-hearing [last access: 12/02/2020].

Orero, P. (2004) “Audiovisual translation: A new dynamic umbrella”, in Topics in audiovisual translation, Orero, P. (ed.), Amsterdam: John Benjamins, pp. 53-60.

Plena Inclusión Madrid (2018) “Validación de textos en lectura fácil: aspectos prácticos y socio-laborales”, available at: https://plenainclusionmadrid.org/recursos/validacion-de-textos-en-lectura-facil-aspectos-practicos-y-sociolaborales-2/ [last access: 12/02/2020].

Rabadán, R. (2010) “Applied Translation Studies”, in Handbook of Translation Studies, Gambier, Y. & L. van Doorslaer (eds.), Amsterdam/Philadelphia: John Benjamins Publishing Company, volume 1, pp. 12-17.

Real Patronato sobre Discapacidad (2015) “Constitución española en lectura fácil publicada en 2015”, available at: http://hdl.handle.net/11181/5589 [last access: 12/02/2020].

Romero-Fresco, P. (2009) “More Haste Less Speed: Edited vs. Verbatim Respeaking”, VIAL: Vigo International Journal of Applied Linguistics 06, pp. 109-133, available at: http://vialjournal.webs.uvigo.es/pdf/Vial-2009-Article6.pdf [last access: 12/02/2020].

Romero-Fresco, P. (ed.) (2015) The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, Bern: Peter Lang.

Saggion, H. (2017) Automatic Text Simplification (Synthesis Lectures on Human Language Technologies), Morgan & Claypool.

Shanahan, T. (2019) “How important is reading rate?”, available at: https://www.readingrockets.org/blogs/shanahan-literacy/how-important-reading-rate [last access: 12/02/2020].

Shardlow, M. (2014) “A Survey of Automated Text Simplification”, International Journal of Advanced Computer Science and Applications, Special Issue on Natural Language Processing, pp. 58-70, available at: https://core.ac.uk/download/pdf/25778973.pdf [last access: 04/01/2020].

Szarkowska, A. (2013) “Towards interlingual subtitling for the deaf and the hard of hearing”, Perspectives: Studies in Translatology 21(1), pp. 68-81.

Toury, G. (1995) Descriptive translation studies and beyond, Amsterdam/Philadelphia: John Benjamins Publishing Company.

Toury, G. (2012) Descriptive translation studies and beyond, 2nd edn., Amsterdam/Philadelphia: John Benjamins Publishing Company.

W3C (2016) “Contrast (Minimum): Understanding SC 1.4.3”, available at: https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html [last access: 12/02/2020].


La traduction audiovisuelle adaptée aux vidéos du web : sous-titrage vs voice-over

FLORIAN LEONI
IRSTL – Université de Mons - [email protected]

Abstract

Afin de traduire leurs vidéos, les plateformes web telles que YouTube ont opté pour le sous-titrage. Cela étant, les recherches menées jusqu’ici sur cette méthode nous permettent de douter de son efficacité sur internet. Or, d’autres techniques, le voice-over notamment, pourraient s’avérer plus adaptées à ce type de contenu. Pour le vérifier, nous avons mené une expérience auprès de 12 participants, jeunes et habitués aux vidéos du web. À l’aide de l’eye-tracking, nous avons observé le comportement de leur regard. Cela nous a permis, entre autres, de mettre en avant une certaine perte d’informations liée à la multimodalité et au voyage abondant du regard que cela suppose.

Keywords: traduction audiovisuelle, TAV, internet, YouTube, vidéos du web, sous-titrage, voice-over, eye-tracking, confort visuel

1. Introduction

Depuis l’apparition de YouTube ou de son concurrent français Dailymotion en 2005, les vidéos du web n’ont jamais cessé de gagner en popularité. Réseaux de niche à leurs origines, les plateformes de partage vidéo sont rapidement devenues une nouvelle manière de consommer du contenu multimédia. Les clips publiés sur ces sites sont de plus en plus nombreux et de plus en plus variés. Devant cette popularité croissante, la question de la traduction du contenu a fini par se poser. YouTube et ses concurrents ont d’abord opté pour un système de sous-titrage automatique, avec tous les problèmes que cela peut présenter (découpe hasardeuse, non-respect du nombre de caractères, mauvaise retranscription, etc.), avant de proposer également aux utilisateurs de publier leurs propres sous-titres. Si ces traductions permettent aux internautes les plus habitués aux langues étrangères (notamment l’anglais) de comprendre les vidéos, elles peuvent poser plusieurs difficultés à tous les autres utilisateurs d’internet. La littérature scientifique nous apprend en effet que plusieurs facteurs peuvent influencer la lecture des sous-titres. Une étude menée par le CERM, Centre d’études et de recherches multimédia de l’Université de Mons (Belgique), a permis de montrer que la moindre variation de style (changement de la police, de la couleur, de la position, etc.) peut avoir des conséquences sur la lecture du texte (HAMAOUI, HANNACHI, BAUWENS, DODERO, & OUCHEN 2015). Cela étant, plusieurs autres études ont dévoilé que cette influence ne dérangeait pas de manière significative le spectateur (KÜNZLI & EHRENSBERGER-DOW 2011; BISSON, HEUVEN, CONKLIN & TUNNEY 2014).


On peut alors supposer que d’autres facteurs viennent perturber cette lecture. À ce sujet, Carmen Muñoz (2017) évalue notamment les conséquences de l’âge et de la maîtrise des langues étrangères. Pour ce faire, l’auteure a utilisé l’eye-tracking afin d’enregistrer le visionnage de deux vidéos sous-titrées (l’une en anglais, l’autre en espagnol) auprès de trois catégories d’âge distinctes (enfants, adultes, adolescents). Il en ressort que les enfants passent plus de temps sur un sous-titre que les catégories plus âgées. L’une des raisons est que les enfants ont une vitesse de lecture moins élevée que les adultes. Par conséquent, ils ont besoin de s’attarder plus longtemps sur les sous-titres pour les lire. De manière plus générale, ce temps de lecture varie d’une personne à l’autre, mais aussi en fonction du contexte. On pourrait donc imaginer que la rapidité caractéristique du web participerait à augmenter la vitesse de lecture moyenne d’un spectateur. Cette dernière pourrait donc être prise en compte dans notre recherche. En outre, l’étude de Muñoz (2017) nous permet également de voir que l’identité du public cible a son importance. Ce constat est d’ailleurs corroboré par une recherche d’envergure réalisée à la demande de la Commission européenne, et visant à évaluer les habitudes de consommation des Européens en matière de traduction audiovisuelle. Cette étude met notamment en avant le fait que ces habitudes varient en fonction du pays (donc, de la langue), mais aussi de l’âge du public (SAFAR ET AL. 2011).

En ce qui concerne les vidéos du web, la cible serait constituée des milliers de personnes qui utilisent YouTube chaque jour. Or, ces personnes ont des identités particulièrement diverses. C’est ce qu’a révélé une étude menée en 2013 par l’Ipsos à la demande de Google, maison mère de YouTube. Cette recherche défend l’idée que le public de YouTube constitue une génération à part entière, appelée la « Gen C », qui ne se définit pas par son âge, mais par sa présence sur le web (Introducing Gen C: The YouTube Generation, 2013). Les caractéristiques de cette génération vont nous intéresser pour deux raisons. D’abord, parce que la variété constitutive de ce public renforce nos suppositions selon lesquelles une partie de ces spectateurs risque de rencontrer des difficultés avec le sous-titrage. Ensuite, et surtout, parce que cette génération développe ses propres habitudes de consommation. L’étude nous montre notamment que ces utilisateurs passent plus de temps sur YouTube que devant la télévision. De ce fait, la « Gen C » n’a sans doute pas encore de goûts déterminés en matière de traduction audiovisuelle. Cela nous permet d’envisager la possibilité de proposer une forme de traduction alternative au sous-titrage. Parmi les méthodes de TAV les plus courantes, le voice-over est celle qui présente le plus d’avantages en commun avec le sous-titrage. En effet, ces deux techniques sont particulièrement rapides et peu couteuses.

Avant de comparer ces deux méthodes pour savoir laquelle est la plus intéressante pour le spectateur, il convient d’identifier les deux grands problèmes que risque de poser le sous-titrage. D’une part, il y a la question de la qualité. Les sous-titres sur YouTube sont souvent rédigés par des amateurs, soit des consommateurs, soit les réalisateurs eux-mêmes. On parle alors de fansubs. Il arrive également, comme mentionné précédemment, que les sous-titres soient générés automatiquement par la plateforme. De manière assez logique, on peut mettre en doute la qualité de ces sous-titres qui sont réalisés sans respecter les contraintes spatio-temporelles habituelles. À ce sujet, David Orrego Carmona (2016) note que le fansub n’est pas synonyme d’illisibilité ni de mauvaise qualité.


L’auteur conclut en fait que les jeunes générations sont de plus en plus habituées à ce type de sous-titrage, et que les communautés qui les produisent le font de mieux en mieux (ORREGO CARMONA 2016). Cela pourrait expliquer la raison pour laquelle le monde professionnel ne s’est jamais vraiment intéressé au sous-titrage sur internet. Pourtant, dans le cadre très particulier des vidéos du web, il reste un détail qui n’a pas encore été pris en compte : la multimodalité. Peu de youtubeurs se contentent de parler dans leurs vidéos, et beaucoup appuient leurs arguments ou leurs blagues avec des images ou des textes qui apparaissent à l’écran. Or, Gostisa et Urbain (2017) ont montré que cette multitude d’informations pouvait déranger la lecture d’un sous-titre.

Afin de mettre en avant cette multimodalité et les autres problèmes causés par le sous-titrage, et dans l’optique de comparer cette méthode au voice-over, nous ferons, comme la plupart des études mentionnées ci-dessus, appel à la technologie de l’eye-tracking. Cet outil enregistre le regard d’un participant au cours d’un visionnage, et analyse son comportement en fonction de diverses métriques. Ainsi, l’eye-tracking nous aidera à répondre aux questions suivantes :

- Comment se caractérise la multimodalité dans les vidéos du web ?
- Quelles zones de l’image attirent le regard d’un spectateur ?
- Quels problèmes peuvent être causés par la multimodalité ?
- Comment se définit le parcours du regard face à ces nombreuses informations visuelles ?
- Le spectateur a-t-il le temps d’analyser toutes les informations à l’écran ?
- La méthode du voice-over est-elle une solution pour pallier le manque de confort du sous-titrage ?

Nous présenterons alors une vidéo eye-trackée à 12 participants répartis en deux groupes. Le premier groupe de six individus visionnera une version sous-titrée en français (le commentaire original étant en anglais). Le second groupe, comportant également six participants, recevra la même vidéo, mais dans une version doublée en français grâce au voice-over. Pour plus de fiabilité, nous choisirons des personnes ayant déjà une certaine maîtrise de l’anglais, et habituées à ce type de contenu. Une fois les enregistrements terminés, nous les analyserons à l’aide du logiciel Tobii Studio, selon plusieurs métriques et aires d’intérêt qui seront définies plus tard dans cette étude. Cette expérience et l’ensemble des points étudiés au fur et à mesure de cette recherche nous permettront de répondre à une plus grande question : quelle est la méthode de traduction audiovisuelle la plus adaptée aux vidéos du web ?

2. Méthodologie

2.1. Participants

Nous avons réalisé notre expérience auprès de 12 participants, âgés de 18 à 25 ans (m = 19,9), répartis en deux groupes de 6 individus. Chaque groupe avait pour tâche de visionner une même vidéo, traduite selon deux méthodes différentes (sous-titrage pour le groupe VOST, voice-over pour le groupe VF). Les participants étaient tous des étudiants de première année en traduction et interprétation à l’Université de Mons, avec la combinaison linguistique anglais/arabe. Tous parlaient le français couramment, mais deux d’entre eux avaient pour langue maternelle l’arabe, et deux autres, l’italien. Le reste des participants étaient francophones de naissance. Par ailleurs, 10 des 12 participants avaient l’habitude de visionner plusieurs vidéos par jour sur internet1. Les deux derniers participants fréquentaient YouTube quotidiennement pour y écouter de la musique. Enfin, 9 des 12 participants avaient également l’habitude de regarder des vidéos, films et séries en version originale sous-titrée (VOST) en français (Fr) ou en anglais (En). Deux des participants préféraient la version originale (VO), et un seul autre, la version doublée en français (VF).

1 La consommation est mesurée selon la réponse à la question suivante : « À quelle fréquence regardez-vous des vidéos YouTube ou similaires ? ». Elle est jugée « quotidienne » quand les participants ont dit visionner « plusieurs vidéos par jour ».




2.2. Choix du document

Les types de contenus vidéos disponibles sur les plateformes telles que YouTube sont extrêmement variés. On y trouve aussi bien des courts-métrages, des séries ou des sketchs humoristiques que des clips musicaux, des tutoriels de cuisine ou des leçons de sport. Ainsi, il est difficile de sélectionner une vidéo et de la considérer comme un exemple valable pour toutes les autres. Cela étant, nous pouvons distinguer, à travers tous ces genres différents, quatre grandes catégories :

- La fiction
- Le podcast
- Le tutoriel
- Le clip musical



La fiction, même si elle a su créer ses propres codes sur internet, ne diffère pas tellement des oeuvres plus classiques observées au cinéma ou à la télévision. Son cas ne posera donc pas de problème particulier sur le plan de la traduction et de l’adaptation, le sous-titrage y étant parfaitement adapté. De la même manière, les clips musicaux ne nous concerneront pas ici, car ils répondent à des codes encore différents et présentent, en théorie, peu de multiplicité d’information. Les cas du podcast vidéo (genre typique du web) et du tutoriel seront, en revanche, particulièrement intéressants. Par souci de simplicité, nous avons choisi de nous concentrer, dans notre expérience, sur le premier cas, pour ensuite émettre quelques suppositions concernant le second. Ainsi, nous avons choisi une vidéo qui réponde aux critères du podcast, c’est-à-dire qui n’est ni une oeuvre de fiction, ni un tutoriel, et qui fait appel à un youtubeur seul face à la caméra (utilisant un plan dit face cam) pour parler d’un sujet particulier de manière (souvent) humoristique. L’anglais étant la langue la plus présente sur YouTube, nous avons voulu nous concentrer sur un youtubeur anglophone. Nous avons alors sélectionné l’un des plus connus : Ray William Johnson, créateur américain du podcast « Equal 3 ». Enfin, parmi la multitude de vidéos publiées par le youtubeur, nous en avons sélectionné une qui réunissait suffisamment de matière pour notre analyse du regard et qui était en même temps simple à sous-titrer. C’est ainsi que nous avons choisi la vidéo intitulée « Harry Potter Abridged », dans laquelle Ray William Johnson présente et critique de manière ironique et humoristique trois vidéos trouvées sur le web.

2.3. Collection des données

L’enregistrement avec l’eye-tracking s’est déroulé en deux sessions. La première d’entre elles a accueilli neuf participants, constituant l’échantillon de base. Une semaine plus tard, la seconde session a fait appel à trois participants supplémentaires. Ces nouveaux enregistrements étaient destinés à compléter les données déjà collectées. Les deux sessions se sont déroulées de la même manière, suivant les mêmes étapes, et dans les mêmes conditions. Les participants, tous étudiants, avaient cours dans une salle de classe voisine. Chacun à leur tour, ils étaient invités à entrer dans la salle d’étude et à s’installer devant un ordinateur. Le local en question était une petite salle de réunion au milieu de laquelle étaient disposées plusieurs tables, collées les unes aux autres pour former un grand rectangle. Au fond de la salle se trouvait une cabine d’enregistrement. L’ordinateur était placé près de cette cabine, à côté d’un second appareil (utilisé pour la régie son), hors tension durant l’expérience et à proximité duquel se tenait le chercheur. De l’autre côté de la salle était installé un équipement de visioconférence, composé d’un écran de télévision et d’une caméra, également hors tension lors de l’expérience.

Une fois entrés dans la salle, les participants étaient invités à s’asseoir devant un écran d’ordinateur de type Philips Brilliance 231B (cpy) avec une résolution de 1600x900 pixels. L’écran était relié à une tour d’ordinateur discrète, placée sur une table annexe, à droite du participant, et fonctionnant sous Windows 7. Les participants étaient installés confortablement sur une chaise de conférence standard, suffisamment basse pour que le regard soit capté facilement par l’eye-tracker. Ce dernier se composait d’une barre horizontale disposée au bas de l’écran et reliée par un câble USB à un mini-PC, fourni par la société Tobii Pro en complément du logiciel d’eye-tracking Tobii Studio. Dans ce logiciel, nous avions préparé deux timelines (une par groupe) qui comportaient une instruction simple ainsi que la vidéo étudiée. Par ailleurs, l’eye-tracker utilisé était le modèle Tobii Pro X3-120, fonctionnant à une fréquence de 120 Hz.

Une fois installés, les participants recevaient une mise en contexte de la part du chercheur, qui expliquait son parcours et l’objectif de sa recherche. Les participants devaient ensuite répondre à un bref questionnaire, validé en amont par les professeurs N. Hamaoui et H. Safar. Ce questionnaire posait les cinq questions suivantes :




1. Quel âge avez-vous ?
2. Quelle est votre combinaison linguistique ?
3. Quelle est votre langue maternelle ?
4. À quelle fréquence regardez-vous des vidéos sur YouTube ?
5. Quand vous regardez un film, laquelle de ces versions choisissez-vous généralement ?

a. Version originale
b. Version originale sous-titrée
c. Version doublée

Une fois cette étape terminée, les participants recevaient un casque audio connecté à l’ordinateur pour pouvoir regarder la vidéo sans être dérangés par des bruits extérieurs. Nous leur avons ensuite demandé oralement de visionner la vidéo le plus naturellement possible, comme s’ils étaient installés devant leur ordinateur personnel, sans contrôler leur regard. La consigne était répétée à l’écrit avant le lancement de la vidéo sous la forme suivante : « Regardez la vidéo suivante le plus naturellement possible ». Enfin, avant que l’enregistrement ne commence, le logiciel invitait les participants à suivre un point rouge à l’écran, dans le but de calibrer leur regard aux capteurs. Une fois cela fait, l’enregistrement pouvait débuter et durait le temps de la vidéo (c.-à-d. environ 4 minutes).

2.4. Analyse des résultats

L’analyse statistique des résultats a été réalisée grâce au logiciel Tobii Studio. Ce dernier a l’avantage de nous offrir une variété d’outils simples d’utilisation, ce qui nous permet de produire facilement les statistiques et de nous concentrer davantage sur leur interprétation. Le premier outil auquel nous avons eu recours est la heatmap (carte de chaleur), qui nous a permis de distinguer les différentes zones qui constitueront nos aires d’intérêt. Les autres fonctions que nous avons utilisées étaient, quant à elles, purement statistiques. Nous verrons ainsi, dans les pages suivantes, comment nous avons utilisé le pourcentage de fixation, le nombre et la durée des visites, et le nombre et la durée des fixations, afin de mesurer ce que nous avons appelé le confort visuel.


Nous avons étudié ces métriques par le biais de quatre « aires d’intérêt », à savoir celle de l’information principale (le youtubeur), de l’information secondaire (les images et textes à l’écran), des sous-titres et du logo. Notons enfin que nous n’avons analysé que les scènes où apparaissait le youtubeur, excluant les vidéos « externes » présentées et critiquées par Ray William Johnson.

3. Résultats

a. La multiplicité d’information

La multimodalité est une caractéristique propre à tout type de contenu vidéo, que ce soit à la télévision, au cinéma ou sur internet. De manière générale, la vidéo, depuis l’apparition du cinéma parlant, passe son message à travers deux canaux : l’image et le son. Enfin, l’image, notamment dans la fiction, comporte régulièrement différents plans de lecture, contribuant chacun à la compréhension du film. Le web n’échappe pas à ces règles, et tend même à utiliser abondamment la multimodalité. Outre les images filmées que nous venons de mentionner, les youtubeurs aiment illustrer leur propos à l’aide d’images fixes ou de textes ajoutés au montage. Il convient alors de se demander si cette multimodalité ne risque pas d’apporter une trop grande quantité d’informations simultanées au spectateur.

Pour répondre à cette question, nous avons commencé par utiliser l’outil heatmap de Tobii Studio. Il s’agit d’une sorte de carte de chaleur où la « chaleur » correspond en réalité au temps passé par le regard sur chaque pixel de l’image ou de la vidéo. Dans notre cas, cette carte s’est avérée d’autant plus efficace que les passages que nous avons choisis d’analyser s’articulent très souvent autour d’un seul et même plan fixe. Le youtubeur, Ray William Johnson, apparait sur l’un des côtés du plan, laissant de la place à côté de lui pour d’éventuels textes ou images. Les sous-titres apparaissent quant à eux centrés au bas de l’écran. Du fait de cette uniformité du plan des passages analysés, nous pouvons nous placer directement à la fin de la vidéo sous-titrée (VOST) avec une carte de chaleur accumulant tous les points de la vidéo. Ce faisant, nous obtenons l’image suivante :


Et grâce à l’utilisation de ce plan unique, nous savons quelles « zones » se cachent derrière chaque tache de couleur. Ainsi, on devine que la longue trace rouge au bas de l’image correspond aux sous-titres. Les trois petites zones vertes qui se dessinent sur la partie supérieure du plan comprennent, quant à elles, l’essentiel de l’information. Cette dernière se compose soit du visage du youtubeur, soit des textes et images qui apparaissent parfois à l’écran. Enfin, on peut également remarquer la présence d’une quatrième zone verte, très faible, dans le coin inférieur droit de l’image. Cette zone correspond en fait au logo de l’émission de Ray William Johnson, « Equal three » ou « =3 ». La lecture de cette carte nous permet ainsi de confirmer l’existence des trois zones que nous envisagions d’utiliser comme aires d’intérêt, à savoir celle de l’information principale, celle de l’information secondaire et la zone des sous-titres ajoutés par nos soins. La carte de chaleur nous a également révélé l’existence d’une quatrième zone qui pourrait être porteuse d’intérêt : celle du logo. Cette zone est en fait très symbolique, du fait de son emplacement et de sa discrétion. Le logo est ainsi peu visible et, même pour un spectateur anglophone regardant la version originale, il faut un peu de temps avant de remarquer sa présence. Pourtant, il est bien présent et apporte une information que nous pourrions qualifier de « cachée ». Il convient enfin de noter également que cette information, dans le cas de cette vidéo, est très peu enrichissante, puisqu’elle n’a pas de lien direct avec le sujet traité. Cela fait donc de la zone « logo » une zone d’information que nous dirons d’arrière-plan.

Fermons cette parenthèse nécessaire sur le logo pour revenir à nos quatre zones d’information. Chacune a été reprise dans un autre outil du logiciel pour concevoir ce que Tobii Studio appelle des « aires d’intérêt » (areas of interest ou AOIs). Ces aires vont ensuite permettre au logiciel de nous fournir un certain nombre de statistiques en fonction de plusieurs métriques que nous choisirons nous-mêmes.

b. Le pourcentage de fixation

Pour commencer, nous nous sommes intéressés au pourcentage moyen de fixation par participant enregistré pour la vidéo sous-titrée (VOST). Avant d’analyser les résultats de cette métrique, il est important de bien en comprendre le sens. Lorsque l’on visionne une vidéo, le regard parcourt l’image et s’arrête sur une multitude de points à chaque millième de seconde. Chacun de ces arrêts est ici appelé « fixation », et chacune de ces fixations est enregistrée et comptée par zone. Le pourcentage de fixation calcule le rapport entre le nombre de fixations par zone et la durée d’apparition de ces zones (Tobii Pro, 2016)2. En effet, si les passages analysés présentent systématiquement, et pendant toute leur durée, le visage du youtubeur, les images et textes qu’il aura ajoutés en « information secondaire » apparaissent de manière fortuite et bien plus brève. Ainsi, le pourcentage de fixation devient intéressant, car il nous permet de voir si le spectateur a regardé chaque zone de manière équitable, ou s’il en a favorisé ou sacrifié certaines.

2 Notons également que le pourcentage proposé ici correspond à la moyenne des participants de chaque groupe.
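À titre purement illustratif, l’esquisse Python ci-dessous reproduit ce calcul (nombre de fixations rapporté à la durée d’apparition de chaque zone) sur des valeurs fictives ; les noms de zones et les chiffres sont des hypothèses de l’éditeur, et non les données de l’étude.

```python
# Esquisse minimale : fixations rapportées à la durée d'apparition de chaque
# aire d'intérêt (AOI). Toutes les valeurs sont fictives.
fixations = {"principale": 420, "secondaire": 95, "sous-titres": 400, "logo": 60}
duree_apparition_s = {"principale": 240, "secondaire": 55, "sous-titres": 230, "logo": 240}

for zone, n_fix in fixations.items():
    taux = n_fix / duree_apparition_s[zone]   # fixations par seconde d'apparition
    print(f"{zone:12s} : {taux:.2f} fixations/s d'apparition")
```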


Alors, le logo se montre à nouveau fort intéressant. Comme on peut le voir très clairement sur le graphique ci-dessous, chaque zone de la VOST a été regardée de manière parfaitement égale par les participants. Toutes, à l’exception de la zone du logo qui a été consultée deux fois moins que chacune des autres zones.

On peut également noter que les sous-titres ont été autant regardés par les spectateurs que les zones d’informations principales et secondaires. Remarquons cependant que regarder ne signifie pas lire. Effectivement, le pourcentage de fixation ne nous permet de savoir que si le regard s’est posé régulièrement sur les sous-titres, mais rien ne nous autorise à affirmer ici que ces sous-titres ont bien été lus par les participants. Les données fournies par le pourcentage de fixation nous conduisent même à une autre possibilité. Sur le graphique ci-dessous, on remarque qu’en l’absence de sous-titres (dans la VF donc), la zone du logo a été plus regardée que dans la VOST.

Cela nous permet de supposer qu’il y a eu une perte d’information, et même, un sacrifice plus ou moins volontaire du spectateur. Ce sacrifice peut même nous sembler raisonné, puisqu’il semble que les participants aient favorisé les zones d’informations plus importantes (la zone principale ou les sous-titres), au détriment des zones « d’arrière-plan » comme le logo. Si cette perte d’information apparait ici au niveau du logo, elle pourrait également s’observer dans le sous-titre. On peut, en effet, supposer que les sous-titres ont été regardés sans nécessairement être lus. Ils seraient alors à l’origine d’une gêne pour le regard du spectateur, qui serait automatiquement attiré par le sous-titre, sans pour autant l’exploiter.


c. Le voyage du regard

Pour le confirmer, il nous faut d’abord savoir comment le regard se comporte réellement face à cette multiplicité d’informations. Ce comportement, que nous appellerons « parcours », peut aisément être reconstitué grâce aux métriques liées aux « visites » des aires d’intérêt. Par visite, nous entendons tous les allers-retours effectués dans une aire d’intérêt par le regard. Une visite commence lorsque le regard entre dans une aire et se termine dès qu’il en ressort (Tobii Pro, 2016). Le logiciel Tobii Studio nous propose ainsi de mesurer le nombre de ces visites pour chaque zone, ainsi que leur durée totale3. Le nombre des visites nous permet de savoir à quel point le regard du spectateur a voyagé entre les différentes aires d’intérêt. Plus ce nombre est élevé, plus le spectateur est allé et venu vers cette zone. Plus il est bas, moins l’aire en question a intéressé ou attiré le regard du spectateur. Si l’on se concentre sur la VOST, on peut ainsi supposer qu’il y a eu un voyage du regard assez abondant entre la zone d’information principale et la zone de sous-titres, qui présentent toutes deux les résultats les plus élevés. En revanche, on remarquera à nouveau que la zone du logo semble avoir été sacrifiée pour favoriser les trois autres. Il faudra enfin noter que, dans le cas de l’information secondaire, les résultats sont effectivement plus bas, mais ils sont à considérer avec précaution. En effet, cette zone est celle des trois qui apparait le moins souvent dans la VOST. Le logo est présent tout au long de la vidéo. De la même manière, le youtubeur, Ray William Johnson, apparait sur la totalité des passages examinés, et les sous-titres s’affichent dès qu’il parle, c’est-à-dire aussi souvent qu’il est à l’image. Les informations secondaires, quant à elles, n’apparaissent que de temps à autre pour ponctuer le discours du présentateur.

3 La durée totale correspond plus clairement au temps passé en moyenne par les participants dans une même aire d’intérêt tout au long de la vidéo.

Comparons à présent les résultats obtenus pour la VOST à ceux de la VF, notamment en ce qui concerne l’aire d’intérêt portant l’information principale.
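Pour fixer les idées, l’esquisse ci-dessous montre comment de telles « visites » peuvent être reconstituées à partir d’une suite chronologique de fixations étiquetées par aire d’intérêt. Il s’agit d’une approximation simplifiée sur des données fictives, et non de l’algorithme exact de Tobii Studio.

```python
# Esquisse minimale : reconstitution des « visites » (nombre et durée totale)
# à partir d'une suite chronologique de fixations étiquetées. Données fictives.
from itertools import groupby

# (aire d'intérêt, durée de la fixation en secondes)
fixations = [("principale", 0.30), ("principale", 0.25), ("sous-titres", 0.20),
             ("sous-titres", 0.22), ("principale", 0.40), ("logo", 0.15)]

visites = {}   # aoi -> (nombre de visites, durée totale en secondes)
for aoi, groupe in groupby(fixations, key=lambda f: f[0]):
    duree = sum(d for _, d in groupe)
    n, total = visites.get(aoi, (0, 0.0))
    visites[aoi] = (n + 1, total + duree)

for aoi, (n, total) in visites.items():
    print(f"{aoi:12s} : {n} visite(s), {total:.2f} s au total")
```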


Sur ce premier graphique, comparant le nombre de visites pour chaque version de la vidéo, on remarque que le regard du spectateur est moins souvent entré et sorti de la zone d’information principale de la VF. Cela peut signifier deux choses. Soit le spectateur a sacrifié cette zone pour regarder davantage l’information secondaire et le logo (qui sont, eux, en augmentation), soit son regard a moins voyagé que dans la VOST. Ce voyage moins abondant du regard semble se confirmer lorsque l’on s’intéresse à la durée de ces visites. Certes, les visites étaient moins nombreuses dans la zone d’information principale pour la VF, mais elles étaient aussi plus longues.

Plus intéressant encore, on peut remarquer que la durée des visites dans les deux autres zones de la VF est également supérieure à celle de la VOST. Nous pouvons alors supposer que le temps passé à lire les sous-titres dans la VOST a été redistribué en leur absence dans la VF. Un simple calcul suffit à renforcer cette hypothèse :

Dans la VOST, la majeure partie du temps de visite se partage entre la zone d’information principale, avec 66,25 secondes au total, et la zone des sous-titres, avec 51,59 secondes. Il est intéressant de voir que, pour la VF, le temps octroyé par le spectateur à la zone d’information principale à elle seule augmente de 44,51 secondes. Si l’on ajoute les 4,30 secondes qui s’ajoutent à la zone secondaire et la 0,18 seconde supplémentaire du logo, on obtient un total de 48,99 secondes. Or, ce temps correspond, à moins de 3 secondes près, à celui octroyé à la zone des sous-titres dans la VOST.
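Le petit calcul ci-dessous, purement vérificatif, reprend les valeurs citées dans ce paragraphe :

```python
# Vérification rapide du calcul de redistribution du temps de visite
# (valeurs citées dans le texte).
vost = {"principale": 66.25, "sous-titres": 51.59}
gains_vf = {"principale": 44.51, "secondaire": 4.30, "logo": 0.18}

total_gains = sum(gains_vf.values())          # 48,99 s
ecart = vost["sous-titres"] - total_gains     # ≈ 2,6 s
print(f"Temps redistribué en VF : {total_gains:.2f} s ; "
      f"sous-titres en VOST : {vost['sous-titres']:.2f} s ; écart ≈ {ecart:.1f} s")
```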


d. La perte d’information

Nous avons vu jusqu’à présent que la multiplicité d’informations impliquée par le format web conduit le regard du spectateur à voyager de manière abondante. Il est possible que cela cause des problèmes de fatigue oculaire, mais ce n’est pas notre sujet. Ce voyage abondant du regard peut, en effet, poser un autre problème : celui de la perte d’informations.

Nous avons déjà illustré une partie de cette perte précédemment, avec l’exemple du logo. Nous avons ainsi vu que le pourcentage de fixation dans cette zone était deux fois moins élevé que celui des autres aires d’intérêt de la VOST. Nous avons également remarqué que le regard se rendait moins souvent et moins longtemps dans cette zone. Une dernière donnée concernant le logo est également intéressante à ce sujet : la durée avant la première fixation sur le logo est deux fois plus élevée pour la VOST que pour la VF.

La durée avant la première fixation indique le temps qu’il a fallu à l’œil du spectateur pour pénétrer dans une aire d’intérêt pour la première fois (Tobii Pro, 2016). Autrement dit, le graphique ci-dessus nous montre qu’il a fallu près de 2 minutes aux spectateurs de la VOST pour remarquer la présence du logo dans le coin inférieur droit de l’image. Seule la moitié de ce temps a été nécessaire pour que les spectateurs de la VF remarquent ce même logo. On peut aisément en déduire qu’il y a bien une perte liée à la multiplicité de ces informations, et que cette perte est accrue en présence de sous-titres.

S’il est difficile de l’affirmer avec une expérience clinique comme la nôtre, on peut tout de même supposer que cette perte d’informations (aisément observable avec la zone logo) se présente aussi dans d’autres zones, comme celle des sous-titres ou de l’information secondaire. Or, dans un cas comme dans l’autre, une partie du travail de création devient inutile. En effet, si le spectateur ne relève pas les textes inclus par le youtubeur dans l’information secondaire, on peut se demander à quoi bon ajouter ce type de texte. De la même manière, si le spectateur sacrifie les sous-titres pour favoriser l’image, c’est le travail du traducteur qui devient inutile.
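À titre d’illustration, la durée avant la première fixation peut se calculer très simplement à partir d’une suite chronologique de fixations (esquisse sur des valeurs fictives) :

```python
# Esquisse : durée écoulée avant la première fixation sur une zone donnée,
# à partir d'une suite chronologique de fixations étiquetées (valeurs fictives).
def temps_avant_premiere_fixation(fixations, zone_cible):
    t = 0.0
    for aoi, duree in fixations:
        if aoi == zone_cible:
            return t
        t += duree
    return None   # la zone n'a jamais été regardée

fixations = [("principale", 0.5), ("sous-titres", 0.5), ("principale", 1.0), ("logo", 0.3)]
print(temps_avant_premiere_fixation(fixations, "logo"))   # 2.0
```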

En combinant la durée des visites et leur nombre, on peut apporter de premiers éléments de réponse à cette nouvelle problématique. En effet, le croisement de ces données nous permet de savoir le temps qu’a réellement eu le spectateur pour examiner une aire d’intérêt. Ce temps peut nous aider à savoir si le spectateur a pu ou non assimiler l’information contenue dans la zone. On supposera ainsi que plus les visites étaient nombreuses et longues, plus le spectateur avait la capacité d’assimiler l’information de la zone. À l’inverse, moins il y a de visites et plus elles sont courtes, moins le spectateur aura la capacité d’assimiler l’information. On se retrouve alors avec quatre cas de figure :



1. Des visites nombreuses et une durée élevée ;
2. Des visites peu nombreuses, mais de durée élevée ;
3. Des visites nombreuses, mais de courte durée ;
4. Des visites peu nombreuses et de courte durée.

Le cas numéro un est ici celui qui permet la plus grande capacité d’assimilation. Plus l’on descend vers le cas numéro quatre, plus cette capacité d’assimilation diminue. On peut alors, à partir de ces informations, créer le graphique suivant, sur lequel on retrouve ces quatre cas de figure sous la forme de quatre zones distinctes :
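De manière schématique, ces quatre cas de figure reviennent à croiser deux seuils, comme dans l’esquisse ci-dessous (les seuils et les valeurs sont purement illustratifs, sans rapport avec les données réelles de l’expérience) :

```python
# Esquisse : classement schématique d'une aire d'intérêt selon les quatre cas
# de figure décrits ci-dessus (seuils et valeurs purement illustratifs).
def cas_de_figure(nb_visites, duree_totale_s, seuil_nb=20, seuil_duree=30.0):
    nombreuses = nb_visites >= seuil_nb
    longues = duree_totale_s >= seuil_duree
    if nombreuses and longues:
        return 1   # assimilation la plus facile
    if not nombreuses and longues:
        return 2
    if nombreuses and not longues:
        return 3
    return 4       # assimilation la plus difficile

print(cas_de_figure(35, 66.2))   # cas 1
print(cas_de_figure(12, 8.5))    # cas 4
```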

On observe alors que les spectateurs de la version sous-titrée (VOST) ont dû avoir moins de facilité à assimiler l’information principale que ceux de la version en voice-over (VF). On suppose alors que les sous-titres sont la cause de cette perte d’information, et le graphique ci-dessus semble le confirmer. En effet, le point de l’information principale en VOST apparait dans la même zone que le point des sous-titres, et même à proximité de celui-ci, comme si le regard, qui se concentre sur l’information principale dans la VF, se partageait entre cette même zone et celle des sous-titres dans la VOST. Ainsi, la lecture de ce graphique semble aller dans le sens de nos précédentes suppositions. L’analyse des fixations va nous apporter encore plus de précision sur le sujet. Là où la visite nous disait simplement si le regard est venu ou non dans une zone et s’il y est resté, les fixations vont nous dire si ce même regard a parcouru la zone en question ou s’il s’est figé sur un seul point. En théorie, plus les fixations seront nombreuses, plus le spectateur aura « analysé » l’aire d’intérêt visitée.


De même, plus les fixations seront longues, plus le spectateur aura eu le temps d’analyser la zone et, donc, d’en exploiter les informations.

Sur ces deux graphiques, on remarque clairement que les zones d’information principale et secondaire ont été analysées plus en détail par les spectateurs de la VF que par ceux de la VOST. On supposera alors, encore une fois, que le spectateur de la version sous-titrée n’a pas pu relever toutes les informations contenues dans cette zone. La comparaison nous permet même de penser que ce sont bien les sous-titres qui ont dérangé le regard du spectateur en l’attirant vers eux, le détournant des autres zones. De plus, comme nous l’avons mentionné précédemment, regardé ne veut pas dire lu. Autrement dit, si les sous-titres ont bien attiré le regard, ce n’est pas pour autant qu’ils ont été lus et utilisés par les spectateurs. La fonction gaze plot du logiciel Tobii Studio va alors nous permettre de suivre aisément le parcours du regard de chaque spectateur. Cette fonction restitue une par une les fixations qui ont été enregistrées à l’écran. On observe ainsi, au cas par cas, que le participant P01 du groupe VOST, qui connaissait mieux l’anglais, présente une lecture très partielle des sous-titres lorsque le youtubeur était seul à l’écran, et ne les lisait plus du tout quand des textes ou des images apparaissaient sur le côté de la vidéo. Cette lecture partielle, voire absente, s’observe également chez deux autres participants.


Le sujet P02 semble ainsi se détacher des sous-titres au fur et à mesure qu’avance la vidéo, pour finalement ne plus les lire du tout. Le sujet P04, quant à lui, ne compte jamais plus de trois fixations sur les sous-titres analysés, ce qui indique soit qu’il lisait partiellement ces sous-titres, soit que son regard se portait vers les sous-titres sans prendre le temps de les lire. Les trois autres participants ont présenté une attitude presque opposée, puisque leurs résultats montrent une lecture claire des sous-titres. C’est le cas, en premier lieu, des sujets P03 et P05, dont le regard suivait régulièrement et à la perfection le sens de lecture. Cette assimilation de l’information semble également s’être faite aux dépens d’autres zones, ce qui nous laisse supposer que les sujets avaient besoin des sous-titres pour comprendre la vidéo. Pourtant, il arrivait, quand des éléments secondaires apparaissaient à l’écran, que les sujets ne regardent pas du tout les sous-titres. De même, le participant P06 a lu les sous-titres de manière quasi constante. Si le sens de la lecture n’apparait pas toujours clairement sur les images, le nombre de fixations élevé sur les sous-titres tout au long du visionnage nous laisse penser qu’il a largement eu le temps de les lire, et même qu’il sacrifiait les autres zones (information secondaire notamment) pour pouvoir le faire.

Ainsi, quand on analyse l’ensemble de ces données, on peut distinguer deux grands extrêmes. D’un côté, il y a les participants qui semblent clairement lire les sous-titres. Cette lecture s’illustre à travers le nombre de fixations élevé que l’on peut compter sur la zone des sous-titres, ou par l’ordre de ces points qui suivent régulièrement le sens de la lecture en français, c’est-à-dire de gauche à droite et de haut en bas. À l’autre extrémité se trouvent des participants qui semblaient ne lire que très peu les sous-titres. Le plus souvent, on ne peut compter qu’une ou deux fixations de leur part sur la zone des sous-titres, sans nécessairement que ces fixations ne suivent le sens de la lecture. On peut alors supposer que leur regard était, malgré eux, attiré par cette zone.

e. Le cas des tutoriels

Nous nous sommes jusqu’à présent intéressés au cas des podcasts, qui sont parmi les vidéos les plus populaires et les plus présentes sur les plateformes comme YouTube. Au sujet de ces vidéos typiques du web, nous avons constaté une multiplicité d’informations qui pouvait provoquer un voyage abondant du regard favorisant une perte d’informations. Cela étant, comme expliqué dans le choix du document traité, le web ne diffuse pas que des podcasts et les genres sur ces plateformes sont nombreux. Des quatre grandes catégories que nous avions alors distinguées, nous nous sommes intéressés en particulier au cas du podcast, mais nous avions laissé en suspens celui du tutoriel vidéo. Pourtant, l’utilisation du sous-titrage sur ce type de contenu pourrait poser des problèmes similaires à ceux rencontrés avec le podcast.

Imaginons un tutoriel qui nous apprenne à réaliser une manipulation X. Le tutoriel est conçu de la manière suivante : le youtubeur réalise la manipulation en expliquant les étapes. Au montage, il utilise une zone de texte intégrée sur le côté de l’écran pour résumer les principales étapes au fur et à mesure qu’il parle. Au bas de l’écran apparaissent les sous-titres ajoutés par le traducteur-adaptateur a posteriori.


Now let us imagine an internet user who wants to perform manipulation X and who is counting on our tutorial to do so. We can then identify four areas of interest for this user, namely:

1. the manipulation X performed by the youtuber,
2. the youtuber's textual indications,
3. the subtitles,
4. the user's simultaneous performance of manipulation X.

We have seen, through the various results of our research, that viewers tend to sacrifice the least important zones (the logo, in the case of the podcast studied above) in order to favour the assimilation of information in the other zones. In our tutorial example there is no logo to sacrifice, but there is indeed a multiplicity of visual information. In this case, which zones will the user sacrifice?

First of all, it seems impossible for our viewer to carry out the tutorial without ever looking at what they are doing with their hands. Zone 4 above will therefore be largely favoured. Next, we can assume that if the person is looking for a video tutorial (rather than a written one), it is precisely to see how the manipulation is performed, not simply to read instructions. In our example, these instructions are provided through two channels. First, through the texts added in editing by the youtuber. We will assume that these texts have not been translated and are therefore in English, which may encourage the user not to look at them; zone 2 is thus the first to be sacrificed. The instructions also appear in the subtitles, since they are spoken aloud by the youtuber and therefore translated. That said, if the user has come to watch the manipulation, they will favour the first zone, the one showing the manipulation on screen. To do so, we can assume they will also sacrifice the subtitle zone. The user would thus be likely to sacrifice the textual information in favour of observing the manipulation. However, this information is still present, and it risks drawing the user's gaze away and disturbing them in their own manipulation. Moreover, the subtitles will attract the gaze here without actually being read, and will therefore be useless. Using another method, such as voice-over, which is inexpensive and as easy to produce as subtitling, would lighten the information on screen. Freed from subtitles and hearing the instructions directly in their own language, the user could concentrate more on their own manipulation and on the youtuber's. The tutorial would thus become much easier to follow, and the video, as well as its translation-adaptation, would fully serve their purpose.


4. Conclusion

In only about fifteen years, web videos have found their place in popular culture, so much so that they are now one of the largest sources of entertainment. The translation of audiovisual content on the internet nevertheless remains largely amateur and poorly suited to this new format. Through our research, we have tried to provide some first elements of an answer on this subject. Our experiment, conducted with 12 participants, does not claim to allow any real generalisation, but it did allow us to raise several avenues for further work. First, the results obtained from the heat map enabled us to confirm the existence of a strong multimodality in web videos. On the basis of this information, we divided the single shot of the video into four distinct zones, each with its own importance. The logo zone, which seemed almost useless in the video, suggested the existence of a loss of information. In order to confirm this assumption, we observed and analysed the visits to each of the areas of interest. This allowed us to see how much the gaze can travel in front of a subtitled video. At that stage, however, the idea of a loss of information was still only a hypothesis. To verify it, we turned to fixations, which allowed us to analyse the path of the gaze in more detail. We then noted that the viewers of the subtitled version (VOST) tended to sacrifice certain zones more readily than those of the French version (VF). This sacrifice also appeared in the case-by-case analysis of subtitle reading, where we identified two tendencies: either viewers did not read the subtitles, in order to favour the main information zone, or, conversely, they concentrated on the subtitles at the expense of other zones. In any case, we systematically observed a phenomenon of zone sacrifice which seems to confirm the idea of a loss of information.

All these data showed us that subtitling is not necessarily suited to web videos, and its usefulness may be called into question. In the image of the internet itself, the creations available on platforms such as YouTube often adopt a very fast pace. Podcasts are characterised in particular by the use of the jump cut, a cut made in editing to improve the rhythm and make the youtuber's speech more fluid. This sustained pace, to which are added the images and texts inserted by the creator in editing, makes reading the subtitles complicated. This is why we considered the use of another method to translate web content: voice-over. Again, the results observed in our experiment and the data collected with eye-tracking do not allow any generalisation. However, when the two AVT methods are compared, it clearly appears that voice-over leaves more time for analysing the elements visible on screen. Note, moreover, that in this version we translated the texts appearing on screen by superimposing the translation on them. Thus, when these texts appeared, they were offered directly in French, without our having to resort to any subtitle. This no doubt helped to slow down the travelling of the viewers' gaze and, consequently, to give the participants a better understanding of the video.

Unfortunately, our study, like any other, has its limits, and we did not go so far as to analyse our audience's comprehension of the video. Such an investigation would nevertheless be very interesting to carry out and would complement the data collected in our research. Moreover, we conducted our experiment by showing the video on a computer screen, whereas nowadays web videos are most often watched on portable devices (smartphones, tablets, etc.) with smaller screens. Likewise, some television sets offer direct access to YouTube, allowing users to consume these videos on a larger screen. This variation in devices may also influence subtitle reading and encourage the use of voice-over. On a quite different point, our research concentrated on interlinguistic accessibility, yet many users around the world have specific needs in their own language. It could therefore be relevant to look into audio description and adapted subtitling for web videos. After all, our computers and browsers are already equipped with tools that make the web more accessible to people with sensory disabilities.


5. Annexes

a. Analysis of subtitle reading

The images below are examples illustrating the method used to analyse the reading of the subtitles.

Each colour corresponds to one participant in the subtitled (VOST) group:
- Yellow = P01
- Red = P02
- Purple = P03
- Grey = P04
- Blue = P05
- Green = P06

• Reading of the subtitle when the youtuber is alone on screen (1)


• Reading of the subtitle when the youtuber is alone on screen (2)


• Reading of the subtitle when an object appears in the video


• Reading of the subtitle when an image is added by the youtuber


• Reading of the subtitle when text is added by the youtuber


Subtitling of the The Star movie into Romanian
between the literal and the non-literal

PAULA-ANDREEA GHERCĂ

West University of Timisoara - [email protected]

Abstract

The following article aims to point out the translation problems that may occur when subtitling children's movies. The analysis focuses on the animation movie The Star, produced by Sony Pictures in 2017, and it highlights the ways in which the characters' lines in English have been translated into Romanian, making suggestions for improvement where this was felt necessary. Conclusions are drawn as to whether the instances of mistranslation or inappropriate translation may have a negative impact on the way the movie is received by the young audience it targets.

Keywords: children's animation movies, subtitling into Romanian, literal translation, non-literal translation, effect of translation on the audience

Introduction

This article intends to identify the most frequent translation problems that may occur when dealing with a children's movie, in the particular case of The Star, an animation movie released by Sony Pictures in 2017. As the field of translation is vast and brimming with specialized resources, it will be interesting to see how such a movie unfolds from the point of view of translation. Since the target audience is clearly established – children – and the message must be easy to convey, the subtitling task should comply accordingly. A comparison between the original English script and its corresponding Romanian subtitled variant will be made, with translation issues being highlighted both when the translation product is acceptable and when it could have been improved. Suggestions for improved translations will be made at various points; however, these should be regarded as a subjective point of view being expressed and not necessarily as unquestionable translation solutions, since, as Newmark (1988) said, it is very important to look at translation from the point of view of a never-ending series of discussions and interpretations.

The Star tells the story of the first Christmas, seen through the eyes of the animals that accompanied Mary and Joseph on their journey to Bethlehem. The movie starts with a brave mill donkey named Bo who yearns for the life he might have outside the mill. After many struggles, he finally manages to escape from the mill and, alongside his dove friend Dave and the friendly sheep Ruth, he follows the star that guides them to Bethlehem; unwittingly, they become the protagonists of the most fascinating adventure of their lives.


1. The Star script subtitled into Romanian – between the literal and the non-literal

A number of examples of both literal and non-literal translation of the script lines will be discussed in what follows.

At the beginning of the movie, when the little mouse is drawn into Mary's house by the pleasant smell of a pie and wants to steal it from her, Mary sees him and says: "Don't think I don't see you, little one." This short sentence is subtitled into Romanian as "Să nu crezi că nu te văd, micuțule", the opening line of the movie thus being faithfully rendered into Romanian, with only the transposition of little one into micuțule having been required by the characteristics of the Romanian language. The rhythm of the original English script is also preserved, together with its naturalness.

However, in the second line of the movie, Mary says: "I think it's enough for both of us, though." In Romanian, the translator omitted the final adverb though and only rendered the following: "Cred că e suficient pentru amândoi." In my opinion, the adverb though shows Mary's change of heart after she has shown disapproval of the mouse's intention to steal, as well as her willingness to share her food with the little creature.

Further on in the movie, when the archangel Gabriel comes to give Mary the wonderful piece of news that she is going to have a baby, he tells her: "The Holy Spirit will overshadow you, and the child will be called the Son of God. For nothing is impossible with God." In Romanian, the subtitle of this fragment reads: "Duhul sfânt veghează asupra ta, și copilul va fi numit Fiul lui Dumnezeu. Căci nimic nu este imposibil pentru Dumnezeu." This Romanian subtitle presents a couple of mistranslations. To begin with, the original English script uses the future tense, a fact that highlights the faith Mary should have in God and His will not only at the present moment, but also throughout her pregnancy and after it. In Romanian, the translator has replaced the future with the present, which, as I see it, does not underline the support God offers Mary during her entire pregnancy, but rather a consolation that she must not fear in the present. Moreover, the preposition with in the prepositional phrase with God should have been translated either with întru (the rather archaic word still preserved in Romanian religious language in the phrase întru Dumnezeu) or with cu. Either of these two would have reinforced the impression of protection and safety offered by God, and not of His omnipotence, which is rather connoted by the preposition pentru (the actual English equivalent of the Romanian clause is "Nothing is impossible for God").

Once the archangel left Mary and went back to the sky, he became the fabulous star that gave the movie its name. When it appeared in the night sky and while it stood there, all the animals in the village and in the field gazed at it. So did a brave little donkey named Bo, who utters the following words: "Ok. You are not going to believe this, but I think a new star just appeared in the sky." In Romanian, this line was rendered as: "Bine. Nu veți crede acest lucru, dar cred că o nouă stea a apărut pe cer...". Though the rendition into Romanian is correct, a few observations may be made. For instance, I think it may have been more appropriate to translate you are not going to believe this as n-o să vă vină să credeți or as n-o să mă credeți, thus avoiding the slightly too formal character of what was suggested in the official subtitling.


In the latter part of the line there is also the omission of the time adverb just, pointing out the fact that the star in the sky has only just appeared. In the Romanian version, its absence amounts to cancelling the surprise element, the novelty of the piece of news.

The following day, Bo's pigeon friend Dave comes to visit them at the mill. When the old donkey sees Dave, he remarks: "Hey, kid, your unemployed bird friend's here." In the Romanian subtitle, his words are rendered as follows: "Hei puștiule, prietena ta pasăre fără treabă, e aici." (lit. "Hey kid, your bird friend that has nothing to do is here"). In the English script, the old donkey's line is meant to be humorous, and it really is. In the Romanian version, however, it is not as humorous as it could have been. Since "the perception of humour varies depending on every culture, person and situation, it is widely acknowledged that a joke may make some people laugh while it goes unnoticed for others" (CARRA 2009: 134). It seems the humorous effect of this English line has gone unnoticed by the Romanian translator. The translation of unemployed as fără treabă has a very different meaning from the original version. From the English original, we easily understand that Dave has no job, but from the Romanian subtitling we deduce that Dave is just wandering around without anything specific to do. If I were to translate this line, I would have put it this way: "Hei puștiule, prietenul tău șomer, înaripat, e aici." I believe that this translation is closer to the original script.

At Dave's sight, Bo is very excited and asks his friend about the world outside his mill. Bo asks Dave the following: "Hey, pal. What's new out there today?" In Romanian, this line is translated as: "Hei, amice. Ce e astăzi nou pe acolo?" ("Hey, buddy. What is new around there today?") Again, the Romanian rendition distances itself a little from the original version, as it does not capture the whole meaning of out there. Bo actually states that he wants to know what is new outside his place, but in Romanian this nuance is not captured. As a different option, I would have translated it with "Hei, amice. Ce mai e nou pe afară astăzi?" or with "Ce se mai aude nou de-afară, astăzi?", much closer to the original and able to signal Bo's "confinement" to a place separated from the world "out there".

After breaking out from the mill, Bo is chased by the angry miller, who is eager to get him back to his work. As Bo runs, he meets up with Dave, his best pigeon friend, and tells him that, although he is free, the miller is still chasing him. The original line is: "The miller! The miller's on my tail!" In Romanian, the subtitle goes "Morarul! Moara e în coada mea!". The miller in the second clause was mistranslated as moara - the mill, which makes no sense in the context given. Apart from this, the translation of the idiomatic expression on my tail is faulty. In English, its meaning is "on my tracks", "following me up close", whereas in Romanian the word-for-word translation only suggests that the miller is following the donkey, but the connotation "up close" is missing.

Later on, after they hear the exciting news that the royal caravan is rolling through Nazareth, Bo and Dave grow happier and happier at the thought that they could join it and leave Nazareth forever. In all this excitement, Dave says goodbye to Nazareth in his distinctly humorous way, as follows: "Nazareth can kiss my gleaming white tail feathers good-bye." The Romanian version is: "Nazaret poate să-mi pupe coada albă când îi spun adio." ("Nazareth can kiss my white tail when I say good-bye to it."). The word-for-word subtitle is inadequate here. The translator has disregarded the fact that Dave is a pigeon: he does not have a tail like that of a mammal. In the original script, it is precisely referred to as a feather tail. In Romanian, however, the translator kept the translation of tail but completely left out the main characteristic of that tail, namely its feathers, and even neglected the fact that Dave is talking about his tail feathers, not about the whole tail. Perhaps this line would have been better translated as follows: "Nazaretul poate să-mi pupe penele din coada mea albă și strălucitoare de la revedere." ("Nazareth can kiss the feathers of my white and shiny tail good bye").

Moving on to Mary and Joseph's wedding: after the party is over, Mary is thinking about the way in which she will tell Joseph that she is pregnant with the Son of God and convince him that they are blessed with the greatest miracle of humankind. Right beside them is Zechariah, who is feasting and eating at will. At a certain moment, his wife calls him to go home. While still chewing his food, Zechariah says to Joseph: "Great party, guys." In Romanian, his words were translated as: "Bună petrecere, băieți." Zechariah was very pleased with the party and he really means it when he tells Joseph it was great. I would have kept the superlative value of the original great and translated it by grozavă instead of bună, which is merely neutral. Guys is also mistranslated: when Zechariah expresses his good impression of the party, he refers to both Mary and Joseph, and not only to the males around him, which is what the Romanian word băieți implies. My suggestion for improvement would be "Grozavă petrecere, oameni buni.", where oameni buni refers to both males and females and is informal enough to be a suitable equivalent of guys.

As the movie unfolds, the focus now turns to Cyrus, Felix and Deborah, the three camels that carry the wise men to see baby Jesus. While going through the desert, they start complaining about getting thirstier and thirstier, and about the gifts brought by the wise men to baby Jesus. On their way, the wisest of them, Deborah, tries to convince the others that the baby is not an ordinary child, given that the gifts are so precious and valuable. She believes that the wise men are going to the birthday party of a future king. On the other hand, Felix and Cyrus, not as clever as Deborah, are arguing over whether they are really going to a birthday party or to a baby shower. As their discussion continues, they say: "It's a birthday party." "Baby shower." In Romanian, their contradiction is rendered as: "Petrecere de aniversare" "Îmbăiere bebeluș" (lit. Birthday party. Bathing the baby.)

If the first sentence is correctly subtitled, the second part completely lacks the culture-related meaning that it has in the original. A "baby shower" does not literally imply any bathing of the baby, which is, in fact, not yet born at the time the party by this name is organized. Though the custom has lately been borrowed into Romanian culture, an equivalent name for it has not yet appeared. Thus, a translation involving explicitation would have been much more appropriate in this context. My suggestion is "Petrecere de bun venit pentru bebelușului care se va naște" ("A welcome party for the baby who is to be born").

As Deborah tries to convince her friends Felix and Cyrus that maybe they are not going to a party at all, but to meet the Son of God, her friends remain speechless and finally Felix says: "Deborah's crazier than a box of rocks." The Romanian version of his line is: "Deborah e mai nebună decât o cutie cu pietre." Though the word-for-word translation of the idiomatic comparison "crazier than a box of rocks" manages to signal the idea that the character is considered completely nuts, it does not sound natural in Romanian. A better equivalent is, to me, nebună de legat (crazy as to tie her up), which is of the same register and equally idiomatic as the original phrase.

While the three funny camels, Felix, Cyrus and Deborah, are still at the court of Herod, hidden behind some bushes, Cyrus says: "Shouldn't we be sneaking out? Why are we sneaking in?". The two questions have been mistranslated into Romanian as: "Nu ar trebui să ne ascundem? De ce ne ascundem?". The verb to sneak means to move stealthily and is used here in a play upon words that relies on the antonymic prepositions in and out, something the Romanian translation fails to render. What the camels are wondering about is why they are trying to hide instead of attempting to leave, to run away. This distinction is not captured in the translation, where only the idea of remaining in place and hiding is present.

As their trip to Bethlehem continues, they finally arrive at a small town where they unfortunately meet the hunter and his dogs. In this desperate chase, Bo somehow manages to save Mary by throwing Joseph's cart towards the hunter, making him fall into the fountain. All the people who witnessed the event put the blame on Bo. Joseph becomes angrier and angrier, calling Bo a "good for nothing" donkey. On hearing this, Bo gets very upset and tells Dave to follow him in joining the royal caravan. Their sheep friend Ruth tries to convince him to stay and to follow the star, but Bo is unshakable in his decision. On their way back, Bo suddenly hears the bells of the royal caravan and gets very excited. However, as his bandage flies up in the sky, near the star, he finally realizes his purpose and sees how mistaken he was when trying to join the caravan. At this point, Bo tells Dave: "You know, Mary may not be big and royal, but she's important." In Romanian, his words were translated as follows: "Știi, Maria poate nu e cineva mare și de viță regală, dar ea este importantă." Mare și de viță regală for big and royal sounds unnatural in Romanian, though it renders the meaning of the original correctly and entirely. Măreț și de viță nobilă is much more natural to native Romanians. Also, the third person singular subject is not usually expressed in Romanian, though its presence is compulsory in the surface structure of an English sentence. So, dar ea este importantă would have sounded more natural without the subject ea, as dar este importantă.

As their trip comes to an end and they get to Bethlehem, Mary and Joseph look for a place to sleep inside a house. Outside, Ruth, Dave and Bo are relieved and happy they made it to that place. However, Bo has the feeling that something bad is about to happen, and suddenly he sees the miller, who was looking for him. Bo gets scared and tries to escape, but the miller is too quick and manages to tie him down. As he is struggling to get rid of the miller's rope, Dave intervenes and tries to stop the miller as well. This time, he says: "No, you don't, crazy-eyed, donkey-eating miller." In the Romanian translation, "Nu, no s-o faci, morar mâncător de măgari.", the reference to the miller's crazy eyes is omitted. It is true that the camera focuses on this detail, but the Romanian translation says nothing about it, so it may well escape unnoticed by the audience. If it were to stick closely to the original, the translation should have been "Oh nu, n-o s-o faci, morar mâncător de măgari cu privire nebună."

After the miller finally catches Bo, Ruth and Dave plan to save him and separate. In the meantime, the three funny camels are tied up to a tree, discussing the king who might be in great danger. They all agree that they should set themselves free and start moving around. Predictably enough, their strategy does not work as they thought it would and they get mixed up even worse. In all this mess, Cyrus says: "Would you stop pinching me?". In the Romanian version, his turn becomes: "Vrei să nu mă mai strângi?". The English verb to pinch has been rendered as to hold tight, which, given the situation, with all the ropes entangled around the camels, may not be an inappropriate modulation.

As the movie continues, the three magi come to Herod's palace bringing gifts for the new king. Herod has no idea about Jesus' coming and believes the gifts are for him. However, they are not. Two deadly dogs scare the camels that brought the three wise men to the king. These dogs have invented a game named "How high": they bark at and scare their future victims and watch how high they can jump. They played the same game with the camels and, when they did, they said: "How high? Camel high!", subtitled as "Cât de sus? Sare cămila!"

With regard to our scene, the question is translated word-for-word and its original meaning, a clear one, is rendered correctly. However, the answer to the question does not find an appropriate correspondent in Romanian. Camel high, for which the equivalent Sare cămila (The camel jumps) was suggested, should have been translated by explicitation as "Cât sare cămila!" (As high as the camel jumps) to clarify the meaning of the original. As it is, the translation in the movie remains opaque and does not have the same humorous connotation as the English version. However, as Zoe de Linde and Neil Kay point out, "humor, perhaps more than any other feature, highlights the interplay between the three semiotic systems of the medium. Some jokes depend on the synchronicity of word and image, others on the interplay between spoken and written language." (LINDE, KAY 1999: 13). The synchronization of the lines, either original or translated, with the images in the movie may have compensated for the inappropriate character of the Romanian version.


Conclusion

This article aimed to highlight some of the problems a translator may encounter when subtitling a children's movie from English into Romanian. In the case examined, instances of mistranslation or inappropriate translation were noticed, though, generally speaking, the script was acceptably subtitled (some of the lines that were translated correctly have also been pointed out here).

Of the renditions that could be improved, some do not impede the children's reception of the movie: for example, "Don't think I don't see you, little one." is subtitled into Romanian as "Să nu crezi că nu te văd, micuțule". In this particular case, the naturalness of the speech is preserved, without damaging the overall perception of the scene or cutting off the understanding of the message. Others, however, may have negative consequences in this respect, as, for instance, "Morarul! Moara e în coada mea!", where the miller in the second clause was mistranslated as moara - the mill, which makes no sense in the context given. As far as this sentence is concerned, the reception of the scene is disrupted, which may prevent the children from understanding it and possibly confuse them with respect to the global message.

Overall, none of the occasions when the English script was mistranslated or inappropriately translated drastically distorted the meaning of the original, though some resulted in "a lessened potential to dynamize the readers' emotions" (PUNGĂ 2016: 109), more specifically, in a weaker humorous impact. Consequently, the entertainment function of the movie, as well as its formative function, are fulfilled.

References

CARRA, N. (2009) "Translating Humour: The Dubbing of Bridget Jones's Diary into Spanish", in DÍAZ CINTAS, J. (ed.) New Trends in Audiovisual Translation, Great Britain: Cromwell Press Group Ltd, pp. 134-135.

LINDE, Z., KAY, N. (1999) "The Semiotics of Subtitling", Manchester: St Jerome Publishing, pp. 13-14.

NEWMARK, P. (1988) "A Textbook of Translation", Prentice Hall International, pp. 76-77.

PUNGĂ, L. (2016) "To Delete or to Add? Omissions and Additions in Two Romanian Translations of Jack and the Beanstalk", in DEJICA, D. et al. (eds.) Language in the Digital Era. Challenges and Perspectives, Berlin: De Gruyter, pp. 109-119.


Neural aspects in simultaneous interpreting
The role of music in intralingual and interlingual activities

MARIA LAURA MANCA1*, MARTINA COSCI2, MICHELANGELO MAESTRI3, GIULIA RICCI1, GABRIELE SICILIANO1, ENRICA BONANNI1

1 Department of Experimental and Clinical Medicine, University of Pisa, Pisa
2 Civica Scuola Interpreti e Traduttori "Altiero Spinelli", Fondazione Milano, Milan
3 Section of Neurology, University Hospital of Pisa, Pisa
* Corresponding Author; e-mail: [email protected]

Abstract

The correlation of music with the brain state could be reflected in modifications of electrical activity recorded by electroencephalography (EEG). In this study, by recording EEG in three BA students in simultaneous interpreting, we have mapped the hemisphere activities of both shadowing and simultaneous interpreting, before and after brain stimulation provided by listening to Mozart's sonata K448. We chose K448 because this excerpt is associated with a short improvement in spatial-temporal task performance in healthy subjects. We therefore wanted to check whether Mozart's music also had a positive effect on shadowing and/or simultaneous interpreting. Results show that brain activity was higher after listening to sonata K448 than after the shadowing exercise, and that simultaneous translation generated a greater activity than the shadowing exercise. This means that listening to Mozart's music does have a positive impact on brain activities, especially for those which make more use of the right hemisphere. Of such activities, simultaneous interpreting has proved to be a harder mental activity than shadowing. In any case, the role played by music as a tool for training translators seems a good way to go, though further research is needed.

Keywords: neural aspects, simultaneous interpreting, intralingual activities, interlingual activities

Introduction

The main goal of the present study was to investigate the activation of brain areas after performing some specific linguistic mediation tasks, namely shadowing in French and Italian, and simultaneous interpreting from French into Italian and vice versa. In particular, we have tried to answer the following research questions:

i) Which brain hemisphere is more active during some specific language mediation tasks?
ii) Does listening to music have a positive impact on cortical activity?
iii) Are interlingual and intralingual forms of language mediation comparable in terms of mental activity?

Thanks to electroencephalography (EEG), we could map the hemisphere activities of both shadowing and simultaneous interpreting, before and after brain stimulation provided by listening to Mozart's sonata K448.


Regarding simultaneous translation, the process consists of four phases: reception of the message in the source language, message processing (decoding), message re-elaboration (recoding) and, finally, the delivery of that message in the target language (encoding). Interpreting implies the implementation of a set of cognitive processes at the same time, and for this reason it can be described as a type of abnormal communication, as the interpreter is subjected to a superposition of the listening and the speaking phase, as well as to a partial superposition of the reception and production of the message. The cognitive load on the brain is therefore very high (EUGENI 2008, GILE 1985).

Shadowing is the out-loud repetition of a spoken message, in which one tries to repeat a message which is as close as possible to the original text (GRAN 1992). Under optimal conditions, the shadowing exercise can be relatively easy, and the number of errors is generally minimal (GRAN 1992). However, difficulties may arise while shadowing, because of a strong regional accent of the speaker or particularly amplified background noise. Generally speaking, shadowing has characteristics which are similar to those of simultaneous interpreting, because both activities involve two simultaneous cognitive tasks: listening and speaking.

However, these two cognitive tasks also have some differences. First of all, the function of shadowing is not communicative, as it is in the case of simultaneous translation or of a similar intralingual activity, live subtitling through respeaking. In fact, shadowing aims essentially at enhancing the split attention ability of an interpreter. For this reason, shadowing has proven to be a useful tool in exploring the limits of information processing and the human capacity to perform several tasks simultaneously. This makes it an excellent technique for training interpreters, who must first learn to listen and speak another language simultaneously, and then move on to interpreting from one language to another (EUGENI 2008, LAMBERT 1989). This means that it is an exercise, and the addressee of the output is the teacher or the shadower himself/herself. Conversely, simultaneous interpreting and live subtitling through respeaking aim at producing a professional service, namely translating a speech for a specific audience, as spoken text in another language or written text in the same language, respectively (EUGENI 2008). Moreover, from a technological point of view, shadowing is usually performed through headphones and a microphone, but it does not depend on them. On the contrary, simultaneous interpreting and live subtitling through respeaking deeply depend on technology. Last but not least, the quality of a shadowed text must be phonetically the same as the text produced by a simultaneous interpreter: pronunciation must be pleasant to hear. In the case of respeakers, the message must be unambiguous for the machine, so that the software properly recognizes the voice of the respeaker (GILE 1985, GRAN 1992).

Given the above-mentioned similarities between shadowing and live subtitling through respeaking, and given that both the addressee and the technology were only simulated, the analysis cannot be carried out between two non-comparable activities (e.g. simultaneous interpreting and shadowing). On the contrary, it should be carried out on comparable activities: either between interlingual interpretation and intralingual interpretation (or shadowing), or between simulated simultaneous interpreting and simulated live subtitling through respeaking. Given that the focus of this article is on the process and not on the product, interlingual interpretation and simultaneous interpreting will be used as synonyms, as well as intralingual interpretation, shadowing and respeaking.

Through music we can learn much about the human brain, music being an effective means of accessing and/or stimulating specific cerebral circuits (KUČIKIENĖ & PRANINSKIENĖ 2018, TRIMBLE & HESDORFFER 2017). In the present study, we chose Mozart among the possible composers. The rationale for the exposure to Mozart's Sonata K448 is that listening to the first movement of this excerpt has been associated with a short improvement in spatial-temporal task performance in healthy subjects (HETLAND 2000, RAUSCHER, SHAW & KY 1993). The correlation of music with the brain state could be reflected in modifications of electrical activity recorded by EEG (RIDEOUT & LAUBACH 1996). In particular, quantitative EEG is a sensitive tool to substantiate cortical function (VERRUSIO ET AL. 2015), and each frequency band power has a functional significance.

Materials and Method

Three female students, 21-22 years old, right-handed, with no musical talent, were recorded while performing interlingual and intralingual translation activities in French and Italian. All students had completed a BA in conference interpreting at the Faculty of Translation and Interpreting at SSML – Pisa. They performed a series of tasks:

- listening to white noise,
- listening to the first movement of Mozart's K448 sonata,
- shadowing a text in Italian,
- translating a text from French into Italian simultaneously.

These mental activities and the related brain signals were recorded by means of EEG. The protocol was decided by the medical team1, who also analyzed the data. The experimental test was conducted at the Neurological Clinic – Sleep Laboratory of Pisa University Hospital. When subjects entered the laboratory, they were asked to sit on a chair and rest for 5 minutes. Their brain signals were recorded using 19 collodion-applied scalp electrodes, in line with the 10-20 International System (SILVERMAN 1963), a standardized method used to describe the location of scalp electrodes2. Recording length was approximately 50 min, using an electroencephalograph (EB Neuro S.p.A., Florence) for signal acquisition and recording. Impedance was kept below 5 kOhm for all electrodes (an impedance over this level in any electrode should not be accepted as adequate by the American EEG Society and the American Medical EEG Association, two of the largest organizations representing clinical EEG in the world). The sampling rate was set to 256 Hz, because sampling rates of 256 and 512 Hz are considered optimal in normal individuals.

1 EB and GS, neurologists, together with MLM, biomathematician.
2 The 10-20 system is based on the relationship between the location of an electrode and the underlying area of cerebral cortex. Each site has a letter to identify the lobe, and a number or another letter to identify the hemisphere location.


EEG recording was performed by an expert technician, with subjects having their eyes closed, being seated on a comfortable armchair, in a quiet room at a controlled auditory intensity level. Table 1 describes the texts used for the experiment.

The raw EEG traces were saved in ASCII format before being automatically processed using the programming platform Matlab R2019 and its Signal Processing Toolbox (The MathWorks Inc., Natick, MA) for Mac OS X. Signals were first examined by an expert neurologist in order to eliminate EEG segments containing artifacts. Traces were pre-processed using a Butterworth passband filter, in order to filter out the noise coming from eye blinks (low-frequency noise) and muscle movement (high-frequency noise). Then, the Power Spectrum Density (PSD) was computed by Fourier transforming the estimated autocorrelation sequence (MANCA & MURRI 2006), which is found by Welch's method (average of non-overlapped epochs of 2 seconds, over intervals lasting 120 seconds). Finally, we estimated the average alpha, beta, and gamma band powers (KLIMESCH 2012). Concerning brainwaves, the alpha rhythm (8-13 Hz) represents relaxed and comfortable awareness without special attention or concentration: the higher the intensity of alpha waves, the less active the brain. The beta rhythm (13-30 Hz) is usually related to increased alertness, attention/concentration, active thinking and the solving of concrete problems. Finally, the gamma rhythm (30-60 Hz) is generally associated with solving high mental tasks, such as the ones analyzed in this experiment.

To obtain a measure of global activity, we averaged the PSD measures of each channel, at baseline and after each task. The procedure was repeated separately for the channels of the right and left hemispheres. Global activity was estimated by the alpha/beta and alpha/gamma power ratios, aimed at detecting an activation of the brain. In fact, a ratio below the value 1 indicates a larger level in the beta or gamma frequency band, respectively, i.e. a greater activity in the brain. Then, we studied the differences between the two cerebral hemispheres (asymmetry). Hemisphere asymmetry (CACIOPPO, TASSINARY & BERNTSON 2007) was estimated by the natural logarithm (ln) of the right (R) alpha power / left (L) alpha power ratio:

asymmetry = ln(R alpha power / L alpha power)

Because EEG-correlated magnetic resonance imaging studies (NAKAMURA ET AL. 1999) have demonstrated that alpha power is inversely related to neural electrical activity, a ratio below the value 0 indicates a relatively greater activity in the right hemisphere. Following a similar process, for the R alpha power activity we also estimated the natural logarithm of the music/shadowing, music/simultaneous interpretation, and simultaneous interpretation/shadowing ratios. Similarly, a ratio below the value 0 indicates a relatively greater activity in the numerator (music vs. shadowing or simultaneous interpretation, and simultaneous interpretation vs. shadowing).
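The band-power pipeline just described was implemented in Matlab. Purely as an illustration, the fragment below sketches the same steps in Python/SciPy: Butterworth band-pass pre-processing, Welch PSD on non-overlapping 2-second epochs, band-power estimation and the two summary measures (alpha/beta ratio and the ln right/left alpha asymmetry). The channel names, filter order and synthetic data are assumptions introduced for the example, not the authors' actual settings.

```python
# Illustrative Python/SciPy sketch of the band-power pipeline described above
# (the study itself used Matlab R2019 and its Signal Processing Toolbox).
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256  # sampling rate (Hz), as reported in the paper
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 60)}

def preprocess(signal, low=1.0, high=60.0, order=4):
    """Butterworth band-pass filter to attenuate eye-blink (low-frequency)
    and muscle (high-frequency) noise."""
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, signal)

def band_powers(signal):
    """PSD via Welch's method (non-overlapping 2-s epochs), then the power
    in each frequency band, integrated over the band."""
    f, psd = welch(signal, fs=FS, nperseg=2 * FS, noverlap=0)
    return {name: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in BANDS.items()}

def global_ratios(mean_powers):
    """Alpha/beta and alpha/gamma ratios: values below 1 indicate relatively
    stronger beta or gamma activity, i.e. a more active brain."""
    return {"alpha/beta": mean_powers["alpha"] / mean_powers["beta"],
            "alpha/gamma": mean_powers["alpha"] / mean_powers["gamma"]}

def asymmetry(alpha_right, alpha_left):
    """Hemispheric asymmetry: ln(right alpha power / left alpha power).
    Since alpha power is inversely related to neural activity, a negative
    value points to relatively greater right-hemisphere activity."""
    return np.log(alpha_right / alpha_left)

# Example with synthetic data standing in for one 120-s recording interval:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = {ch: preprocess(rng.standard_normal(120 * FS))
           for ch in ["F3", "F4", "C3", "C4", "O1", "O2"]}  # hypothetical channel subset
    powers = {ch: band_powers(sig) for ch, sig in eeg.items()}
    alpha_left = np.mean([powers[ch]["alpha"] for ch in ("F3", "C3", "O1")])
    alpha_right = np.mean([powers[ch]["alpha"] for ch in ("F4", "C4", "O2")])
    print(asymmetry(alpha_right, alpha_left))
```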

Results

As far as global brain activity is concerned, Figure 1a shows that mean alpha activity was at its highest after the shadowing task, while mean beta and gamma activities were at their highest after listening to music. Moreover, mean beta power (reflecting alertness) was larger than mean alpha power after music and white noise, and mean gamma power (reflecting solution strategies) was greater than mean alpha power after listening to music. However, when we compared EEG rhythms after simultaneous interpretation with EEG rhythms at baseline, we found a slight increase in all the frequencies considered (Figure 1b).


Legend, Figures 1a and 1b:
PRE: basal condition
POST-WN: after white noise
POST-M: after Mozart music
POST-IT: after shadowing
POST-FR: after simultaneous interpretation

Figure 2 shows that the mean alpha/beta ratio was below the value 1 after white noise and music, while the mean alpha/gamma ratio was below the value 1 only after music. Finally, both ratios reached their highest values after shadowing.


Legend, Figure 2:
PRE: basal condition
POST-WN: after white noise
POST-M: after Mozart music
POST-IT: after shadowing
POST-FR: after simultaneous interpretation

We then studied asymmetry, in order to understand which brain hemisphere is more active during specific tasks. Figure 3 displays the results in terms of natural logarithms of the right/left alpha power ratio. The logarithms were negative for all tasks, suggesting a relatively greater activity in the right hemisphere.


Furthermore, both the right and left alpha/beta ratios were below 1 after listening to music (Figure 4).

Legend, Figure 4:
POSTM: after Mozart music
POSTIT: after shadowing
POSTFR: after simultaneous interpretation
DX: right
SN: left

Finally, mean right beta power was higher than left beta power after K448, with the post-shadowing and post-simultaneous-interpretation conditions almost overlapping (Figures 5a, 5b).


Legend, Figures 5a and 5b:
POSTM: after Mozart music
POSTIT: after shadowing
POSTFR: after simultaneous interpretation
DX: right
SN: left

Comparing the different tasks, all the natural logarithms of the ratios considered were negative (Figure 6). This suggests a relatively greater activity in the numerator, that is, listening to music produces cortical activity superior to both shadowing and simultaneous translation. Likewise, simultaneous interpretation causes more activity than shadowing.


Discussion

Our preliminary results on a small but homogeneous sample suggest that a relatively greater activity in the right hemisphere was observed after K448. These data are in line with functional lateralization as a principle of the brain's organization (HERVÉ ET AL. 2013). Today, however, theories based on neuroimaging findings suggest a less radical division and hypothesize that the two hemispheres are in balance (KAROLIS ET AL. 2019). We found that the asymmetry towards the right hemisphere was less evident after shadowing and simultaneous interpreting than after music, probably because of the familiarity the subjects have with the tasks required by the experiment, given their BA training in both activities. Furthermore, we found that listening to music increases beta and gamma waves and decreases alpha power, thus confirming previous research in the field (JENNI ET AL. 2017, RAUSCHECKER 2001, RAUSCHER ET AL. 1994, RAUSCHER ET AL. 1995). In this case, of particular interest is the effect the K448 sonata has on linguistic tasks, though the effect is not substantial, especially on shadowing. This may be related to the previous issue, which correlates beta and gamma waves to the perception of music. If the K448 sonata stimulates the brainwaves which are more at stake in the intralingual and interlingual translation tasks, then the activity which intuitively requires more effort, namely simultaneous interpreting, is more positively influenced by listening to music.

Overall, the main results show that brain activity was higher after listening to Mozart than after the shadowing exercise, and that simultaneous interpretation generated a greater activity than the shadowing exercise. As a consequence, the shadowing activity turned out to be the one which involved the least mental effort. This relates to Daniel Gile's model for simultaneous interpreting (GILE 1985), according to which this activity implies a much greater effort than shadowing. In particular, this model describes simultaneous translation as an activity based on some specific "efforts", or mental activities, aimed at perceiving and understanding a speech. The first effort, that of listening to and analyzing the speech, increases in a non-optimal context, such as the one in which interpreters perform their job. While the information must be reproduced in the target language, interpreters make a production effort which varies according to the situation: in the case of strategic hesitations or pauses, aimed at choosing the words and structure of a sentence, this effort increases, whereas when a verbal automatism is triggered, it decreases. Finally, specific interpretation strategies determine a delay in the delivery of information, which leads to a memory effort. In an optimal situation, the efforts are distributed homogeneously among the three phases of interpretation. The sum of the efforts mentioned (listening, production and memory) cannot exceed a given threshold, called processing capacity. During the shadowing activity, the subjects of the study did not need to put much effort into producing a new message, so the whole process became much easier. That is why shadowing is the least difficult activity to perform.
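The capacity constraint just described is often written compactly. The lines below are a sketch of Gile's Effort Model as it is commonly formalized; the coordination effort C belongs to the usual formulation even though it is not singled out in the paragraph above.

```latex
% Sketch of Gile's Effort Model for simultaneous interpreting (commonly given form)
% L = listening/analysis effort, M = memory effort, P = production effort,
% C = coordination effort, TR = total requirements, TA = total available capacity
TR = L + M + P + C, \qquad TR \leq TA
% interpreting proceeds smoothly only while the total requirements stay below
% the interpreter's total available processing capacity
```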

In our study, even if the beta rhythm was not significantly more evident after the linguistic tasks, a slight growth of all three EEG rhythms was observed after simultaneous interpretation in comparison to baseline values. This result could be interpreted as a more widespread neuronal activation, correlated to the fact that the brain has to make the effort of processing many data simultaneously.


This hypothesis should be confirmed with functional magnetic resonance imaging studies in association with EEG. Finally, the limited specific effect of Mozart's music on linguistic exercises is not surprising, considering that the mechanisms producing beneficial results on spatial-temporal tasks are presumably different. The mechanisms activated during spatial-temporal reasoning, for which Rauscher et al. first found the K448 Sonata to be effective (RAUSCHER ET AL. 1993), are predominantly visual, and very different from those at play during simultaneous interpretation, in which the brain perceives and processes language through hearing, stores previously heard information in memory and, finally, generates an equivalent message in the target language.

From a linguistic point of view, it is interesting to draw on the results of a parallel study carried out by Zunino (ZUNINO 2019), with a comparable experimental test (but in English and without music), on a group of subjects with the same characteristics as those of our study. What emerged from the analysis of both target texts is that shadowing proves to be less difficult than simultaneous interpreting, as it does not imply interlingual translation problems. The main strategy adopted by the subjects to correctly shadow the source text was the omission of secondary information when the cognitive load became too high to carry out the activity. Concerning simultaneous interpreting, instead, the subjects reacted positively to the main difficulties, managing in most cases to find good solutions to get around the various obstacles introduced in the source text, by means of strategies like reformulation, compression and summarizing, or even improvisation strategies and strategies based on the intensive use of décalage and echoic memory. Overall, this led to an increase in the memory effort needed to recover specific information. And when the workload was too demanding (mainly because of high speech rate, technical terminology, morpho-syntactic difficulties and, in one case, physiological factors), errors and omissions also occurred, thus compromising the quality of the target text.

In our study, by comparison, where subjects were required to interpret from French into Italian, the subjects found it simpler to interpret, mainly because French is morphologically and syntactically closer to Italian; the main difficulties were linked to very specific terminology and to numbers. As a consequence, many calques were found, resulting either in false friends or in non-words. Generally speaking, the two experiments have led to two different EEG results, with simultaneous interpreting from English into Italian showing a higher involvement of gamma waves, which are responsible for problem-solving activities. This means that shadowing into one's native language can be considered as a point on a continuum that goes from easier to harder, without clear-cut boundaries between interlingual and intralingual simultaneous activities and professions.


Conclusions

Overall, in this proof-of-principle study, our research questions have been answered. Concerning research question number 1 (which brain hemisphere is more active during some specific language mediation tasks?), we found relatively greater activity in the right hemisphere. As for research question number 2 (does listening to music have a positive impact on cortical activity?), listening to Mozart's music does have a positive impact on brain activity, which is more concentrated in the right hemisphere. In this context, simultaneous interpreting has proved to be a harder mental activity than shadowing, with the English-Italian pair proving more challenging than the French-Italian one. In light of these preliminary results, further interdisciplinary studies with quantitative EEG analysis should be carried out, with more subjects and of both genders, with a control group that performs the same tasks without music, and including an objective measurement of the linguistic performance of these subjects, possibly in various language pairs.

Finally, concerning research question number 3 (are interlingual and intralingual forms of language mediation comparable in terms of mental activity?), some considerations need to be made when discussing the data above. If we focus on the profession of live subtitling through respeaking, we need to bear in mind that it does not only consist of shadowing, but also of interfacing with speech recognition software, which uses the audio input to generate captions (EUGENI 2008). Moreover, the verbal component is not the only feature of the source text to be reported: punctuation and other formatting need to be considered too, so that the final result makes sense to the user, which means that words like "comma" and "new line" have to be pronounced out loud (ibidem). Finally, voice commands are used to indicate a change of speaker or song lyrics, in order to provide the best possible replication of the hearing experience for a deaf or hard-of-hearing user (ibidem). All these extra efforts mean that respeakers often need to have a faster voice pace than the speaker they are captioning. Given the similarities between respeaking and shadowing mentioned above, a stimulation of the same brain areas during these two tasks seems plausible (ibidem). However, respeaking presents a number of cognitive challenges that are not required during the shadowing process. In order to better understand the relationship between the two, it would be interesting to carry out further research comparing these activities. Intuitively, the hypothesis is that respeaking and similar reporting or captioning activities imply the same concentration and intensity of brainwaves as those evident in simultaneous interpreting.
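To make the extra processing implied by respeaking more concrete, the toy sketch below shows, in a deliberately simplified form, what turning a respoken stream into captions involves: dictated punctuation ("comma", "full stop") and voice commands (here the hypothetical commands "new line" and "new speaker") have to be recognised and converted, while ordinary words are passed through. The command names and the mapping are invented for illustration and do not correspond to EUGENI (2008) or to any specific speech recognition software.

```python
# Toy illustration only (not any real respeaking software or command set):
# a respeaker dictates words, punctuation and commands aloud; something has to
# map that stream onto formatted captions.
SPOKEN_MARKS = {"comma": ",", "full stop": "."}          # hypothetical spoken punctuation
COMMANDS = {"new line": "\n", "new speaker": "\n- "}     # hypothetical voice commands

def respoken_to_caption(tokens):
    """Turn a list of dictated tokens into caption text."""
    out, i = "", 0
    while i < len(tokens):
        pair = " ".join(tokens[i:i + 2]).lower()
        word = tokens[i].lower()
        if pair in COMMANDS:                         # two-word voice command
            out = out.rstrip() + COMMANDS[pair]; i += 2
        elif pair in SPOKEN_MARKS:                   # two-word punctuation ("full stop")
            out = out.rstrip() + SPOKEN_MARKS[pair] + " "; i += 2
        elif word in SPOKEN_MARKS:                   # one-word punctuation ("comma")
            out = out.rstrip() + SPOKEN_MARKS[word] + " "; i += 1
        else:                                        # ordinary word to be captioned
            out += tokens[i] + " "; i += 1
    return out.strip()

tokens = "good evening comma welcome back full stop new speaker thank you".split()
print(respoken_to_caption(tokens))
# -> "good evening, welcome back.\n- thank you"
```

Even in this toy form, the respeaker has to plan and articulate material that is absent from the source speech, which is consistent with the faster voice pace and the additional cognitive load described above.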



References

CACIOPPO, J.T., TASSINARY, L.G. & G. BERNTSON (2007) “Handbook of Psychophysiology”, Cambridge University Press.

EUGENI, C. (2008) “A Sociolinguistic Approach to Real-Time: Respeaking vs. Shadowing and Simultaneous Interpreting”, in Kellett Bidoli, C.J. & E. Ochse (eds.) English in International Deaf Communication, Linguistic Insights series, 72, Berna: Peter Lang.

GILE, D. (1985) “Le modèle d’efforts et l’équilibre d’interprétation en interprétation simultanée”, in Meta, 30.

GRAN, L. (1992) “Aspetti dell’organizzazione cerebrale del linguaggio: dal monolinguismo all’interpretazione simultanea”, Campanotto.

HERVÉ, P.Y., ZAGO, L., PETIT, L., MAZOYER, B. & N. TZOURIO-MAZOYER (2013) “Revisiting human hemispheric specialization with neuroimaging”, in Trends Cogn Sci. 17, pp. 69-80.

JENNI, R., OECHSLIN, M.S. & C.E. JAMES (2017) “Impact of major and minor mode on EEG frequency range activities of music processing as a function of expertise”, in Neurosci Lett. 647, pp. 159-164.

KAROLIS, V.R., CORBETTA, M. & M. THIEBAUT DE SCHOTTEN (2019) “The architecture of functional lateralisation and its relationship to callosal connectivity in the human brain”, in Nat Commun. 10, 1417.

KLIMESCH, W. (2012) “α-band oscillations, attention, and controlled access to stored information”, in Trends Cogn Sci. 16, pp. 606-617.

KUČIKIENĖ, D. & R. PRANINSKIENĖ (2018) “The impact of music on the bioelectrical oscillations of the brain”, in Acta Med Litu. 25, pp. 101–106.

LAMBERT, S. (1989) “La formation d’interprètes : la méthode cognitive”, in Montréal University press.

MANCA, M.L. & L. MURRI (2006) “Fourier ed il ruolo della sua trasformata nella ricerca neurologica”, in Quaderni Dipartimento di Matematica, Università di Pisa.

NAKAMURA, S., SADATO, N., OOHASHI, T., NISHINA, E., FUWAMOTO, Y. & Y. YONEKURA (1999) “Analysis of music-brain interaction with simultaneous measurement of regional cerebral blood flow and electroencephalogram beta rhythm in human subjects”, in Neurosci Lett. 275, pp. 222-226.

RAUSCHECKER, J.P. (2001) “Cortical plasticity and music”, in Ann N Y Acad Sci. 930, pp. 330-336.



RAUSCHER, F.H., SHAW, G.L. & C.N. KY (1993) “Music and spatial task performance”, in Nature, 365, p. 611.

RAUSCHER, F.H., SHAW, G.L., LEVINE, L.J., KY, K.N. & E.L. WRIGHT (1994) “Music and spatial task performance: a causal relationship”, paper presented at the American Psychological Association Annual Meeting, Los Angeles, CA.

RAUSCHER, F.H., SHAW, G.L. & C.N. KY (1995) “Listening to Mozart enhances spatial-temporal reasoning: towards a neurophysiological basis”, in Neurosci Lett. 185, pp. 44-47.

RICCARDI, A. (2003) “Dalla traduzione all’interpretazione. Studi d’interpretazione simultanea”, Milano: Edizioni Universitarie di Lettere Economia Diritto.

RIDEOUT, B.E. & C.M. LAUBACH (1996) “EEG correlates of enhanced spatial performance following exposure to music”, in Percept Mot Skills. 82, pp. 427-432.

RONDAL, J.A. & X. SERON (2003) “Troubles du langage. Bases théoriques, diagnostic et rééducation”, Pierre Mardaga, Hayen, Sprimont (Belgium).

SALMON, L. & M. MARIANI (2008) “Bilinguismo e traduzione. Dalla neurolinguistica alla didattica delle lingue”, Milano: Franco Angeli.

SILVERMAN, D. (1963) “The Rationale and History of the 10-20 System of the International Federation”, in Am. J. EEG Technol. 3, pp. 17–2.

TRIMBLE, M. & D. HESDORFFER (2017) “Music and the brain: the neuroscience of music and musical appreciation”, in B J Psych Int. 14, pp. 28–31.

VERRUSIO, W., ETTORRE, E., VICENZINI, E., VANACORE, N., CACCIAFESTA M. & O. MECARELLI (2015) “The Mozart Effect: A quantitative EEG study”, in Conscious Cogn. 35, pp. 150-155.

ZUNINO, F. (2019) “Interpretazione simultanea e processi cognitivi. Quando le neuroscienze incontrano l’interpretazione: analisi del cervello dell’interprete durante l’interpretazione simultanea”, Pisa: SSML, unpublished BA thesis.


“La concione a Bologna” (Dario Fo, 1999), by kind permission of the Rame - Fo archive - www.archivio.francarame.it

graphic design ADVERTO ITALIA | Adverto.Italia - layout UGLY AGENCY | www.uglyagency.com
