A topic that has captured my interest from a very young age is the neurolinguistic perspective on bilingualism and interpreting; in other words, how does our brain make such “magic” possible? The person who, years ago, answered my question most completely is Laura Gran (1992, 1999), interpreter and interpreting lecturer at the SSLMIT in Trieste, Italy. If you share the same curiosity, I would like to offer a brief summary of what I discovered about this fascinating topic.
It is widely known that brain hemispheres’ functions are not symmetrical, but let’s be more specific. Speaking is a motor activity (Gran 1999, p. 208) engaging both hemispheres, in particular:
- the left hemisphere focuses on decoding and producing phonological, morphological, syntactic and lexical features;
- the right hemisphere deals with the interpretation of implicit meanings (e.g. inferences drawn from general knowledge and the situational context) and seems better at recognising signs of emotion and other paralinguistic cues, which are very important for understanding metaphors, sarcastic expressions, irony, etc. (Gran 1999, p. 212; Gran and Fabbro 1995) —if you watch The Big Bang Theory, the answer is “Yes, Sheldon Cooper must be suffering from right hemisphere damage!”—.
Bilingualism and polyglossia
Studies have already revealed that the brain structure of bilinguals differs from that of monolinguals, but there is great controversy when it comes to defining what a bilingual actually is. Based on my own humble experience, I completely agree with Gran’s view that it is not black or white. There are so many differences among bilinguals that they should instead be considered as standing at different points along a multidimensional continuum, with significant variation in —phonetic, morphological, syntactic, lexical and semantic— structure and in linguistic skills —i.e. comprehension, and oral, visual and gestural production—. A very good point was made by Grosjean (1982) —an expert in bilingualism— who claims that bilinguals cannot be regarded as the sum of two monolinguals, since —among other aspects— the coexistence and constant interaction of the two languages imply a different organisation of linguistic functions.
The most relevant factors determining the cerebral structure of the two languages are:
- age of second language (L2) acquisition, differentiating early from late bilinguals;
- acquisition method —i.e. spontaneous or metalinguistic/at school—, differentiating compound from coordinate bilinguals —i.e. learning L2 in a bilingual context or in different socioemotional environments, respectively—;
- gender: in right-handed male monolinguals, language is lateralised mostly in the left hemisphere, whereas it is more symmetrical in females.
The early and simultaneous acquisition of the two languages results in the lateralisation of both languages in the left hemisphere. By contrast, bilinguals who acquire their L2 later and/or in different socioemotional contexts tend to have separate cognitive systems for the two languages.
Experimental studies show that this cerebral representation can change as a result of intense study of the languages and of practising simultaneous interpreting. In both cases, the languages, when activated, no longer seem to be lateralised mainly in the left hemisphere, but rather symmetrically in both hemispheres. This should not be interpreted as a shift from one hemisphere to the other, but rather as a shift of attention, which is a fundamental feature of interpreting (Gran 1999, p. 215).
The activation threshold hypothesis
According to the activation threshold hypothesis, for a word or expression (item) to be comprehended or produced, its activation must exceed that of all possible alternatives —e.g. its synonyms—. Consequently, the activation threshold of every element competing with the item is raised: the item is activated, whereas its synonyms and other words belonging to the same semantic field are inhibited. This process happens in both monolinguals and bilinguals, the only difference being that the latter inhibit not only synonyms within one language, but also all the equivalents in the other. In fact, when a bilingual —or multilingual— person wants to speak a certain language, the activation threshold of the other language is raised in order to avoid interference during speech, though not to the point that loanwords or mixed expressions are completely avoided, or that comprehension of the other language is impaired. This inhibition is never total: even when a bilingual subject interacts with monolinguals, the other language system is never completely deactivated (Gran 1999, p. 223-224).
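The raising and lowering of thresholds described above can be sketched as a tiny toy model. This is purely illustrative: the words, penalty values and threshold scale are my own assumptions, not taken from Gran (1999); the point is only that selecting an item partially inhibits same-language synonyms and other-language equivalents, without ever deactivating them completely.

```python
# Toy model of the activation threshold hypothesis for a bilingual lexicon.
# All names and numeric values are illustrative assumptions.

lexicon = {
    # item: (language, semantic field)
    "dog":   ("EN", "animal"),
    "hound": ("EN", "animal"),
    "cane":  ("IT", "animal"),
    "house": ("EN", "building"),
    "casa":  ("IT", "building"),
}

# Lower threshold = easier to activate. Everything starts at a neutral level.
thresholds = {word: 0.5 for word in lexicon}

def select(item, synonym_penalty=0.3, cross_language_penalty=0.2):
    """Activate `item` and raise the thresholds of its competitors.

    Competitors are same-field words in the same language (synonyms) and
    equivalents in the other language. Inhibition is partial: thresholds
    are raised but capped, so the other language is never fully shut off.
    """
    lang, field = lexicon[item]
    thresholds[item] = max(0.1, thresholds[item] - 0.2)  # item becomes easier to retrieve
    for word, (w_lang, w_field) in lexicon.items():
        if word == item or w_field != field:
            continue  # unrelated semantic fields are left alone
        penalty = synonym_penalty if w_lang == lang else cross_language_penalty
        thresholds[word] = min(1.0, thresholds[word] + penalty)

select("dog")
assert thresholds["dog"] < thresholds["hound"]   # synonym inhibited
assert thresholds["dog"] < thresholds["cane"]    # other-language equivalent inhibited
assert thresholds["cane"] < 1.0                  # but never totally deactivated
```

The asymmetric penalties mirror the idea that within-language synonyms compete most directly, while the whole other-language system is held down more diffusely.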
What happens in simultaneous interpreting is, of course, different from the daily routine of a bilingual. Interpreters need to activate the target-language (TL) system in order to render the decoded source-language (SL) message, while briefly retaining the SL information segments that keep coming in. Consequently, both language systems are activated, though not necessarily to the same degree; the SL probably has a higher activation threshold than the TL. According to Gran (1999, p. 224), we can suppose that the thresholds of both the SL and the TL are lowered, so that the interpreter can draw on both language systems simultaneously.
Moreover, when a verbal message reaches the brain, two parallel processes take place: memorisation and decoding. The SL sounds are stored in memory for approximately one second and are then converted into “meaning and form” in the short-term memory until they are decoded. At that point, the form is erased and only the meaning is retained in memory before being rendered in the TL.
The choice of the ear in simultaneous interpreting
Another very interesting point is the choice of the ear. As you may have noticed, interpreters tend to have only one ear completely covered by the headphones, whereas the other is covered only partially or not at all. This second ear allows interpreters to monitor their own TL output and its quality. As in my case, the tendency among interpreters is to listen to the SL message with the left ear and to use the right ear for the TL quality check. Gran (1999, p. 225) explains this spontaneous common choice as follows: the left ear (right hemisphere) seems to process the SL message better, whereas the right ear (left hemisphere) is better at monitoring the TL production.
What’s your “ear-habit on the phone”?
A final question to conclude this complex topic more lightly: among the multilingual people reading this post, do you have different “ear-habits” when speaking on the phone? In particular, do you hold your mobile phone to the left ear or to the right ear for a conversation in your mother tongue? Does the ear change for a conversation in languages other than your mother tongue? I have noticed some differences myself and I am very curious to hear about you. Please leave a comment to share your “ear-habit on the phone”! It would be much appreciated and indeed very interesting! If you liked this post, you might be interested in reading “The Foreign Language Effect – Mandela was right!”, which explains that a person does not think in the same way in his/her mother tongue as in a foreign language, and that this difference may affect our decision-making process.
Have a nice week everyone!
GRAN, L. L’interpretazione simultanea: premesse di neurolinguistica. In FALBO, C., RUSSO, M. and STRANIERO SERGIO, F. (eds). Interpretazione simultanea e consecutiva. Problemi teorici e metodologie didattiche. Milano: Ulrico Hoepli Editore S.p.A., 1999, p. 207-227.
GRAN, L. and FABBRO, F. Ear asymmetry in simultaneous interpretation. In LAMBERT, S. (ed). A Cognitive Approach to Interpreter Training. Amsterdam/Philadelphia: John Benjamins, 1995.
GRAN, L. Aspetti dell’organizzazione cerebrale del linguaggio: dal monolinguismo all’interpretazione simultanea. Udine: Campanotto, 1992.
GROSJEAN, F. Life with Two Languages: An Introduction to Bilingualism. Cambridge MA: Harvard University Press, 1982.