I’ve just returned from the 27th annual conference organized by the Canadian Association for Translation Studies, which was held at Brock University in St. Catharines, Ontario this year. The theme was “Translation: Territories, Memory, History”, and although a number of the talks addressed topics you might expect to find under this theme, namely the history of translated texts in regions like Asia and Latin America, others were more broadly related, addressing subjects like the history of language technologies in Canada, or “new territories” like fansubbing norms. Since many of these topics are likely to be of interest to people who weren’t able to attend, I thought I would summarize some of my favourite presentations and offer a few thoughts on the wider implications of these research questions. Very roughly, the talks I most enjoyed can be grouped into three broad, and somewhat overlapping, categories that also match my own research interests: technological, professional and pedagogical concerns.
Two talks on technology-related topics were particularly intriguing. Geneviève Has, a doctoral candidate at Université Laval, spoke about the history of language technologies in Canada, focusing particularly on the role of the federal government in projects like TAUM-MÉTÉO, the very successful machine-translation system for meteorology texts, and RALI, a lab that developed programs like the bilingual concordancer TransSearch. Has explored some of the reasons why entire research labs or specific research projects have been dismantled, noting that when the emphasis is placed on producing marketable results within a set period of time, funding is often pulled from projects whose results are not what the funders are looking for, even if the lab is producing useful research. For instance, the quest to develop a machine-translation system as successful as TAUM-MÉTÉO led to later systems being abandoned when their results were not as impressive.
Valérie Florentin, a doctoral candidate at the Université de Montréal, meanwhile, gave a fascinating talk on fansubbing norms. In the English-to-French community she studied, online forum discussions between the fansubbers showed how they wanted to ensure the subtitles would be easily understood by francophones in various countries. Thus, they avoided regionalisms as well as expressions and cultural references they thought typical viewers would not understand. They also followed style guidelines to ensure the subtitles, on which various people had collaborated, would be consistent in terms of things like whether characters should use tu or vous to address one another. In her conclusions, she wondered whether the collaborative model used by this fansubbing community (in which about eight people translate and review the subtitles for any given episode) could be useful in professional communities. Recognizing that it would be unfeasible to expect companies to pay this many people to work on a project (even if each person was doing less work than they would if they prepared the subtitles alone), she argued that the model could be useful in training contexts, allowing students to debate with one another about cultural concerns and equivalents, while also following a set of style guidelines to ensure consistency in the final product. I found this suggestion particularly relevant to my own teaching: I like to try collaborative models with my students, and I have argued in other talks that crowdsourcing models often offer elements that could be adopted in professional translation, such as greater visibility for the translators who work on projects.
Marco Fiola, from Ryerson University, and Aysha Abughazzi, from Jordan University of Science and Technology, both spoke on translation quality. While Marco’s presentation explored competing definitions of translation quality and specifically addressed issues like understandability and usability, Aysha spoke about translation quality in Jordan, discussing the qualifications of translators and the quality of translations she obtained from various agencies. Both of these talks underscored for me the difficulty translators and translation scholars continue to have in defining quality and in determining what “professional” translation should look like.
Philippe Caignon, an associate professor at Concordia University, offered an excellent presentation on concept mapping and cognitive mapping, illustrating how these can be useful for students in terminology courses as an alternative to tree diagrams. Although he didn’t show the software itself, he did mention that CmapTools can be used to create concept maps fairly easily. As I listened to his talk, I decided I could incorporate concept mapping into the undergraduate Theory of Translation course I usually teach, to help students think about the terms translation and translation studies. Examples like this one would help students see how they can visualize translation, and if they had a few minutes to work on their concept maps individually before discussing them with the rest of the class, I think we would be able to explore the different ways translation can be understood. More on this after I’ve tried it out in class.