Snippets from the Monterey Forum 2015

I’ve just returned from Monterey, California, where I was at the Educating Translators, Interpreters and Localizers in an Evolving World conference at the Middlebury Institute of International Studies at Monterey. As the title suggests, the talks focused on translation and interpreting pedagogy, and I came away with some new ideas after a number of very interesting presentations. Most of the day consisted of parallel sessions, so obviously I wasn’t able to attend everything. I’ll just summarize a few of the talks I particularly enjoyed. I’ve grouped them into three broad categories: those that discussed how to design new course offerings or fundamentally reshape the way a course is offered, those that touched on online teaching, and those that offered ideas or activities that could be integrated into existing translation courses.

Course design

On Saturday morning, I listened to Kayoko Takeda from Rikkyo University in Japan speak about developing a general education course on “Translation and interpretation literacy” for undergraduate students at her university. It focused on topics like the roles translators and interpreters play in society and professional issues translators and interpreters face without actually having students practice translation. What I found most intriguing was the way the course was designed: three instructors co-taught the course, and 14 guest speakers came to the Saturday-morning class to speak about topics like crowdsourcing and machine translation, business practices, subtitling, Bible translation, and community translation. These guest speakers would give students tasks to do before their talks (e.g. consulting a website, reading blog posts or articles), and then students would participate in the lectures, often by producing in-class essays on topics like rules, remuneration and rewards, or technology.

Methods and activities for online teaching

Saturday morning and afternoon included several presentations about teaching translation and interpretation in an online environment. Here are some of my favourites:

Suzanne Zeng talked about the University of Hawaii’s Interactive Television System (HITS), which allows her to teach up to three groups of students simultaneously via an interactive closed-circuit TV system. While Suzanne teaches a group of students in one room, other groups of students at university campuses on various Hawaiian islands sit in similar classrooms and participate in the class via video. All students have microphones at their desks, and when they push the button to talk, the video cameras are programmed to zoom in on the speakers, so everyone else can hear what their peers have to say and see them clearly while they say it. The shared screens also allow everyone to see any PowerPoint presentations the instructor might use, and any notes he or she might write on the whiteboard. I liked the way this system works because instructors can teach both in person and online at the same time. Having all students together simultaneously allows everyone to participate in discussions and group exercises, regardless of which island they might live on. Suzanne did mention that she has to monitor the video feeds while she is teaching so she can make sure the students in the other locations are fully engaged: addressing them directly helps remind these students that she is able to see them and is paying attention to them as well as to the students in her classroom. She also noted that the system doesn’t allow for any flexibility in timing: class has to begin and end at a precise time because that’s when the video feed starts and stops. If she is in the middle of a sentence with just a few seconds left on the counter, she’ll be cut off as soon as the clock reaches zero, and students in the remote locations won’t hear the end of what she had to say. That means instructors need to be very conscious of the clock with a system like this.

Qjinti Oblitas and Andrew Clifford, from my department at York University, offered some insight into how their interpretation students develop close ties with their peers, even though the first year of our Master’s in Conference Interpreting program is offered online. Through a variety of sometimes humorous examples, Qjinti and Andrew showed that students engaged with one another outside of the virtual classroom via private Facebook groups, text messages, Skype chats, and apps like WeChat, and they argued that the students felt a real sense of community with their peers—so much so, in fact, that many of the students found ways to meet one another in person if they lived in the same country or were travelling to a place near one of their classmates.

Finally, Cristina Silva said that every strategy for offline teaching could be adapted for the online classroom. She offered a variety of examples, some of which I use already, and others that I will consider using in the future—though I should point out that many of these ideas would work just as easily in a face-to-face classroom. Cristina’s suggestions included having students translate together via Google Docs, having students practice editing machine translations while using screen-sharing software so that their classmates can see their results, and encouraging students to use Dragon NaturallySpeaking to record themselves while dictating a sight translation, to see whether their productivity increased compared with typing out the translations themselves.

Activities for the translation classroom

Kent State University’s Erik Angelone offered a new way of having students assess their translation process. Arguing that other process-focused research methods like think-aloud protocols and keystroke logging were too time-consuming or too complicated to integrate into the classroom, Erik proposed using screen recorders like Blueberry Flashback Express to have students record their computer screens while they worked. Then, when students look back at these recordings, they would be able to see, for instance, whether they hesitated before translating a word but did not consult an electronic resource, which might indicate that the translation needs to be double-checked. Integrating screen recordings into the classroom would also allow students to learn from the methods other students or even professional translators had used: how do others deal with distractions like email alerts, for instance? Or how did others research a problematic word or phrase? I thought this was a very helpful idea for getting students to think about how they translate and whether their method could be more effective. One audience member did mention that the disadvantage of screen recordings is that they don’t show what students are doing off their computers (e.g. consulting a paper dictionary), but Erik suggested that students could comment on their screen recordings afterwards in a retrospective interview. Of course, they could comment more informally as well, by adding a few written remarks at the end of their recording to describe any of their research techniques that wouldn’t show up in the screen recording. I’m going to integrate an activity like this into my introductory translation class next term, and after I do, I’ll write a short post about the results.

There were other talks I enjoyed as well, but this post is getting quite long. I think I’ll end with a link to the tweets that came out of the conference, which, though short, give a good overview of a larger selection of talks. You can take a look at the tweet compilation on Storify here.

Some of my favourite talks from the CATS conference at Brock University

I’ve just returned from the 27th annual conference organized by the Canadian Association for Translation Studies, which was held at Brock University in St. Catharines, Ontario this year. The theme was “Translation: Territories, Memory, History”, and although a number of the talks addressed topics you might expect to find in this theme, namely the history of translated texts in regions like Asia, Latin America and Brazil, others were more broadly related, addressing subjects like the history of language technologies in Canada, or “new territories” like fansubbing norms. Since many of these topics are likely to interest people who weren’t able to attend, I thought I would summarize some of my favourite presentations and offer a few thoughts on the wider implications of these research questions. Very roughly, the talks I most enjoyed can be grouped into three broad, and somewhat overlapping, categories that also match my own research interests: technological, professional and pedagogical concerns.

Technological Concerns

Two talks on technology-related topics were particularly intriguing: Geneviève Has, a doctoral candidate at Université Laval, spoke about the history of language technologies in Canada, focusing particularly on the role of the federal government in projects like TAUM-MÉTÉO, the very successful machine-translation system for meteorology texts, and RALI, a lab that developed programs like the bilingual concordancer TransSearch. Has explored some of the reasons why entire research labs or specific research projects had been dismantled, and noted that when emphasis is placed on producing marketable results within a set period of time, funding is often pulled from projects if the results are not what the funders are looking for, even if useful research is being produced by the lab. For instance, the quest to develop a machine translation system as successful as TAUM-MÉTÉO led to later systems being abandoned when the results were not as impressive.

Valérie Florentin, a doctoral candidate at the Université de Montréal, meanwhile, gave a fascinating talk on fansubbing norms, noting that in the English to French community she studied, online forum discussions between the fansubbers showed how they wanted to ensure the subtitles would be easily understood by francophones in various countries. Thus, they avoided regionalisms as well as expressions and cultural references they thought typical viewers would not understand. They also followed style guidelines to ensure the subtitles, on which various people had collaborated, would be consistent in terms of things like whether characters should use tu or vous to address one another. In her conclusions, she wondered whether the collaborative model used by this fansubbing community (in which about eight people translate and review the subtitles for any given episode) could be useful in professional communities. Recognizing that it would be unfeasible to expect companies to pay this many people to work on a project (even if each person was doing less work than they would if they prepared the subtitles alone), she argued that the model could be useful in training contexts, allowing students to debate with one another about cultural concerns and equivalents, while also following a set of style guidelines to ensure consistency in the final product. I found this suggestion particularly relevant to my own teaching, since I like to try collaborative models with my students, and since I have argued in other talks that crowdsourcing models often offer elements that could be adopted in professional translation, such as greater visibility for the translators who work on projects.

Professional Concerns

Marco Fiola, from Ryerson University, and Aysha Abughazzi, from Jordan University of Science and Technology, both spoke on translation quality. While Marco’s presentation explored competing definitions of translation quality and specifically addressed issues like understandability and usability, Aysha spoke about translation quality in Jordan, discussing the qualifications of translators and the quality of translations she obtained from various agencies. Both of these talks underscored for me the difficulty translators and translation scholars continue to have in defining quality and in determining what “professional” translation should look like.

Pedagogical Concerns

Philippe Caignon, an associate professor at Concordia University, offered an excellent presentation on concept mapping and cognitive mapping, illustrating how these can be useful for students in terminology courses as an alternative to tree diagrams. Although he didn’t show the software itself, he did mention that Cmap Tools can be used to create concept maps fairly easily. As I listened to his talk, I decided I could incorporate concept mapping into the undergraduate Theory of Translation course I usually teach, to help students think about the terms translation and translation studies. I think examples like this one would help students see how they can visualize translation, and if they had a few minutes to work on their concept map individually before discussing their map with the rest of the class, I think we would be able to explore the different ways translation can be understood. More on this after I’ve tried it out in class.

ACFAS Conference

I’ve just returned from Quebec City, where I was attending the 81st Congress of the Association francophone pour le savoir (ACFAS), which took place at the Université Laval this year. It was the first time I’d been to an ACFAS event, which, for those of you who might not know, is similar to the Congress of the Humanities and Social Sciences in that a number of conferences from different disciplines take place there, each organized by a different group of scholars. Unlike the Congress of the Humanities and Social Sciences, which is held at universities across Canada and is bilingual, ACFAS is usually hosted by Quebec universities and takes place entirely in French.

This year, three translation-related conferences were taking place at ACFAS, and I was able to attend two of them: La formation aux professions langagières : nouvelles tendances (Training Language Professionals: New Trends), which took place on Wednesday, and La traduction comme frontière (Translation as Borders), which took place Thursday and Friday. Unfortunately, I had to miss the third conference, Langues et technologies : chercheurs, praticiens et gestionnaires se donnent rendez-vous (Languages and Technologies: A Meeting of Researchers, Practitioners and Managers), because it was taking place at the same time as the conference on translation as borders, where I was presenting a paper. But here are a few points I found interesting and useful at the two conferences I did manage to attend:

La formation aux professions langagières: Nouvelles tendances
This conference gave me a lot of practical ideas to integrate into my courses next year. For instance, I really enjoyed the presentation by Mathieu Leblanc, who carried out an ethnographic study at three Language Service Providers (one public and two private) several years ago. These three LSPs each had at least 35 employees, including new and experienced translators, and he spent one month at each one, conducting interviews and observing workplace practices. (Mathieu presented some of the data from this study at the CATS conference last year. I wrote about it in this post). Although his research goal had been to study translator attitudes toward tools like Translation Memories, the data he gathered during his fieldwork also allowed him to explore questions like “What do translators think about university training programs?” He noted that although both novice and experienced translators felt that university training was good overall, some areas could still be improved: students could be better prepared to meet the productivity demands they will encounter in the workplace, taught not to rely so extensively on tools like Translation Memories, and encouraged to be more critical of sources and translations.

The presentation by Université de Sherbrooke doctoral candidate Fouad El-Karnichi focused on converting traditional courses to online environments, and I learned that other universities are using a variety of platforms to offer real-time translation courses online. At Glendon, we’ve adopted Adobe Connect for the Master of Conference Interpreting, but the Université du Québec à Trois-Rivières is using Via for their new online BA in translation. I’ll have to take a look at it to see how it works. Fouad has just posted a few of his own thoughts on the ACFAS conference. You can read them on his blog here.

Finally, Éric Poirier, from the Université du Québec à Trois-Rivières, described a number of activities that could be integrated into a translation course to help familiarize students with online documentary resources like dictionaries, corpora, and concordancers. Here are a few of the activities I found interesting:

  • Have students use a corpus to find collocations for a base word (e.g. winter + ~cold = harsh)
  • Have students read one of the language columns in Language Update and then translate the word that’s been discussed
  • Have students practice using dictionaries to distinguish between paronyms like affect and effect
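The first of these activities is easy to demonstrate in code. Here is a minimal sketch, in Python, of a collocation lookup that counts which words co-occur with a base word inside a small window; the toy corpus, the function name, and the window size are all hypothetical illustrations, not part of Éric’s materials, and a real exercise would use a large reference corpus:

```python
from collections import Counter

def collocates(tokens, base, window=2):
    """Count words appearing within `window` positions of `base`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == base:
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:  # skip the base word itself
                    counts[tokens[j]] += 1
    return counts

# Toy tokenized corpus; "winter" keeps appearing near "harsh".
text = ("a harsh winter followed the mild autumn ; "
        "another harsh winter brought heavy snow ; "
        "the long winter was harsh and cold").split()

print(collocates(text, "winter").most_common(2))
# → [('harsh', 3), ('the', 2)]
```

A production-quality version would rank candidates with an association measure such as mutual information rather than raw counts, but the principle students would see is the same: the words that keep turning up next to the base word are its collocates.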

In an online course, these kinds of activities could be integrated into the course website via an online form or a quiz that needs to be completed.

Other presentations were very interesting as well, but this post is getting a little long, and I also wanted to discuss some of the talks from the second conference.

La traduction comme frontière
Although several presenters cancelled their talks on the first day, we still had some very stimulating discussions about translation as borders, whether these borders are real, imagined, pragmatic, semantic, political, ideological or something else entirely. Two papers were particularly thought-provoking (at least to me): Chantal Gagnon, from the Université de Montréal, spoke about Canadian Throne Speeches since 1970, with particular emphasis on the words “Canada”, “Canadien/canadien” and “Canadian” in these speeches. The fact that the number of occurrences of these words in English and French differed was not really surprising, since Chantal had found similar differences in other Canadian speeches, but the fact that the 2011 Throne Speech under Prime Minister Harper differed from the others was very intriguing. Finally, Alvaro Echeverri, also from the Université de Montréal, raised some very illuminating questions about the limits of translation, particularly with respect to how we might define the term translation. Based on work by Maria Tymoczko, he proposed studying the corpus of texts before trying to determine what should be considered a translation: that way, researchers will know what kinds of translations/adaptations/inspirations to include.

So all in all, these three days in Quebec City were very stimulating, and I’m anxious to incorporate some of these ideas into my courses next year and my research this summer.

Highlights from the 12th Portsmouth Translation Conference

I returned from the UK about two weeks ago, and now that I’ve had some time to catch up on the marking and course prep work I missed while I was away, I can finally post a brief overview of some of the talks I attended at the 12th Portsmouth Translation Conference, the theme of which was “‘Those Who Can, Teach’: Translation, Interpreting and Training.” The one-day event was packed, with a 9 a.m. plenary session by Dorothy Kelly, followed by five parallel sessions throughout the morning, three parallel training-themed workshops right after lunch, and then another series of four to five parallel sessions for the rest of the afternoon until the 5 p.m. closing plenary by Daniel Toudic.

Obviously, I got just a glimpse of the entire conference, as I attended only one talk from each of the parallel sessions. But I came back with some new thoughts on teaching techniques I could integrate into my classes, and I met some delegates who were interested in the new Master of Conference Interpreting program we’ve introduced here at Glendon (which was what I had gone to the conference to speak about). This blog post will cover three of the presentations I particularly enjoyed, along with the final plenary by Daniel Toudic.

I attended three sessions in the morning: one by Justyna Giczela-Pastwa on how relevant undergraduate and graduate translation students consider translation theory, another by Akiko Sakamoto, who spoke about the positive and negative experiences of offering optional online translation workshops to students at the University of Leicester, and a final one by Agata Sadza, who spoke about developing a project management course for students at London Metropolitan University.

In Justyna’s talk, she presented some results from a survey of undergraduate and graduate students who were asked various questions about the relevance of translation theory. Interestingly, while most of the undergraduate students (67-70%, depending on the group) found translation theory was “almost useful” to their practice, the graduate students were more divided, with 46% responding that it was almost useful and 54% responding that it was mainly irrelevant. The MA program at the University of Gdansk, where Justyna conducted the survey, is both practical and theoretical, but has more theory than the BA level, so most of those attending Justyna’s talk (including me) were a little surprised to see that the MA students would find theory less relevant than the undergraduates. I think the results show how important it is for instructors to draw clear links between theory and practice in both undergraduate and graduate courses, to help students feel that the theory they’re learning is relevant to practical translation problems.

From Akiko’s presentation about online practical workshops for translation students, I learned about the free screen recording software BB Flashback Express, which she encouraged students to use to record themselves as they worked. Students would then post sample recordings to an online discussion forum so peers (and, to a more limited extent, the instructors) could give them feedback on their translation process. This was one solution they had to compensate for the fact that few of the students had the same language pairs and would therefore be able to offer one another very little direct feedback on the translated product. The process, at least, would be something more participants would be able to comment on.

Later, Agata spoke about the logistics of developing a one-semester project management course for students enrolled in a graduate, practice-oriented translation program. Students were broken up into groups, assigned a 6000-word text, and generally left to manage the project on their own. The class met together formally only three times (for three hours each session) to discuss progress, address problems and concerns, etc. Although I’ve incorporated group projects into my own classes (as I’ve discussed here), I’ve never run a project of the size Agata described. Moreover, Agata had some good advice to share: before breaking the class into groups, and before describing the format of the course, she asked each student about their interests, including the fields they would like to specialize in and the types of jobs (e.g. terminology, revision) that interested them the most. That helped to ensure the various interests were more evenly split among the groups. She also had students write a report about their experiences, but made sure to give them guidelines to follow, as students at this level were not all sure what sorts of things should be included in a report.

The final talk was a keynote address by Daniel Toudic, from Université Rennes 2, who spoke about the Optimale (Optimising Professional Translator Training in a Multilingual Europe) project and presented some data from a survey of over 700 non-public-sector European language service employers drawn largely from the European Union of Associations of Translation Companies.

Some of the survey results I found relevant were the fact that employers are seeking, among other things, graduates who can produce 100% quality, who can identify client requirements and who can define the resources required for a project. Interestingly, many of the skills employers did not generally find essential were technology-related: understanding software/video game localization, for instance, as well as post-editing machine translation, pre-editing texts for machine translation, and using desktop publishing tools. Some respondents did note, however, that skills like pre- and post-editing would be needed in the near future. You can find a PowerPoint presentation detailing the survey methodology and results on the Optimale website here, if you’re interested in taking a closer look.

Highlights from the Translation in Contexts of Official Multilingualism conference

As anyone who browses through enough of this blog will likely discover, my research interests are rather varied. I love technology, and I’ve presented and published papers and posts on crowdsourcing, website translation, and translator blogs. I spend a lot of time teaching, so I often post blog entries about my experiences in the classroom. But I also love history and politics—so much so, in fact, that my doctoral thesis focused on the English and French translations of non-fiction texts related to Quebec nationalism, independence movements and the sovereignty referendums. So this month I’m attending two very different conferences held two weeks (and two continents) apart: the Translation in Contexts of Official Multilingualism conference in Moncton, New Brunswick, and the 12th Portsmouth conference “Those who can, teach”, in the UK. I’ve just returned from the Moncton conference, and I’ll be flying to the UK later this week.

Writing more than just a brief overview of the two conferences is beyond the scope of a short blog post (which is unfortunately all I have time to write), so I’ll share a few thoughts from the Moncton conference right now, and a few comments about Portsmouth later this month.

Some of the presentations I found particularly interesting were Chantal Gagnon’s presentation on Liberal, Bloc Québécois and Parti Québécois translation policies around the time of the 1995 Quebec sovereignty referendum, Kyle Conway’s research on (non)translation policies at Radio-Canada and the CBC, and Mathieu Leblanc’s talk about translation in a Moncton public-service agency.

Gagnon’s comparison of speeches made by the Bloc Québécois, Parti Québécois and Liberal leaders during and after the 1995 sovereignty referendum really underscored, to me at least, the advantages of having an official translation policy: while the Liberal Party was able to target voters differently by adapting the French and English versions of speeches to the two audiences, the speeches made by politicians from the Quebec parties (Bloc and PQ) were translated in newspapers by journalists. Thus, only partial translations of the speeches were available, and these translations often contained minor shifts in meaning and omissions of politeness markers that the Quebec politicians may have wanted to retain. Not providing an official English translation meant the two Quebec parties weren’t able to control the message English-speaking Canadians (and English speakers outside the country) were receiving.

Conway, by examining statements made by policymakers and executives in the 1960s and 1990s, explored the question of translated news at the CBC and Radio-Canada. His presentation compared the current style of presenting news to Canadians, namely having two separate, but parallel, national news services to report on events and interview Canadian figures, and an alternative model periodically recommended by policymakers who wanted to see more bilingual or translated news. For instance, a politician’s speech might be broadcast in French across the country, but subtitles would be added to broadcasts appearing on English networks. Conway explored why this alternative model has not been successful in Canada, raising questions along the way about how French- and English-speaking Canadians understand one another.

The interviews Leblanc conducted in a Moncton-based federal department gave him some insight into the attitudes of bilingual public servants toward translation. The vast majority of the documents in the department were produced in English and then translated into French, even when the writer’s mother tongue was French. What I found fascinating was that many of the public servants Leblanc interviewed didn’t view translation negatively (as it often is in cases like this, where the target language is one texts are usually translated into rather than out of). Instead, the French translations were viewed as a model to be followed. Some of the interviewees commented, for instance, that they wished they could write in French as well as the translators. Often, these interviewees didn’t write in French because they didn’t feel confident enough in their mother tongue, but the fact that the bilingual public servants also worked with unilingual anglophones played a role as well: French speakers wanted to ensure their drafts could be read by everybody in the department before the document was finalized (and translated).

Moncton isn’t the only place where non-native English speakers are producing texts in English and having these texts translated into their mother tongues (and other languages). During the panel discussions and plenary talks with representatives from organizations like Canada’s Translation Bureau, the European Commission and Amnesty International, one point that came up several times was that language professionals are less frequently translating official documents into English and are instead revising English documents produced by non-native speakers and then sending these documents on for translation into other languages. Partly because non-native speakers are writing in English and their texts are being revised rather than translated into English, public-sector English translation work seems to be on the decline. This is a trend I’ll have to mention to my students, as editing (rather than translating) may be the kind of work they’ll have to look for post-graduation, given the current economy.

All in all, this was a very interesting conference, and it’s given me some new points to consider as I revise my doctoral dissertation into a book. I’ll start posting more on political and historical translations as I focus more attention on my book in the new year.

Highlights from Congress 2012 (Part 2)

The theme for this year’s CATS conference was “Translation, Texts, Media”, which led to an interesting and very diverse program covering topics ranging from dubbing, subtitling, audio description and oral translation to collaborative/crowdsourced translation, digital poetry, and pseudotranslation.

Unfortunately, I had to leave the conference earlier than I’d intended, so I missed some presentations I wanted to hear. Nonetheless, I did enjoy several presentations, three of which I thought I’d briefly discuss here.

The first was University of Ottawa professor Elizabeth Marshman’s presentation on LinguisTech, a website filled with technology-related resources such as tutorials for translation tools (corpora, term extractors, text aligners, search engines, word processors, etc.), blogs, discussion forums, and grammar, translation and style tips. I’ve heard Elizabeth speak before about the tutorials, as she helped develop them for University of Ottawa students. The resources are now available to the general public, and they’re definitely something undergraduate translation students should make use of. Professors will likely find the resources helpful too, as they can pass out the tutorials in class without having to spend time preparing the materials themselves.

Another very interesting presentation was by Philippe Caignon, from Concordia. As a follow-up to his earlier talk on integrating blogs into the classroom (which I discussed in this 2010 post), Philippe spoke about integrating wikis into his terminology course. As he argued, wikis are often used by companies like Hydro-Québec for terminology management, so incorporating wikis into the classroom helps expose students to a technology they might need to use in the workplace. Some of the advantages to wikis are similar to those I’ve discussed already when I’ve blogged about integrating Google Docs into the classroom: students can collaborate with one another and easily revise one another’s work. One advantage to the wiki platforms Philippe was using (TermWiki and PmWiki) is that he was able to receive alerts whenever a student modified a term entry. This meant he didn’t have to scroll through the revision history to track student contributions (something that is still a fairly time-consuming activity in Google Docs). For professors who aren’t teaching terminology courses but who would like to integrate wikis into their courses, Philippe mentioned Wikispaces as a free, customizable platform. Definitely worth checking out!

Finally, I really enjoyed listening to Université de Moncton’s Mathieu Leblanc speak about his ethnographic study of translator attitudes toward translation memory systems. His work, though still in an introductory phase, is really crucial to shedding more light on the workplace practices of professional translators and how these practices have changed over time. Mathieu conducted interviews with salaried translators and on-site field observations at three Atlantic-Canada translation companies. In his presentation, he discussed some of the respondents’ views about segmentation in translation memories, as well as their perceptions of how their translation habits have been affected by the software. Since Mathieu had only begun to analyze the vast amount of data he collected, I’m looking forward to his future publications on the topic, as this is an area with important implications for translator training and workplace practices. It even contributes to creating a history of contemporary workplace practices, which would be invaluable for future Translation Studies researchers.

All in all, the conference was a great experience this year. I’m looking forward to next year’s conference in Victoria, B.C., on science in translation. I’m hoping to have time to return to Wikipedia’s translators and study how scientific articles have been translated and revised within the encyclopedia, given that my 2011 survey indicated many English Wikipedia translators have no formal training in translation.

Highlights from Congress 2012 (Part 1)

I’ve just returned from the 2012 Congress of the Humanities and Social Sciences, which I attended mainly for the 25th annual CATS conference. This year, Congress was held at Wilfrid Laurier University in Waterloo. Now that I’m back, I thought I’d write two short posts about some of the presentations I enjoyed. This post will focus on a session I attended outside CATS, and the next will focus on three presentations I found particularly interesting during the CATS conference.

To follow up on my earlier post about role-playing in the classroom, I was particularly happy to have been able to get to Waterloo a day early so I could attend the Reacting to the Past session offered by University of Alberta law professors James Muir and Peter Carver as part of the Canadian History Association’s annual meeting. The session was designed to recreate (to a certain extent) the Quebec Conference of 1864, and participants were assigned roles from one of the delegations at the conference (Tories, bleus, Reformers, Nova Scotians, New Brunswickers, etc.). Something I really appreciated about the session–apart from being able to see a Reacting to the Past game actually being played–was the fact that Muir and Carver provided participants and observers with detailed documentation that outlined the rules and goals of the game, the objectives of each group, the points and voting mechanisms, and the grading system. I also had a helpful chat with James Muir after the session to ask some questions about gameplay mechanics, such as how much class time should be spent on a game (he recommended between 1.5 and 2 hours per session) and how instructors could assess a student’s participation (he recommended, for instance, marking students on their engagement with the game, their attempt to understand their character, their attempt to consult texts other than assigned readings, and their effort to respect the pedagogical purpose of the game by playing fairly rather than trying to gain points without caring about the content of the proposals they submit). On a less positive note, however, the documentation they provided really opened my eyes to the amount of preparation involved in creating a game: the document students receive is nearly 20 single-spaced pages long, and any game that follows a similar format will require nearly as much detail before it can be integrated into a classroom.

Nonetheless, based on this session, and the documentation Muir and Carver helpfully provided, I’ve been working on a game for my undergraduate Theory of Translation course this September. It will be based on the development of CAN/CGSB-131.10-2008, the Canadian standard for Translation Services, and will allow us to consider questions like what qualifications professional translators should have and what effects standards have on the language industry and its clients. It will also allow students to apply theoretical approaches like skopos, and discourse or register analysis when they make their arguments.

I’ve also realized that a game like the one demonstrated at Congress takes about 4-6 hours to play, spread out in 1.5-2 hour sessions spanning about 4 weeks. That means I’d need to create 1 or 2 other games if I want to focus the entire 13-week Theory of Translation course on learning through role-playing. The other two scenarios I’ve been mulling over are one of the early controversies over biblical translation (e.g. Luther), to help students debate the source- vs. target-oriented approaches to translation and consider the various effects translation can have in a society, and the controversy over a Galician translator’s “sexist” translation of The Curious Incident of the Dog in the Night-Time, which I mentioned in my last post on this topic. This particular controversy would allow the class to explore not just feminist approaches to translation, but also ethical, cultural and linguistic issues.

My main idea behind having three different games is to ensure that each one focuses on themes from specific chapters of Jeremy Munday’s Introducing Translation Studies, allowing us to apply the concepts discussed in the book via the game that unfolds over the course of 2-4 weeks. I’ll lecture for 1-1.5 hours, and then we’ll play the game for the remaining 1.5-2 hours. I think this will be a good way to apply translation theories and to help students develop their argumentation skills. I’ll write a follow-up post in April, once I’ve had a chance to use the games in the classroom and see what the students thought.

Highlights of the Monterey Forum 2011

I’ve just returned from my trip to Monterey, where I attended (and presented at) the Monterey Forum on Innovations in Translator, Interpreter and Localizer Education. (See my last post for more details). Although many of the presentations focused on interpreter training, I did come back with some new ideas and tools to integrate into my translation classes. I thought I’d write about some of the points I found useful, in case those who weren’t able to attend are interested in some of the tools and teaching strategies the presenters discussed. For brevity’s sake, I’ll focus on just four presentations, although many others were also very helpful.

On Friday, Jost Zetzsche, one of the keynote speakers, briefly touched on crowdsourcing in response to a question from an audience member. Jost argued that translation programs need to train not just translators, but team leaders, because crowdsourcing has become such an integral part of the translation process in a few specific industries, such as social media. Although Jost believes crowdsourcing is unlikely to affect the translation industry as a whole, it will affect the way translation is performed for companies like Facebook or Twitter. If these companies don’t turn to professional translators to lead the crowd, Jost argues that translators should instead go to them, and that translator trainers should therefore prepare students to fill these kinds of leadership roles. While I could see Jost’s point, I haven’t quite decided where I stand on this issue. On the one hand, incorporating some kind of leadership training into the translation curriculum will likely be beneficial to students, whether they go on to manage crowdsourced translation projects, small translation companies, or large translation agencies; on the other hand, training translators to work in the shadows of crowdsourced projects (as the invisible correctors of translations generated by bilinguals who usually have little or no formal training in translation) helps lower the professional status of translation, giving the impression that any bilingual can produce accurate, effective translations, when these translations are actually being reviewed and revised by professionals who work behind the scenes. I’m not sure there is an easy answer to this problem, but it’s something translator training programs will likely have to consider for at least the next few years.

On Saturday, I attended a number of interesting talks. Cécile Frérot’s presentation on bilingual corpora and corpus-analysis tools was particularly useful, since she teaches French/English translation at the Université Stendhal, and the corpora she discussed would therefore be well-suited to most translation classes here in Canada. She discussed the Corpus of Contemporary American English (available for free at this website), a fantastic resource of more than 400 million words (spread over 175,000 texts published between 1990 and 2011). As the website notes, the corpus is evenly divided among five genres: spoken, fiction, popular magazines, newspapers, and academic journals. Users are able to restrict searches to any one of these genres, sort by frequency, search for collocates and more. In addition, the website interface allows users to search the British National Corpus, the TIME Corpus, which contains 100 million words published between the 1920s and the 2000s, and the Corpus of Historical American English, which contains 400 million words published between 1810 and 2009. I can’t believe I hadn’t discovered this website before! I will definitely be introducing it to my students next year. She also mentioned AntConc, a free monolingual concordancer for Windows, Mac OS and Linux, in case instructors want to develop their own corpora for use in the classroom (e.g. a corpus of scholarly/scientific articles published in a specific domain, to be used in a specialized translation course).
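For readers curious about what a concordancer like AntConc actually does under the hood, the core operation is a keyword-in-context (KWIC) search: find every occurrence of a word in a corpus and display it with a fixed window of surrounding text. The following is a minimal Python sketch of that idea (not AntConc’s actual code, and the sample corpus text is invented purely for illustration):

```python
import re

def kwic(text, keyword, width=25):
    """Return keyword-in-context lines: each match of `keyword`
    is shown with `width` characters of left and right context."""
    results = []
    # \b ensures whole-word matches only (so "translator" won't match "translators")
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    for m in pattern.finditer(text):
        left = text[max(0, m.start() - width):m.start()].rjust(width)
        right = text[m.end():m.end() + width].ljust(width)
        results.append(f"{left} [{m.group(0)}] {right}")
    return results

# A tiny invented corpus for demonstration
corpus = ("The translator revised the draft carefully. "
          "A good translator consults parallel texts before translating.")

for line in kwic(corpus, "translator"):
    print(line)
```

Because of the word-boundary anchors, the search above matches the two occurrences of “translator” but not “translating”; sorting the resulting lines by their right-hand context is one simple way to surface collocates, which is essentially what concordancers let students do interactively.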

Barry Slaughter Olsen, from the Monterey Institute, talked about filming his students in class while they interpreted, and then uploading the videos to YouTube so students could watch themselves and their classmates and then leave feedback for one another. One of the more helpful points I got from his presentation was the need to give guidance to students when you ask them to leave online feedback for their peers. One student wanted to be told which issues to focus on each week. Another thought students should be asked to comment on one or two other students each week, instead of being expected to provide feedback to everyone. Both these points are applicable to translation students as well. If the class had just spent some time exploring ways to reduce wordiness, for instance, students could be asked to comment on how well some of their peers incorporated these strategies into their homework. This would help reinforce what the students had learned, while giving more students direct feedback, since instructors can’t possibly comment in detail on everyone’s homework in a class of 15+ students. I’ll definitely be incorporating this tip into my translation classes next year, when I start using WordPress as a platform to complement our in-class sessions. I had already been thinking of having students comment on the submissions of their peers, so hearing what Barry’s students thought about the peer-to-peer feedback process was very helpful.

Finally, my Glendon colleagues María Constanza Guzmán and Lyse Hébert discussed translator engagement, ideologies and agency. They really made me realize the importance of encouraging students to recognize their own ideologies, so they can see that their backgrounds, training, and (political, social…) beliefs are inevitably going to affect their translations. Until now, I had left questions of engagement, ideology, and agency for the translation theory course, but I see now that even practical translation classes can benefit from having students briefly consider a) what their own views, prejudices, assumptions, etc. are, b) whether and how these views might differ from those of the ST author and the intended SL readers, and c) whether students should (or even want to) compensate for these differences.

I’ll have lots to consider over the summer, as I revise the syllabi for my existing courses and prepare for my new classes. If you were at the Forum, feel free to add your thoughts on these or other presentations. And if you weren’t able to attend, feel free to comment on anything I’ve said or to discuss your own teaching strategies and tools. I’m particularly interested in hearing about whether you encourage peer-to-peer feedback in your classes and if so, how it works.

Monterey FORUM 2011

I’m writing from my hotel in Monterey, California, where I’ll be attending the Monterey Forum on Innovations in Translator, Interpreter and Localizer Education at the Monterey Institute of International Studies. Here’s a copy of the program, for those who are interested. I’m looking forward to today’s presentations on technology in the classroom; I’ll write a post or two about the parts I found most interesting or useful for teaching translation.

My presentation will look at how Google Docs can be integrated into translation classrooms to enhance collaboration. I’ll be talking about some of the cloud-based productivity suites currently available (e.g. Microsoft Web Apps, Adobe Acrobat’s Buzzword, Tables and Presentations, Zoho, ThinkFree Online, and of course, Google Docs). I’ll spend some time covering privacy, confidentiality, security and reliability issues when using cloud-based services like Google Docs, and then I’ll talk about some of the ways I’ve used Google Docs in my own classes, which I’ve talked about in this blog already (here, here and here, for instance).

I used Google Docs to create the presentation, and I’ve embedded a link to it below, in case anyone is interested in taking a look at it:

I’ll write more about the conference over the next few days.

Translation and the October Crisis

This year marks the 40th anniversary of the October Crisis, an event that was sparked by the FLQ’s October 5, 1970 kidnapping of British trade commissioner James Cross and which worsened when a second FLQ cell kidnapped Quebec Minister of Labour Pierre Laporte on October 10. Although Cross was held for 59 days before being released in return for safe passage to Cuba for his kidnappers, Laporte’s lifeless body was found in the trunk of a car on October 17. During the Crisis, the federal government implemented the War Measures Act, which banned the Front de libération du Québec and made membership in the association illegal. This legislation was replaced by the Public Order (Temporary Measures) Act, which remained in effect until April 30, 1971.

To help commemorate the Crisis, the School of Canadian Studies at Carleton University in Ottawa is holding a conference entitled “Just watch me!” 40th Anniversary of the October Crisis and War Measures Act in Canada next Thursday and Friday (October 14 and 15). I’ll be speaking about a specific case where translation helped make available a work that was technically banned in Canada: French copies of Pierre Vallières’ Nègres blancs d’Amérique had been seized, but the English translation was released by McClelland and Stewart in early 1971, when the Public Order (Temporary Measures) Act was still in effect.

In 1966, Pierre Vallières, an ardent supporter of Quebec independence and a member of the FLQ, was being sought in connection with the deaths of Jean Corbo, a sixteen-year-old who was killed while planting a bomb near Dominion Textile Co., and Thérèse Morin, who was killed when an FLQ bomb exploded in the LaGrenade shoe factory. Vallières and his colleague Charles Gagnon were eventually arrested in New York, where they were protesting in front of the United Nations in an effort to help raise awareness about their belief that Quebec should become a free, socialist nation. While in prison, Vallières penned Nègres blancs d’Amérique, an autobiographic essay in which he argued that French Canadians in Quebec were like the Blacks in the United States: alienated, hated, exploited, and second-class citizens. Nègres blancs was finally published by Parti Pris in 1968, and by 1969, the Attorney General of Quebec had ordered that all copies of the book be seized—including the one deposited in the Bibliothèque nationale du Québec—and that the author, publisher and distributors be accused of sedition and conspiracy to overthrow the provincial and federal governments. Vallières, however, claims this ban served only to increase the number of copies of his book that were sold in secret (1994: 9).

White Niggers of America: The Precocious Autobiography of a Quebec ‘Terrorist’, the first (and only) English translation of Nègres blancs d’Amérique, was published by New York-based Monthly Review Press, an independent socialist publishing house. The translation was republished in Canada—virtually unaltered—by McClelland and Stewart in 1971, the same year it was released by Monthly Review Press. American translator Joan Pinkham was responsible for translating Vallières’ book into English, and she corresponded with Malcolm Reid, a Canadian journalist who agreed to act as a consultant on the translation. In my presentation, I’ll be tracing the English translation’s journey from Monthly Review Press to McClelland & Stewart, exploring its effects on Canadians, and illustrating the different motivations behind its publication in Canada and the United States.

References: Vallières, Pierre. (1994). “Préface (1994): Demain l’indépendance?” In Nègres blancs d’Amérique. Montréal: Typo.