Thursday, 12 June 2014 09:33

Panel 16: Assessment of student learning on translator (and interpreter) education programmes

Innovative approaches to the assessment of student learning on translator education programmes
Dorothy Kelly, University of Granada, Spain

Assessment is an essential part of any teaching and learning process, and translator training is no exception. Yet, despite substantial advances in translator education methods, Hatim & Mason's position of 17 years ago regarding assessment still holds: "The assessment of translator performance is an activity which, despite being widespread, is under-researched and under-discussed" (Hatim & Mason 1997: 197). In particular, the assessment of student learning on translator education programmes continues to be under-researched, and is often confused with translation quality assessment. According to an extensive survey of translator trainers carried out in Spain in 2010 (Kelly, 2010), assessment is the single area of their activity which provokes most concern and insecurity amongst trainers on university programmes, and the area where they would most appreciate support in the form of innovative proposals and trainer training.

Of the three basic functions of assessment (diagnostic, formative and summative), most emphasis has traditionally been placed on the summative function: that of awarding marks, professional accreditation/certification, or even professional posts. On programmes across the world, and indeed in professional examinations, summative assessment continues to be based almost exclusively on traditional translation exercises, in varying examination conditions, corrected by a single teacher/examiner, often using marking scales which deduct points for each 'error' identified, starting from a maximum score representing a notional optimal performance.

Much less attention has been paid to diagnostic assessment or needs analysis, and to formative assessment. Yet the former is essential for effective planning of any course module, and indeed for full programme design at all levels. And the latter, giving constant and detailed feedback to students on their progress throughout the learning process, is central to any student-centred educational process.

In response to demand from trainers for innovation in this aspect of their activity, the panel will consider all three basic functions of assessment, focusing on the diagnostic, formative and summative assessment of student knowledge and learning on training programmes, as distinct from translation quality assessment (which is excluded from the scope of the panel). It will present research into innovative approaches to assessment on training programmes and in accreditation. Issues to be addressed include:

- the alignment of assessment with competences and intended learning outcomes;

- the alignment of assessment with classroom methods and activities;

- principles of student-centred assessment;

- variety of assessment instruments and activities;

- team, peer and self-assessment;

- internal versus external assessment;

- the use of learning portfolios, alone or in combination with other assessment instruments;

- the assessment of 'generic' competences such as interpersonal or intercultural competence;

- the formulation of assessment criteria (rubrics);

- grading practices in general;

- norm- and criterion-referenced grading;

- accreditation systems and their impact on assessment in training programmes;

- the impact of innovative approaches on student learning and the student experience.

For informal enquiries: [dkellyATugrDOTes]


Dorothy Kelly is professor of Translation at the University of Granada, where she is also Vice Rector for International Relations. She obtained her B.A. in Translating and Interpreting at Heriot-Watt University, and her doctoral degree from the University of Granada. Her main research interests are translator training, directionality in translation and intercultural competence. She is editor of the Interpreter and Translator Trainer. She was a member of the EMT Expert Group, a member of Spain's national Bologna Experts Team until 2013, and is currently the Chair of the Executive Board of the Coimbra Group of Universities.





Discussion time at the end of each session


Introduction by convenor

PAPER 1: Assessment instruments in translator training

Anabel Galán & Amparo Hurtado, Universitat Autònoma de Barcelona

PAPER 2: Assessing products and processes: developing process-oriented criteria to measure translation performance

Gary Massey, Peter Jud and Maureen Ehrensberger-Dow

PAPER 3: A manageable combined assessment approach: competence and decision-making

Catherine Louise Way

Paper 4: Assessment in competence-based, technology-enhanced, collaborative translation classes

Viviana Gaballo

Paper 5: An empirical study on summative assessment instruments and tasks in translation teaching

Stefano Pavani and Amparo Hurtado Albir


Paper 6: Using rubrics to scaffold learning. How the integration of criterion-referenced descriptors enhances student-centred formative assessment

Bryan J. Robinson, M. Dolores Olvera-Lobo and Manuel Escabias-Machuca

Paper 7: Implications of ATA Examination Data for Student Assessment

Geoffrey Koby


Karl McLaughlin

Wrap-up by convenor


PAPER 1: Assessment instruments in translator training

Anabel Galán & Amparo Hurtado, Universitat Autònoma de Barcelona

In this paper we focus on assessment instruments in translation teaching, proposing various instruments that take competence-based training as their starting point. The foremost assessment instrument in translation teaching has traditionally been the translation of a text. The translation of a text, however, only accounts for a specific action carried out by the student. It does not provide information on the process they have followed, their ability to identify and resolve problems (the internal and external strategies they have used), their assimilation of implicit theories, their ability to regulate their own learning process, etc. It is therefore insufficient for the purpose of obtaining information on a student's level of competence, and other instruments are needed. Assessment proposals specifically for translation teaching which focus on more than just the correction of translations remain scarce. Some examples are: Hurtado Albir (1999), who puts forward various assessment instruments for diagnostic, formative and summative assessment with the translation-task-and-project-based approach as her framework; Presas (2012), who suggests criteria and instruments for appraising annotated translation assessment tasks; and Kelly (2005), Hurtado Albir (2007, 2008, 2014) and Galán-Mañas (2009, 2010), who advocate the use of portfolios as an alternative method and put forward a possible organisational structure for them. In a competence-based approach, assessment instruments should serve to collect information about the acquired competences, to assess both the end product and the process, to promote student self-assessment, and to obtain the maximum amount of information on a student's competence. Furthermore, a criterion-based form of assessment should be developed, using indicators, assessment criteria and performance levels in every case.
In this paper we will analyse the current situation of assessment in Translation Studies and we will propose examples of various instruments that can be used for diagnostic, formative and summative assessment in translator training:

- Texts to translate with prototypical problems according to the level of competence.

- Reports of different kinds. For instance, in the translation report the student can identify problematic fragments encountered when translating a text, explain the process followed, specify the sources consulted, etc.

- Questionnaires for diagnostic purposes (Orozco and Hurtado Albir 2002), self-assessment questionnaires, and questionnaires to collect information on translation problems or translation knowledge, etc.

- Reflective diaries containing students' reflections on their learning process.

- Translation process recordings to analyse the process: pauses, corrections, type of searches, etc.

- Student portfolios with questionnaires, gist translations, comparative translation analyses, reports, translations, etc., carried out and selected by the student to illustrate their progress.

Finally, different types of rubrics for various assessment tasks (translations, reports, students' portfolio, self-assessment, etc.) will be presented.

Bionote: Anabel Galán-Mañas has a PhD in Translation, Interpreting and Intercultural Studies. She teaches translation at Universitat Autònoma de Barcelona (UAB). Her research interests are translator training - especially in blended learning environments - and the use of information and communication technologies. She is a member of the PACTE group. Amparo Hurtado Albir is Full Professor in Translation Studies at Universitat Autònoma de Barcelona. She is team leader of a number of research projects on translation pedagogy and the acquisition of translation competence, head of the PACTE research group, and author of numerous publications on the theory and pedagogy of translation.

PAPER 2: Assessing products and processes: developing process-oriented criteria to measure translation performance

Gary Massey, Peter Jud and Maureen Ehrensberger-Dow

Since Krings' (1986) groundbreaking exploration of translators' cognitive processes, translation researchers have been developing tools and techniques to investigate the processes behind translation products, and the effects of those processes on target-text quality. Process research methods have also found their way into translator education, serving to complement traditional product-oriented teaching by encouraging metacognition and self-regulation. Alongside more established techniques to access and evaluate translation processes, such as written commentaries and dialogue protocols, those currently proposed and successfully deployed in recent didactic and diagnostic experiments include screen recording combined with various forms of retrospection, self-evaluation, peer evaluation and trainer-student dialogue (e.g. Angelone 2013a, 2013b; Enríquez Raído 2013; Massey & Ehrensberger-Dow 2013). Over the past few years, all the compulsory entrance tests for our institute's MA in Professional Translation have been recorded on-screen, introducing a process-oriented component to the diagnostic assessment of, and the formative feedback on, the performance and potential shown by candidates. Building on studies of process-oriented teaching and testing methods already implemented at our institute (cf. Massey & Ehrensberger-Dow 2013), as well as on work indicating how certain process measures may correlate with translation quality and even predict subsequent performance (e.g. Massey & Ehrensberger-Dow 2014), we have been attempting to identify indicators and predictors of performance in the processes of candidates taking and retaking our MA entrance tests. After reporting on the design and results of these exploratory studies, this paper discusses the possible applications and implications of our findings in the diagnostic, formative and summative assessment of translation competence. 
The ultimate objective of our research is to extend and refine traditional product-oriented measures by generating readily applicable criteria with which to evaluate observable screen-recorded actions and behaviour. It is hoped that these will offer hard-pressed staff and institutions an efficient, feasible means of assessing translation performance based not only on target-text products, but also on the processes that went into their making.

Bionote: Gary Massey is deputy head of the Institute of Translation and Interpreting at Zurich University of Applied Sciences (ZHAW) and director of its MA in Applied Linguistics. His research interests include translation processes and translation pedagogy. Peter Jud has an MA in Translation and is a lecturer at the ZHAW Institute of Translation and Interpreting. He researches translation processes and translation pedagogy. Maureen Ehrensberger-Dow is Professor of Translation Studies, teaching on the BA and MA programmes at the same Institute, and principal investigator of three nationally funded research projects, including one on translation workplace processes.


PAPER 3: A manageable combined assessment approach: competence and decision-making

Catherine Louise Way

Training in competences is not new to Translation Studies trainers, who have for some time now used different models of translator competence (Krings, 1986; Ammann, 1990; Hurtado, 1995, 2007; Gile, 1995; Neubert, 1994, 2000; PACTE, 1998; Kelly 1999, 2002, 2005) to develop objectives and learning outcomes for their translation programmes. Competence-based training (CBT) is also used increasingly in translation courses; however, the assessment of such training has received little attention. Whilst training in certain competences with specific activities or exercises is common in the early stages of training, it is in the later stages that all the competences intertwine to intervene in the creation of the final product. This is when assessment becomes a much more complex question. If students are to be assessed not only on the quality of their final product, but also on how their translator competence develops, assessment requires an individualised approach. We have tried and tested the use of project management with authentic translation briefs in the final stages of undergraduate courses in order to draw trainees' attention to their different competences and to the translation process, without neglecting the final product. This provides a clear working framework that emulates professional practice. Furthermore, this teamwork approach is combined with the use of the Achilles' Heel sheet (Way, 2008), allowing students to see where the strengths and weaknesses of their own translator competence lie, pinpointing areas to be improved, and opening discussion of the strategies for doing so. When faced with the task of assessing trainees' competences and their development during a translation course, many lecturers consider individualised assessment of the process a complex and time-consuming alternative.
In the methodology we propose, assessment is both individual and collective, and is performed as the trainees explain and discuss the process they have pursued to reach the final translation product. Despite fears that this may be labour-intensive, we will discuss how to perform these tasks in an efficient, manageable way. In this paper we present examples of practical ways to introduce both project management and the assessment of trainees' competence into translation courses, based on a structured framework of decision-making (Way 2014) and on tried and tested methodologies that have proven successful in large student groups in our translation programme over recent years. This approach has not only increased student participation and motivation, but has also improved trainees' final results.

Bionote: Catherine Way is a Senior Lecturer in Translation at the University of Granada and member of the AVANTI research group. She has authored/co-edited books and papers and is a member of the Editorial Board of ITT (previously Editor and co-editor) and Puentes. She is a member of the International Advisory Board of journals including Fachsprache, IJLLD, the book series Aprende a traducer and has peer reviewed for Major Translating Minor and Continuum. She has recently co-edited the Proceedings of the last EST Conference for John Benjamins. Her main fields of research are Legal Translation and Translator Training.


PAPER 4: Assessment in competence-based, technology-enhanced, collaborative translation classes

Viviana Gaballo

This paper rests on the assumption that assessment should be consequential to the methodology used in the learning and teaching approach adopted, so as to prevent "pedagogical schizophrenia": a phenomenon which unfortunately still seems widespread, and which can be defined as an inconsistent relation between the chosen pedagogical approach and the relevant assessment methodology, leading many students to fail their exams repeatedly or to fail to achieve the expected outcomes. Since the digital turn of the 21st century has affected many aspects of teaching and learning in general, programme design, course delivery and assessment will have to be rethought to accommodate the digital world. Furthermore, as network technology rapidly expands and internet-based teaching and learning increasingly replaces traditional classrooms, Language Studies (LS) and Translation Studies (TS) programmes also need to apply updated pedagogical approaches that can meet the emerging needs of today's Net Generation learners. Based on previous research on translator education (Kiraly 2000; Pym 2009; Göpferich & Jääskeläinen 2009; Stewart, Orbán & Kornelius 2010) and on the systemic-functional model of translation competence developed by Gaballo (2009), this study aims to provide a coherent picture of how to apply innovative approaches to the assessment of student learning (Goodyear, Banks, Hodgson & McConnell, 2004) on competence-based, technology-enhanced, collaborative translation programmes.

Bionote: Viviana Gaballo is Assistant Professor in English language and translation at the University of Macerata, Italy. Her main interests include translation competence, ESP and networked learning. She has developed a systemic-functional model that can be used to both define and assess translation competence.


PAPER 5: An empirical study on summative assessment instruments and tasks in translation teaching

Stefano Pavani and Amparo Hurtado Albir

The purpose of this paper is to present an ongoing research project on summative assessment in translation teaching between Spanish and Italian. The general hypothesis of this research is that the "traditional" summative assessment system (the translation of a text) often used in translator training centres is not a completely reliable instrument and does not gather enough data about students' translation competence.

This research is approached from two perspectives, one descriptive and one empirical. This dual perspective is reflected in the structure of our paper: in the first part, we will discuss the results of a survey administered to a number of translation professors in Italy and Spain (modelled on Martínez 1992, Waddington 2002 and Kelly 2010) about the types of tests they use to assess their students, their use of correction scales, assessment rubrics (where applicable), etc. In the second part, we will present a summative assessment proposal for students of translation from Spanish into Italian, which will be empirically validated with a group of students on the B.A. in Linguistic Intercultural Mediation at the University of Bologna. Preliminary results of the research will also be presented.

To develop this summative assessment proposal, we designed a teaching unit on the translation of tourism texts, which was administered to a group of Italian students of Spanish-into-Italian general translation. The teaching unit was designed following the translation task-based approach (Hurtado 1996, 1999, 2014) and uses many types of instruments: texts (to analyse, compare, correct and translate), questionnaires, information sheets, contrastive tasks, translation process recordings, etc. The unit presents a multidimensional assessment and includes various formative and summative assessment tasks. In addition, after completing the teaching unit, students will prepare a portfolio and sit a "traditional" summative assessment exam (the translation of a text).

Subsequently, the results and the information collected using the different assessment tasks and the portfolio will be compared with the traditional test (the translated text) by means of ad hoc questionnaires answered by expert translation teachers.

The paper will emphasise diversified assessment instruments and tasks which are multidimensional, criterion-referenced and competence-based. Our proposal aims to gather more information about the degree of acquisition of students' translation competence (including the different subcompetences) and about the translation process and the strategies used by students, since it does not assess translation only as a product.

Our assessment proposal can be used not only to teach translation between Spanish and Italian, but also in other combinations of closely related languages, as it has a theoretical and pedagogical apparatus that allows reproducibility.

Bionote: Stefano Pavani is a PhD student in a cotutelle doctoral programme between Universitat Autònoma de Barcelona (UAB), Spain, and Università di Bologna, Italy. He holds an MA in Translation Studies from the UAB and is a professional ENG, SPA > ITA translator. Amparo Hurtado Albir is Full Professor in Translation Studies at Universitat Autònoma de Barcelona (UAB), Spain. She is the team leader of a number of research projects on translation pedagogy and the acquisition of translation competence and head of the PACTE research group. She is the author of numerous publications on the theory and pedagogy of translation.


PAPER 6: Using rubrics to scaffold learning. How the integration of criterion-referenced descriptors enhances student-centred formative assessment

Bryan J. Robinson, M. Dolores Olvera-Lobo and Manuel Escabias-Machuca

Context: Criterion-referenced descriptors offer a transparent approach to translator training that promotes the development of the higher-order cognitive skills (Bloom 1973) - in particular analysis and evaluation - that are crucial to professional translators, and develops the interpersonal competences essential to efficient teamwork, specified in European tertiary education since the initiation of the Bologna process (Pagani 2002) but often ignored by academics. Descriptors in the form of rubrics provide students with scaffolding (Kiraly 1999, 2000, 2003) that supports and directs their development through essentially social constructivist activities (Robinson et al. 2008), undertaken both in and out of class, when the criteria they embody are in harmony with curricular objectives and the activities themselves allow for structured incremental growth in learning (Vygotsky 1978).

Objectives: Our hypotheses are that (1) rubrics provide learners with tools they can learn to use and apply with substantial certainty that the grades awarded will gradually coincide with the tutor-set "standard" grades; (2) the application of rubrics in team and individual self- and peer-assessment activities will enhance the quality of their learning processes by developing higher-order cognitive skills; and (3) the use of self- and peer-assessment of collaborative teamwork competences can broaden the learning experience at the tertiary level, bringing actual learning closer to the aims of the Bologna process by including transverse competences.

Method: In the present communication, we describe the use of rubrics as formative tools that provide valuable feedback in the context of our approach to their use in the classroom (Olvera-Lobo et al. 2007; Robinson et al. 2006; Robinson et al. in press [a]). We draw on extensive data in order to measure their success in providing feedback during translation quality self- and peer-assessment workshops.
Participants: Our sample consists of three consecutive generations of final-year students (2010-11 n1=73; 2011-12 n2=73; 2012-13 n3=92) using a single rubric for self- and peer-, team and individual assessment (Robinson 1998). Furthermore, we present initial results on the use of a pilot rubric developed for the individual self- and peer-assessment of collaborative processes in team-based activities (Robinson in press [b]), with data drawn from the 2012-13 cohort (n3=92).

Statistical analysis: We use the Shapiro-Wilk test to assess the normality of the grades awarded by individuals, teams and the tutor; ANOVA (for normally distributed grades), and the Kruskal-Wallis and Friedman tests (for non-normal distributions), to compare the average grades assigned by the different sources of variability (individual and team [self- and peer-awarded] and tutor) and detect possible differences between them; and, finally, Cohen's kappa coefficient and the intraclass correlation coefficient to compare interrater agreement in the grades assigned by participants (self), team and tutor. All statistical analyses are performed with the R software.

Conclusions: We believe our results will confirm the reliability of this approach and encourage the wider application of rubrics and the consequent collection of data from other contexts, shedding further light on their value in translator training.
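The analysis pipeline described in this abstract (a normality test gating the choice between a parametric and a non-parametric comparison of grading sources, followed by an agreement coefficient) can be sketched as follows. This is a minimal illustration with simulated data, not the authors' analysis: their work uses R, whereas this sketch uses Python, and the grade values, the 0-10 scale and the grade bands are all hypothetical assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical grades (0-10 scale) for 30 pieces of student work,
# each graded by three sources: the student (self), peers, and the tutor.
self_g = rng.normal(7.5, 1.0, 30)
peer_g = rng.normal(7.0, 1.0, 30)
tutor_g = rng.normal(6.8, 1.0, 30)

# Step 1: Shapiro-Wilk normality test for each source of grades.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (self_g, peer_g, tutor_g))

# Step 2: compare grades across sources, choosing the test by normality:
# one-way ANOVA for normal data, Friedman for non-normal repeated measures.
if normal:
    stat, p = stats.f_oneway(self_g, peer_g, tutor_g)
else:
    stat, p = stats.friedmanchisquare(self_g, peer_g, tutor_g)

# Step 3: inter-rater agreement. Cohen's kappa operates on categories,
# so numeric grades are first banded; kappa is computed by hand here.
def band(grades):
    return ["fail" if g < 5 else "pass" if g < 7 else "merit" for g in grades]

def cohens_kappa(a, b):
    cats = sorted(set(a) | set(b))
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance
    return (p_obs - p_exp) / (1 - p_exp)

kappa = cohens_kappa(band(self_g), band(tutor_g))
```

The point of the gating step is that ANOVA's p-value is only trustworthy when its normality assumption holds; the Friedman test makes no such assumption and respects the repeated-measures design (the same piece of work graded by three sources).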

Bionote: Bryan J. Robinson teaches translation at the University of Granada, is a translator for the bilingual Revista Española de Cardiología (Elsevier), and is an examiner with the International Baccalaureate. M. Dolores Olvera-Lobo teaches Documentation and coordinates the Scientific Information: Access and Evaluation research group (HUM-466). Manuel Escabias-Machuca is a teacher and researcher in the Department of Statistics and Operations Research.


PAPER 7: Implications of ATA Examination Data for Student Assessment

Geoffrey Koby

To assess student learning, one must first understand the kinds of problems that can arise in professional translation. Marking examinations for summative assessment differs from marking for formative assessment, yet important insights can be obtained from summative data. This paper examines the categories and numerical breakdown of error markings from one year of American Translators Association (ATA) certification examinations. The error types recorded fall into two large categories: transfer errors and errors of language mechanics. This paper analyzes a variety of aspects of error marking, including the frequencies of error types in each language pair and across all language pairs, the proportion of transfer errors versus errors of language mechanics, the distribution of error severities, categories never used, etc., as well as error types/frequencies broken down by score band. This information can inform teachers' choice of marking categories and scales. Simply providing feedback to students using these categories and severities is useful in and of itself, but combining this feedback with qualitative/analytical comments adds further dimensions to the feedback process. In addition, students can receive papers marked in this way for self-correction before teachers provide additional feedback.

For the present paper, numerical error data was collected from 527 ATA certification examinations from 2006 in 23 language pairs (11 languages into English and 12 from English into other languages), and recorded on the ATA Framework for Standard Error Marking. For each examination, this data includes the language pair, passage score, passage type (A, B, C), pass/fail result, and individual errors by severity and category. This data is then aggregated to show patterns of error severities and categories for each passage within a language pair, across passages, and across languages.

The ATA examinations are marked using the ATA error marking scale, in use since 2002 and designed for use in standardized testing conditions (see Koby/Champe 2013). Passages are corrected by two graders, who assign a category (e.g., Omission, Usage) and error points to each error using a severity scale (1/2/4/8/16 points, based on the ATA "Flowchart for Error Point Decisions") focusing on each error's effect on the usefulness of the text. ATA's pass threshold is 17 points (18 points fails), with no limit on the number of points that can be assigned.
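As a minimal illustration of the marking arithmetic described above, the following sketch totals severity points and applies the 17-point pass threshold. The error list and category names are hypothetical examples, and this is a reader's sketch of the scoring rule, not ATA's official grading procedure (which involves two graders and the full Flowchart).

```python
# Hypothetical marked errors for one examination passage:
# (category, severity points on the ATA 1/2/4/8/16 scale).
errors = [
    ("Omission", 4),
    ("Usage", 2),
    ("Terminology", 8),
    ("Punctuation", 1),
]

PASS_THRESHOLD = 17  # 17 points or fewer passes; 18 or more fails


def score_passage(errors):
    """Sum the error points and decide pass/fail for one passage."""
    total = sum(points for _, points in errors)
    return total, total <= PASS_THRESHOLD


total, passed = score_passage(errors)
print(total, "pass" if passed else "fail")  # prints: 15 pass
```

Because severities double at each step (1/2/4/8/16), a single severe error can outweigh many minor ones: one 16-point error plus one 2-point error already fails the passage.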

Previously published research on this scale has shown methods to adapt it to classroom teaching (Doyle 2003, Koby/Baer 2005), and analysis of reading-level difficulty correlated to errors made (Howard 2009). This paper will expand this research by providing a large-scale analysis of the categories assigned in an actual testing program.

Bionote: Geoff Koby is associate professor of German/Translation at Kent State University. His research focuses on translation evaluation and assessment, particularly the ATA certification examination. Recent publications include "Welcome to the Real World: Professional-Level Translator Certification" (2013, w/G. Champe), "Certification and Job Task Analysis (JTA): Establishing Validity of Translator Certification Examinations" (2013, w/A. Melby), and "The ATA Flowchart and Framework as a Differentiated Error-Marking Scale in Translation Teaching" (2014). He also recently translated Einstein's Opponents: The Public Controversy Surrounding the Theory of Relativity during the 1920s (Milena Wazeck, 2014). His teaching focuses on translation assessment and financial, legal, and business translation.



Karl McLaughlin

The initial stages of note-taking for consecutive interpreting can often constitute something of a double-edged sword for both students and trainers. The acquisition of this crucial new skill is attractive and exciting for students, who – following several arduous weeks of memory development, speech structure work and presentation enhancement training – invariably pin their hopes on notes resolving the difficulties encountered in accurately reproducing speeches they have just heard. At the same time, however, the acquisition process can prove daunting and highly frustrating if it does not embed properly through an adequate pedagogical approach. In many cases, students who initially struggle to master the multiple demands of note-based consecutive tend to perceive that they simply "cannot do it", without necessarily being able to identify the precise reasons for their unsatisfactory performance. This oral communication discusses various strategies for consolidating the new technique by helping students focus more clearly on the different components involved in note-taking, avoiding the hit-and-miss impression that can often set in. The strategies include second-listening note repetition and revision, the use of a video camera to analyse individual note-taking technique and, in particular, the use of a structured questionnaire for diagnosing more precisely where individual and collective problems arise. By breaking the job down into its various phases (listening, writing, pre-production and the actual reproduction of the speech in the target language) and investigating students' self-perception of their performance in each, the diagnosis helps pinpoint specific, improvable aspects on which to focus in subsequent exercises, while also helping to mitigate what can often be an initial and excessive fixation on symbol learning as the basis of their note-taking approach.
The practical diagnosis strategies offered are based on the author's extensive experience of teaching consecutive interpreting on postgraduate courses in Britain, Spain and other countries.

Bionote: Karl McLaughlin has been a professional conference interpreter and translator since 1988, and has combined professional practice of both disciplines with teaching at university level in Spain, Britain and other countries for over twenty years, including the universities of La Laguna, Bradford and Leeds. His main research interests lie in aptitude testing for interpreting, methods for training in consecutive interpreting and quality issues in interpreting.

WRAP-UP SECTION – by the convenor



