Abstract
In this chapter, we argue that education and training in translation quality assessment (TQA) is being neglected for most, if not all, stakeholders of the translation process, from translators, post-editors, and reviewers to buyers and end-users of translation products and services. Within academia, there is a lack of education and training opportunities to equip translation students, even at postgraduate level, with the knowledge and skills required to understand and use TQA. This has immediate effects on their employability and long-term effects on professional practice. In discussing and building upon previous initiatives to tackle this issue, we provide a range of viewpoints and resources for the provision of such opportunities in collaborative and independent contexts across all modes and academic settings, focusing not just on TQA and machine translation training, but also on the use of assessment strategies in educational contexts that are directly relevant to those used in industry. In closing, we reiterate our argument for the importance of education and training in TQA, on the basis of all the contributions and perspectives presented in the volume.
Keywords
- Translation quality assessment
- Principles to practice
- Translation industry
- Translation students
- Translation teaching
- Translation pedagogy
1 Introduction and Background
The advent of translation technologies has called for a more pragmatic approach to translation quality evaluation in both research and practice across the language services industry. In an increasingly competitive market where quality-focused translators come under intense pressure from clients to sustain quality standards while offering more attractive rates and faster turn-around times, models and tools to support translation quality assessment (TQA) are a necessity. The uptake of computer-assisted translation (CAT) tools, which have been widely shown to boost productivity, reduce turn-around time and enhance phraseological consistency (see, e.g. Granell Zafra 2006; Federico et al. 2012; Christensen and Schjoldager 2016; Moran et al. 2018), has proceeded in parallel with the gradual refinement of TQA models and tools (see O’Hagan 2013; Doherty 2016).
While the importance of adopting CAT tools is now a given, and there is a growing realisation of the potential of machine translation (MT) in the translation community at large, we feel that there is still a strong need to raise awareness and instil appreciation of the role of TQA. This is particularly important for translation educators, trainers, students and researchers, as well as professionals who are keen to keep their skillsets up-to-date to remain competitive on the market, e.g. by attending lifelong training programmes. In addressing the educational needs related to TQA, we consider both formal teaching contexts, typically as part of academic or vocational translator training programmes (such as degree programmes in translation studies or specialisation courses for language graduates), and more flexible and focused training opportunities in professional and industry-oriented settings, including tool-specific training for accreditation purposes, self-paced upskilling, online tutorials and webinars, etc. It is increasingly common, for example, to find ad-hoc training sessions and workshops specifically aimed at professionals within the programmes of industry-oriented conferences, and translators' associations often expect their members to obtain certain qualifications or attend recognised training events on a regular basis (these may even be organised by the association as part of a professional development programme) in order to maintain full membership or retain their certified status.
In particular, most well-established translator training programmes at university level now include components focusing on the use of CAT tools and other translation technologies with their related skills, most notably MT and post-editing (PE), a development which, in our view, is certainly positive and responsive to industry needs. However, a cursory search of online translation programme descriptions and course syllabi available in English, as well as our own direct experience as educators in academia and in the industry, indicate that educators and their students are not yet sufficiently familiarised with TQA models and tools that are now commonplace in the industry. The focus of academic translation training programmes still appears to be firmly on theoretical frameworks that have only tenuous links with quality evaluation in real-world professional practice (see Castilho et al. this volume). While we recognise the value of more theoretically-oriented components in the training of well-rounded translators, we also advocate the importance of making room for the teaching of state-of-the-art quality evaluation metrics and tools that graduates are likely to encounter when they enter the translation marketplace.
In this respect, we believe that the role played by increasingly sophisticated evaluation procedures in professional translation is becoming even more important in today’s technologised industry, which makes it essential for educators and students to be well-acquainted with the key principles and concepts in this area. This would ideally be achieved by embedding TQA knowledge and skills in curricula and syllabi, for example, by including recent literature in lecture content, by providing advanced workshops on TQA topics, by using industry-based marking criteria for translation assignments, by giving students clear guidelines for meeting expectations in industry contexts, and by introducing reproducible measures of quality that will then be familiar to graduates when they enter the language service industry. In the rest of this chapter, we substantiate these proposals with concrete examples and suggestions, with the purpose of encouraging the incorporation of TQA issues in a variety of formal academic educational contexts as well as more flexible industry-oriented training scenarios.
2 Translation Technology Education and Training
The importance of technology in translator education and training is well established and widely acknowledged, with several sources arguing for translation programmes to help students to become informed and critical users of the variety of technological tools they will encounter in their professional career (e.g. Pym 2003; Kenny 2007; EMT Expert Group 2009, 2017; Bowker and Marshman 2010; Doherty et al. 2012; Marshman and Bowker 2012; Doherty and Moorkens 2013; Doherty and Kenny 2014; Kenny and Doherty 2014). A general requirement for technical ability has consistently been a part of contemporary translation competence models for professionals and for university training for several years now (e.g. Beeby et al. 2009; EMT Expert Group 2009, 2017; Scarpa and Orlando 2017).
This is not to say that all university-trained translators are currently using the full range of translation technologies available to them. Indeed, we would never expect this to be the case, as different technologies may be more or less useful depending on a whole host of factors, including the area in which the translator is operating, the text type and the language pair in question, the file formats being used, and the quality levels expected, to name just a few. Rather, the wider field of translation has evolved, and it behoves translation scholars and teachers to remain up to date with, if not ahead of, such changes in their scholarship and teaching. The requirement to keep abreast of technological developments is arguably even more crucial for professional translators, who have to position themselves within rapidly changing markets as practitioners who may be asked to offer a diverse range of translation-related services including, for example, organising the language resources and terminological assets to be used via CAT tools in large multilingual translation and localisation projects, PE, diagnostic evaluation of MT systems, subtitling, etc., all of which depend, of course, on their clients, language pairs, fields of specialisation, etc. (Moorkens 2017).
Such an argument underlines why it is important to teach translation technology to the translators of today and tomorrow. An increasing number of publications are presenting detailed descriptions of how particular tools can be incorporated into a more narrowly-construed translation technology syllabus, or a more broadly-construed translation studies curriculum. For example, most of the papers in the Journal of Translation Studies 2010 special issue on teaching CAT (Chan 2010) fall into the former category, while work carried out under the banner of the Collection of Electronic Resources in Translation Technologies (CERTT; Bowker and Marshman 2010) at the University of Ottawa takes a broad, holistic view and attempts to create the conditions in which a range of technologies can be easily integrated into courses across the translation studies curriculum (Bowker and Marshman 2010; Marshman and Bowker 2012).
With a few notable exceptions such as Wältermann (1994), Kenny and Way (2001), Doherty et al. (2012), Kenny and Doherty (2014), and Sycz-Opoń and Gałuskina (2017), systematic studies on best practice in teaching translation students about MT are difficult to find. Bowker and Marshman (2010, 204) mention, for example, that tutorials and exercises for the teaching of MT, exemplified by the rule-based system Reverso Pro, have been created as part of the CERTT project, but they do not give any further details. The other papers in Chan (2010) that mention MT say little, if anything, about teaching it. Flanagan and Christensen (2014) investigate how MA-level trainee translators interpret industry-focused PE guidelines designed to achieve publishable quality from raw MT output, and find that the trainees have difficulties interpreting them, primarily due to competency gaps; this leads to a set of proposals to address such shortcomings in academic training. Koponen (2015) notes students' variable post-editing speed and difficulty in following quality guidelines, but nonetheless views her teaching of PE as an important step in students' understanding of MT as a tool rather than a threat.
Pym’s (2013) assertion that “there has actually been quite a lot of reflection on the ways MT and post-editing can be introduced into teaching practices” probably reflects more accurately the reality of the early 2000s than subsequent developments in translation pedagogy. Arguably, the heyday of reflection in the area was between 2001 and 2003, when the European Association for Machine Translation (EAMT) devoted some pioneering workshops to the teaching of MT (e.g. Forcada et al. 2001; EAMT/BCS 2002), and a workshop in 2003 devoted to teaching translation technologies at MT Summit IX (e.g. Forcada 2003; Knight 2003; Mitamura et al. 2003; Robichaud and L’Homme 2003; Vertan and von Hahn 2003; Way and Gough 2003).
Of particular relevance here are the short papers by Knight (2003) and Way and Gough (2003). Way and Gough (2003) describe the development and assessment of a course in MT, focusing on rule-based MT (RBMT) and statistical MT (SMT), for undergraduate students in computational linguistics. Knight (2003) describes resources for introducing the concepts of SMT, on which many researchers and lecturers have drawn and which remain valuable to this day. Since these workshops, however, teaching-oriented discussions of more recent approaches to MT, particularly hybrid and neural MT systems, have not yet emerged.
It is also true that for decades there was hardly any exchange between MT researchers and developers on the one hand, and professional translators and translation theorists on the other; this was mostly because translators have historically tended to see MT as a threat (Englard 1958) and because, like translation theorists, they found the difficulties that MT faced in the days of rule-based systems too basic to take the technology seriously (Taillefer 1992). Arguably, this scenario has since changed quite considerably. For a long time, the annual Translating and the Computer conference series (organised by ASLIB in London since 1978; see Note 1) was arguably the only forum where these communities met, with some stimulating debate ensuing among kindred spirits discussing MT from different, but surprisingly complementary, perspectives, as reported by Kingscott (1990) from one such conference in the late 1980s. Following the advent of data-driven and machine-learning approaches that have made MT systems much more powerful and readily available for countless language pairs (including via free online MT services, see Gaspari and Hutchins 2007), today different commentators hold different views on the degree of involvement in the development of SMT, in particular, that is desirable for translators. These views range from the irenic (see Karamanis et al. 2011; Way and Hearne 2011) to the disruptive (e.g. Wiggins 2011). Even where translation scholars and teachers do engage with SMT, they can disagree on how much translators need to know about the technology: is it enough to use Google Translate? Enough to fix the output of MT systems with effective PE? Or enough to be able to build customised MT systems? In addition, depending on the future one predicts for the translation profession, different types of content would seem appropriate in the translation curriculum.
In the somewhat deterministic picture presented by García (2011) and Pym (2013), translators are morphing into post-editors whose primary purpose will be to fix the output of MT systems.
Others, including Kenny and Doherty (2014), argue that translators who are able to remain abreast of technological developments will continue to be in demand and well placed in whatever form the language services industry takes in the future. We argue that confining translators to a narrowly-defined professional role undervalues their language expertise. In addition, the recent trends of growing digital connectivity and communication are creating increasing needs to translate expanding numbers of texts and text types. These trends present opportunities for translators to take an advisory role: specifying the appropriate workflow for texts to be translated (Moorkens 2017), assessing training data to be selected for MT system training and development, evaluating MT output, managing terminology, and refining workflows (O'Brien 2012); all of these roles benefit from a combination of skills involving, crucially, TQA.
3 TQA Models for Education and Training
Against this very dynamic background, there is a definite need for translation graduates to be familiar with a variety of TQA models and to have the skills to carry out TQA in a critical and efficient manner in a range of specific situations and contexts. Huertas Barros and Vine (2017) found that although over 53% of UK universities surveyed had recently updated their translation evaluation criteria, most do not believe that contemporary industry TQA is "relevant to an academic setting". The evaluation needs of industry and academia necessarily differ based on the pragmatic requirements of each scenario. For academics, TQA may involve testing a system or evaluating the effort required for a translator to work with a text type or a specific workflow. For industry, it may serve to test the quality produced by a translator, to assess the usefulness of translation leverage, or to measure productivity in terms of words processed per unit of time. The evaluation type and translation workflow are also likely to differ depending on the perishability of the content, as represented by the broad categories of content types in Fig. 1. Texts are considered perishable if they are for immediate consumption with little or no purpose thereafter, such as online travel reviews that are likely to slip into disuse as new reviews are added, or social media posts and Internet forum messages concerning volatile information that are not locked to the top of a forum or thread. Clearly, if this short-lived content requires translation into other languages, typically for quick multilingual dissemination, its quality is unlikely to be paramount or to warrant extensive evaluation (see also discussions of "fitness for purpose" in the chapters by Way and by Jiménez-Crespo in this volume).
Conversely, non-perishable texts (literary works and marketing copy being prime examples) are typically carefully crafted so as to possess aesthetic value and/or to clearly convey important, often durable, messages. These features must be accurately preserved in translation, thus requiring accordingly robust TQA procedures.
Since the advent of translation technology, and as computing power has grown, translation workflows have become more varied, complex, and technologised. This has necessitated a broader range of translation evaluation methods, suited to the workflow, text type, or target audience. For researchers and language professionals, it is important to be aware of the different types of evaluation available, to fit a particular project or workflow, and to consider whether a more innovative or agile approach could replace an established one. We contend that TQA has become an essential component of vocational translator training for a range of roles, and is an advantage for translation graduates, some of whom begin their professional lives as freelance or in-house translators, but may then move to a translation management role at some point during their careers (see Note 2).
Interestingly, in a large-scale survey that involved 438 language professionals, Gaspari et al. (2015) found that 35% ("largely freelance translators") had "no specific TQA processes in place". This can have a direct effect on freelance translators' earnings, as error typologies, for example, are usually applied to samples of translations, particularly those produced by less experienced translators, or by those who have not built up a long-term trust relationship with their employer or client. If a translation is considered to be below a pre-defined threshold, it may be sent back to the translator for revision, with obvious consequences for the management of the relevant translation project and for the status of the professional concerned. There have been reported instances of translators being overlooked for subsequent jobs when translation sample scores fell below the employer's threshold (Koo and Kinds 2000).

Only a handful of resources exist for training in contemporary TQA, i.e. TQA encompassing CAT and MT. Doherty and Kenny (2014) describe the design of an SMT syllabus for postgraduate student translators with a focus on TQA using both human and automatic evaluation metrics, in order to enable the students to become critical users of the range of evaluation methods and metrics available to them; their investigation also reports the students' views on this innovative syllabus devoted to teaching SMT with strong TQA elements, along with the students' self-assessment of their own learning. Depraetere and Vackier (2011) evaluate the use of an automatic QA tool (QA Distiller) as part of translation evaluation, which they find useful as a complement to human annotation, despite a high prevalence of false positives (incorrectly identified errors). Delizée (2011) also mentions the use of error typologies as part of summative assessment at the end of a Master's programme.
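The sample-based error-typology check just described can be sketched in a few lines of code: count weighted errors in a translation sample, normalise the penalty per 1,000 words, and compare the result against a pre-defined pass/fail threshold. The severity weights, normalisation, and threshold below are illustrative assumptions loosely inspired by MQM-style scoring, not values prescribed by any particular standard or employer.

```python
# Illustrative error-typology check for a translation sample.
# Severity weights and threshold are hypothetical (loosely MQM-inspired).

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, sample_word_count):
    """Weighted error points per 1,000 words (lower is better)."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for severity in errors)
    return penalty / sample_word_count * 1000

def passes_threshold(errors, sample_word_count, threshold=25.0):
    """A sample passes if its weighted error rate stays below the threshold."""
    return quality_score(errors, sample_word_count) < threshold

# A 500-word sample with two minor errors and one major error:
# (1 + 1 + 5) / 500 * 1000 = 14.0 error points per 1,000 words -> passes.
print(quality_score(["minor", "minor", "major"], 500))
print(passes_threshold(["minor", "minor", "major"], 500))
```

In practice such a score would feed the revision loop mentioned above: a failing sample is returned to the translator, and repeated failures may affect future job allocation.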
In Fig. 2, we propose a step-by-step guide to help educators and translators choose one of the various types of TQA compatible with their own translation scenario. The various TQA methods included in the figure are explained in more detail by Castilho et al. in this volume. In addition, Lommel (in this volume) explains the evolution of industry error typologies for human translation that often propose error types for language and formatting that may be considered minor, major, or critical errors, sometimes with a score appended. Typologies for automatic MT evaluation metrics are described in Popović (in this volume), and automatic QA checking tools are covered briefly by Castilho et al. (in this volume).
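To give a concrete flavour of what an automatic evaluation metric computes, the toy function below scores an MT hypothesis by its unigram overlap with a human reference translation. It is a deliberately simplified stand-in for real metrics such as BLEU, TER, or chrF; the function name and scoring choices are our own illustrative assumptions, not any published metric.

```python
# Toy automatic MT evaluation metric: unigram F1 between hypothesis and
# reference. Real metrics are more sophisticated; this only illustrates
# the shared principle of scoring overlap with a reference translation.
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())  # clipped per-word match counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 hypothesis words match the reference, so the score is about 0.83.
print(unigram_f1("the cat sat on the mat", "the cat is on the mat"))
```

Even this toy version shows why automatic metrics need careful interpretation: a hypothesis can score highly on word overlap while being wrong in meaning, which is one reason automatic metrics are routinely paired with human evaluation.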
4 The Next Steps for TQA in Education and Training
Education and training for human and machine TQA today stand out as largely overlooked, but essential, components of translators' skillsets, regardless of the role in which they seek to work. In this chapter we have discussed diverging viewpoints, ranging from arguments for including translation technologies and TQA in translation curricula to concerns that the role of the translator may be reduced and devalued in the face of such technologies.
Resources are of course a central consideration and limitation in both academic and industry settings. Performing TQA rigorously costs time and money, and limited motivation to learn about and conduct TQA may also represent a stumbling block. We need to acknowledge that there are barriers to performing TQA, and especially to performing it properly (Doherty 2017). In our view, education and training stakeholders should see TQA as a return on investment: it provides students with an advantage in the graduate market and with the ability to change attitudes and inform clients in the future, to the long-term benefit of the profession. TQA can indeed be framed not only as a means to an end, as something that one must do because of external requirements, but also as an end in itself, given its usefulness in a wide variety of applications: knowledge of the industry, translation skills, technology training, use in performance review and progression, hiring, pricing, and improving linguistic abilities (see Doherty and Kenny 2014; Kenny and Doherty 2014; Gaspari et al. 2015).
In closing this chapter, we believe that the bottom line is that regardless of the debates on translation quality in academia, the industry will continue to have its own TQA metrics and models that will, in turn, evolve as the industry changes, largely dependent on market trends and technological developments. The challenge for lecturers and trainers is to understand the dynamics at play in the choice and use of these TQA metrics and models, so that their students can learn to appreciate their value while also being aware of their limitations and potential pitfalls. Equipped with this knowledge, human translators, especially university-educated ones, can enter the industry with confidence in their value in the face of developing technologies and increasing, some would say aggressive, automation, able to “recognise what they have learned and be able to articulate and evidence it to potential employers” (Higher Education Academy 2012). As such, dedicated and accessible educational resources on this topic are a useful addition to the field for researchers, scholars, student and professional translators and their educators alike.
Notes
1. ASLING (the International Association for Advancement in Language Technology; https://www.asling.org) took over the organisation and management of the long-running Translating and the Computer conference series in 2014 and has been responsible for it since.
2. This has been recognised, to an extent, in the updated European Master's in Translation Competence Framework 2017, which expects Master's programme graduates to be able to review translations according to standard or job-specific quality objectives, and to implement process standards (such as ISO 17100).
References
Beeby A, Fernández M, Fox O, Hurtado Albir A, Kozlova I, Kuznik A, Neunzig W, Rodríguez-Inés P, Romero L, Wimmer S, Hurtado Albir A (2009) Results of the validation of the PACTE translation competence model: acceptability and decision making. Across Lang Cult 10(2):207–230
Bowker L, Marshman E (2010) Towards a model of active and situated learning in the teaching of computer-aided translation: introducing the CERTT project. J Trans Stud 13(1/2):199–226
Chan S-W (ed) (2010) Journal of translation studies special issue: The teaching of computer-aided translation 13(1&2). The Chinese University of Hong Kong and The Chinese University Press, Hong Kong
Christensen TP, Schjoldager A (2016) Computer-aided translation tools: the uptake and use by Danish translation service providers. JoSTrans 25:89–105
Delizée A (2011) A global rating scale for the summative assessment of pragmatic translation at Master’s level: an attempt to combine academic and professional criteria. In: Depraetere I (ed) Perspectives on translation quality. Walter de Gruyter, Berlin, pp 9–24
Depraetere I, Vackier T (2011) Comparing formal translation evaluation and meaning-oriented translation evaluation: or how QA tools can(not) help. In: Depraetere I (ed) Perspectives on translation quality. Walter de Gruyter, Berlin, pp 25–50
Doherty S (2016) The impact of translation technologies on the process and product of translation. Int J Commun 10:947–969
Doherty S (2017) Issues in human and automatic translation quality assessment. In: Kenny D (ed) Human issues in translation technology. Routledge, London, pp 131–148
Doherty S, Kenny D (2014) The design and evaluation of a statistical machine translation syllabus for translation students. Interpret Trans Train 8(2):295–315
Doherty S, Moorkens J (2013) Investigating the experience of translation technology labs: pedagogical implications. JoSTrans 19:122–136
Doherty S, Kenny D, Way A (2012) Taking statistical machine translation to the student translator. In: Proceedings of the tenth conference of the Association for Machine Translation in the Americas, San Diego. https://doi.org/10.13140/2.1.2883.0727
EAMT/BCS (2002) Proceedings of the BCS/EAMT workshop on Teaching Machine Translation. Organised by the European Association for Machine Translation in association with the British Computer Society Natural Language Translation Specialist Group. UMIST, Manchester, England, 14–15 November 2002. Available via: http://personalpages.manchester.ac.uk/staff/harold.somers/teachingMT/index.html. Accessed 12 May 2017
EMT Expert Group (2009) Competences for professional translators, experts in multilingual and multimedia communication. European Master’s in Translation (EMT). Available via: https://ec.europa.eu/info/sites/info/files/emt_competences_translators_en.pdf. Accessed 5 Jan 2018
EMT Expert Group (2017) European Master’s in Translation Competence Framework 2017. European Master’s in Translation (EMT). Available via: https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf. Accessed 9 Feb 2018
Englard M (1958) The end of translators? Linguist Rev 1958(1):26–27
Federico M, Cattelan A, Trombetti M (2012) Measuring user productivity in machine translation enhanced computer assisted translation. In: Proceedings of the tenth biennial conference of the Association for Machine Translation in the Americas (AMTA), San Diego, October 28–November 1 2012
Flanagan M, Christensen TP (2014) Testing post-editing guidelines: how translation trainees interpret them and how to tailor them for translator training purposes. Interpret Trans Train 8(2):257–275
Forcada M (2003) A 45-hour computers in translation course. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Forcada ML, Pérez-Ortiz JA, Lewis DR (2001) MT Summit VIII workshop on teaching Machine Translation. Santiago de Compostela. Available via: http://www.eamt.org/events/summitVIII/workshop-papers.html. Accessed 12 May 2017
García I (2011) Translating by post-editing: is it the way forward? Mach Transl 25(3):217–238
Gaspari F, Hutchins J (2007) Online and free! Ten years of online machine translation: origins, developments, current use and future prospects. In: Proceedings of Machine Translation Summit XI, Copenhagen, 10–14 September 2007, pp 199–206
Gaspari F, Almaghout H, Doherty S (2015) A survey of machine translation competences: insights for translation technology educators and practitioners. Perspect Stud Translatol 23(3):333–358
Granell Zafra J (2006) The adoption of computer-aided translation tools by freelance translators in the UK. Dissertation, Loughborough University
Higher Education Academy (2012) A marked improvement: transforming assessment in higher education. Higher Education Academy. Available via: https://www.heacademy.ac.uk/knowledge-hub/marked-improvement. Accessed 22 Mar 2018
Huertas Barros E, Vine J (2017) Current trends on MA translation courses in the UK: changing assessment practices on core translation modules. Interpret Trans Train 12(1):5–24
Karamanis N, Luz S, Doherty G (2011) Translation practice in the workplace: contextual analysis and implications for machine translation. Mach Transl 25(1):35–52
Kenny D (2007) Translation memories and parallel corpora: challenges for the translation trainer. In: Kenny D, Ryou K (eds) Across boundaries: international perspectives on translation. Cambridge Scholars Publishing, Newcastle-upon-Tyne, pp 192–208
Kenny D, Doherty S (2014) Statistical machine translation in the translation curriculum: overcoming obstacles and empowering translators. Interpret Trans Train 8(2):276–294
Kenny D, Way A (2001) Teaching machine translation and translation technology: a contrastive study. In: Proceedings of MT Summit VIII workshop on teaching translation, Santiago de Compostela, Spain, 18 September 2001, pp 13–17
Kingscott G (1990) Session 4: summary of the discussion. In: Proceedings of translating and the Computer 10: the translation environment 10 years on. 10–11 November 1988, London, pp 161–164
Knight K (2003) Teaching statistical machine translation. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Koo SL, Kinds H (2000) A quality-assurance model for language projects. In: Sprung RC (ed) Translating into success: cutting-edge strategies for going multilingual in a global age. John Benjamins, Amsterdam, pp 147–157
Koponen M (2015) How to teach machine translation post-editing? Experiences from a post-editing course. In: Proceedings of the 4th workshop on post-editing technology and practice, Miami, USA, 3 November, pp 2–15
Marshman E, Bowker L (2012) Translation technologies as seen through the eyes of educators and students: harmonizing views with the help of a centralized teaching and learning resource. In: Hubscher-Davidson S, Borodo M (eds) Global trends in translator and interpreter training. Bloomsbury, London, pp 69–95
Mitamura T, Nyberg E, Frederking R (2003) Teaching machine translation in a graduate language technologies program. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Moorkens J (2017) Under pressure: translation in times of austerity. Perspect Stud Trans Theory Pract 25(3):464–477
Moran J, Lewis D, Saam C (2018) Can user activity data in CAT tools help us measure and improve translator productivity? In: Corpas Pastor G, Durán-Muñoz I (eds) Trends in E-tools and resources for translators and interpreters. Brill, Leiden, pp 137–152
O’Brien S (2012) Translation as human-computer interaction. Transl Spaces 1:101–122
O’Hagan M (2013) The impact of new technologies on translation studies: a technological turn? In: Millán C, Bartrina F (eds) The Routledge handbook of translation studies. Routledge, Abingdon, pp 503–518
Pym A (2003) Redefining translation competence in an electronic age: in defence of a minimalist approach. Meta 48(4):481–497
Pym A (2013) Translation skill-sets in a machine-translation age. Meta 58(3):487–503
Robichaud B, L’Homme M-C (2003) Teaching the automation of the translation process to future translators. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Scarpa F, Orlando D (2017) What it takes to do it right: an integrative EMT-based model for legal translation competence. JoSTrans 27:21–42
Sycz-Opoń J, Gałuskina K (2017) Machine translation in the hands of trainee translators: an empirical study. Stud Log Gramm Rhetor 49(1):195–212
Taillefer L (1992) The history of the relationship between machine translation and the translator. In: Proceedings of the 33rd annual conference of the American Translators Association. Learned Information, Medford, pp 161–165
Vertan C, von Hahn W (2003) Specification and evaluation of machine translation toy systems: criteria for laboratory assignments. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Wältermann D (1994) Machine translation systems in a translation curriculum. In: Dollerup C, Lindegaard A (eds) Teaching translation and interpreting 2: insights, aims, visions. John Benjamins, Amsterdam, pp 309–317
Way A, Gough N (2003) Teaching and assessing empirical approaches to machine translation. In: Proceedings of Machine Translation Summit IX, New Orleans, USA, 23–27 September 2003, no page numbers
Way A, Hearne M (2011) On the role of translations in state-of-the-art statistical machine translation. Lang Linguist Compass 5(5):227–248
Wiggins D (2011) Blogpost to automated language translation group. Available via: http://www.linkedin.com/groups/Looks-like-licencebased-model-MT-148593.S.74453505?qid=579815d2-fdfd-46bb-ac04-3530d8808772&trk=group_search_item_list-0-b-ttl. Accessed 12 May 2017
Acknowledgments
This work has been partly supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Doherty, S., Moorkens, J., Gaspari, F., Castilho, S. (2018). On Education and Training in Translation Quality Assessment. In: Moorkens, J., Castilho, S., Gaspari, F., Doherty, S. (eds) Translation Quality Assessment. Machine Translation: Technologies and Applications, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-91241-7_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-91240-0
Online ISBN: 978-3-319-91241-7