Significance of Total Survey Error (TSE) framework in creating and monitoring translation workflows for questionnaires
cApStAn Linguistic Quality Control
We assist organisations in setting up and executing the most suitable translation and Linguistic Quality Assurance (LQA) workflows for their projects.
In Part 1 of this article, we discussed where linguistic quality assurance (LQA) and linguistic quality control (LQC) fit into the TSE framework; we recommend reading it before proceeding.
In Part 2, we focus on cApStAn’s modular approach to LQA and LQC.
cApStAn’s modular approach to LQA and LQC
A. Pre-translation
“The success or failure of this ask-the-same-question (ASQ) approach is largely determined by the suitability of the source questions for all the cultures for which versions will be produced.” (Harkness, van de Vijver & Johnson, 2003). “Development procedures for source questions must therefore ensure that the questions selected are understood similarly in the various languages and locations of the study.” (Harkness, Edwards, Hansen, Miller & Villar, 2010).
Our “translatability assessment” (TA) is a tried and tested method to “optimize” the master version of an assessment or survey questionnaire before the actual translation and adaptation process begins. This upstream linguistic quality assurance method is also known as “advance translation” in the Cross-Cultural Survey Guidelines and the Total Survey Error framework. An expression that is straightforward in English can be problematic to translate, ambiguous, overly complex, or easily misunderstood. This may result in biased responses, unexpected item behaviour, inaccurate data, or non-responses in the translated survey questionnaires.
The translatability assessment makes the source material fitter for translation, resolves some translation difficulties before anything is translated, and raises the writers’ awareness of potential hurdles for the adaptation of certain questions into certain languages. It consists of collecting feedback from a pool of linguists, who review the draft version of the master questionnaire to identify potential translation and adaptation issues, analyse them, and generalise their feedback into recommendations that apply to a broad range of cultures and languages.
This is one of our flagship services, and it is applied in OECD PISA.
Pre-translation work can include the organisation of workshops for item writers/question authors to raise awareness of translatability issues and to assist them in writing more “translatable” content (free of idioms, ambiguities, and unnecessary complexities), such as the workshops cApStAn held for item writers in OECD PISA.
As part of the translatability assessment feedback, cApStAn suggests translation and adaptation notes in the TA Report. The survey authors review these notes to confirm that they reflect the questions’ constructs and intents.
Translation and adaptation notes, also referred to as “annotations” in the Cross-Cultural Survey Guidelines, or item-by-item guidelines, help clarify the intended meaning of concepts, phrases, and terms in the source text, and provide information that allows translators and reconcilers/adjudicators to focus on what is meant in survey measurement terms. Their purpose is to guide translators and reconcilers on specific translation and adaptation issues (Survey Research Center, 2016).
Focus groups and cognitive interviews can also be used to gain insights into the local community and the experiences of the target population, which researchers alone may not be able to recognize. Practice has shown that the source text benefits from combining the two exercises: some potential issues are detected by the translatability assessment, others by the cognitive interviews, and some by both.
B. Translation
This is a vast module: cApStAn can help determine whether a complex double-translation or a straightforward single-translation process is required for a given project, and what skills the translators should have, or can propose hybrid approaches, including man–machine translation.
In the case of double translation, a reconciler merges translation 1 and translation 2 into a final version that takes over the best elements of each. cApStAn has set up precise procedures for (team) adjudication, including preparation and follow-up. Senior cApStAn staff can assist in moderating adjudication meetings and in documenting outcomes. For specialized content, cApStAn may enlist the help of bilingual subject matter experts (SMEs), coordinate their work with our linguists, and document it.
Advanced translation models include TRAPD, the Translation, Review, Adjudication, Pre-testing, and Documentation model (Harkness, 2003). The TRAPD procedure is similar to the double translation and reconciliation model, with the addition of adjudication meetings. Typically, its workflow consists of the following steps:
- Translation: two independent draft translations are produced. The translators comment on translation hurdles, doubts, and their choices.
- Reconciliation (referred to as “Review” in the TRAPD model): a senior, experienced cApStAn translator merges the two translations into an optimal version that incorporates the best elements of each initial translation, or produces a third version. The reconciler flags the issues that need to be discussed at the adjudication meeting.
- Adjudication meeting: the remaining issues are discussed and addressed at a web-based adjudication meeting with the two translators, the survey questionnaire authors and/or domain experts (optional); the adjudicator (the reconciler) then finalises the translation based on the adjudicated decisions.
- Documentation: the whole process (draft translations, the exchange of comments between the translators and the reconciler, the adjudication meeting, feedback from the pilot test, the final translation) is documented.
- Proofreading: a proofreader checks linguistic correctness (spelling, phraseology, grammar) but refrains from substantive changes.
- Optionally, an expert review can be added after adjudication; before proofreading, the adjudicator incorporates the experts’ feedback into the translation.
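The documentation step of such a workflow lends itself to one structured record per item, so that every stage stays traceable. Below is a minimal, hypothetical sketch; the field names are illustrative assumptions, not cApStAn’s actual schema:

```python
# Sketch of a per-item record for the "D" (Documentation) step of a
# double-translation workflow. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ItemRecord:
    item_id: str
    draft_1: str                                       # translator 1's version
    draft_2: str                                       # translator 2's version
    reconciled: str = ""                               # reconciler's merged version
    flagged_issues: list = field(default_factory=list) # to discuss at adjudication
    adjudicated: str = ""                              # final wording after the meeting

rec = ItemRecord("ST001", "Version A", "Version B")
rec.flagged_issues.append("ambiguous rendering of 'community'")
rec.reconciled = "Merged version"
rec.adjudicated = "Final adjudicated version"
```

Keeping the drafts, flags, and decisions together in one record makes it straightforward to reconstruct why a given final wording was chosen.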
To fulfil their roles successfully and grasp the aim of their tasks, the translators and the reconcilers/adjudicators attend separate briefing sessions, in the form of webinars led by cApStAn. The teams are briefed on the background and objectives of the survey, the goals of translation in cross-cultural comparative surveys in general, and the source logic and content in particular. Any term base and the translation and adaptation notes are introduced, and the teams are instructed in how to use them. The project-specific translation approach is discussed with examples from the actual survey materials. Finally, the translation environment and the project-specific use of the tools are presented, and questions are answered.
Hybrid man–machine workflows are those in which one or more neural machine translation (NMT) engines are called up to suggest a translation when the translation memory does not yield a match. The translator accepts, edits, or rejects the NMT suggestion. Translation quality assurance is then performed by a human.
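The decision logic of such a hybrid workflow can be sketched as follows; the function and data names here are hypothetical placeholders, not actual cApStAn tooling:

```python
# Minimal sketch of a hybrid man-machine translation loop:
# the translation memory (TM) is consulted first, and an NMT engine
# is called only when the TM yields no match.

def suggest_translation(segment, translation_memory, nmt_suggest):
    """Return a draft translation and its provenance for one source segment."""
    if segment in translation_memory:        # exact TM match takes priority
        return translation_memory[segment], "TM"
    return nmt_suggest(segment), "NMT"       # engine is called only on a TM miss

# Toy data standing in for a real TM and a real NMT engine.
tm = {"How old are you?": "Quel âge avez-vous ?"}
fake_nmt = lambda s: f"<NMT draft of: {s}>"

draft, origin = suggest_translation("How old are you?", tm, fake_nmt)
# origin == "TM": reuse of an approved translation
draft, origin = suggest_translation("Do you feel safe at school?", tm, fake_nmt)
# origin == "NMT": the translator then accepts, edits, or rejects this draft
```

In a real workflow the accept/edit/reject decision and the subsequent human quality assurance would wrap around this lookup.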
cApStAn can, in collaboration with the partner organization, embed MT, translation quality estimation (TQE), and possibly light post-editing (LPE) in the workflow. Once implemented, this workflow can generate the content in English and in a second language; the SMEs in the target language then need to validate and possibly edit those items.
C. Post-translation
Linguistic Quality Control (verification): the verifier’s task is to make sure that the translation matches the source and the translation and adaptation notes, and to strike a balance between fluency and accuracy. The verifiers compare the translation to the source, sentence by sentence, intervene in the translation as needed within their verification scope, and document the outcome in their monitoring tools, using a frame of defined verification categories to describe each intervention and briefly reporting on the rationale behind it. cApStAn’s systematic use of a list of categories that describe translation quality and equivalence issues helps report on translation quality in a standardized way and generate relevant statistics.
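To illustrate how standardized verification categories make aggregate statistics possible, here is a toy sketch; the category labels are invented examples, not cApStAn’s actual taxonomy:

```python
# Toy illustration: because every verifier intervention carries a label
# from a fixed category list, interventions can be aggregated into
# standardized quality statistics across items and languages.
from collections import Counter

interventions = [
    {"item": "Q1", "category": "Mistranslation"},
    {"item": "Q3", "category": "Register/Wording"},
    {"item": "Q3", "category": "Mistranslation"},
    {"item": "Q7", "category": "Adaptation issue"},
]

stats = Counter(rec["category"] for rec in interventions)
print(stats.most_common())
# "Mistranslation" is the most frequent category in this toy sample
```

The same counts, computed per language version, are what allow translation quality to be compared and monitored in a standardized way.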
A systematic review of verifier feedback always takes place. Verification deliverables combine the verifier’s linguistic expertise and cultural sensitivity with the reviewer’s thoroughness. Verifier training is an important part of our LQC process. It can be organised as web-based sessions or as a combination of pre-recorded trainings and live sessions. The training is tailored to each project and to its approach- and content-specific aspects. The verifiers are briefed on the most important aspects of survey translation, the background and goals of the particular project, and the verification approach, procedure, and tasks. Particular problems, such as the balance between accuracy and fluency and the importance of survey-specific frameworks and dimensions in the construct, are discussed, with hands-on exercises on examples from the survey materials.
References
Biemer, P. P. (2010). Total Survey Error: Design, Implementation, and Evaluation. Public Opinion Quarterly, 74(5), 817–848.
Survey Research Center. (2016). Guidelines for Best Practice in Cross-Cultural Surveys. Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan. Retrieved August 11, 2022, from https://ccsg.isr.umich.edu/.