How Logrus IT’s Quality Evaluation Portal Enhances Game Localization Quality: A Case Study
[Header image created using Microsoft Copilot and the official XDS Spark background]


Ensuring high-quality game localization is crucial for global success, particularly for AAA and AA games where players invest heavily in their setups and game licenses. These players are often demanding and meticulous. Despite the best efforts of client companies to maintain high standards across all languages—through strict control over voice talent casting, recording quality, and more—traditional measures alone cannot entirely prevent subpar localizations. Various factors can contribute to this issue:

  • Clumsy or out-of-context translation style.
  • Awkward string combinations or concatenations: While each standalone string might be translated correctly, their in-game combination can create ambiguous or even bizarre impressions (see the sketch after this list).
  • Inconsistent terminology: Character or object names vary between scenes or episodes.
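
To make the second issue concrete, here is a minimal, hypothetical sketch: the strings and the naive formatting logic are invented for illustration, and each fragment is fine in isolation yet combines into ungrammatical output.

```python
# Hypothetical strings -- not taken from any actual game.
ITEM_PICKUP = {
    "en": "You picked up {count} {item}",
    "de": "Du hast {count} {item} aufgehoben",
}
ITEM_NAMES = {
    "en": {"sword": "sword"},
    "de": {"sword": "Schwert"},  # correct on its own, but ignores plural/case
}

def naive_message(lang: str, count: int, item: str) -> str:
    # Naive concatenation: each fragment was approved in isolation, yet the
    # combination skips pluralization and grammatical agreement entirely.
    return ITEM_PICKUP[lang].format(count=count, item=ITEM_NAMES[lang][item])

print(naive_message("de", 2, "sword"))
# -> "Du hast 2 Schwert aufgehoben" (should be "2 Schwerter": plural missing)
```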

Any of these issues can easily ruin the gaming experience for one or more markets. However, they are challenging to detect because traditional quality control focuses heavily on technical aspects like casting, recording, formatting, and tag errors, which are easier to check. Translating standalone fragments with limited or no contextual information further complicates the matter.

While it’s impractical to review the entire game screen by screen on a live build, a well-organized spot-check that focuses on the right criteria and reflects user experience and sentiment can greatly enhance localization quality. This approach can prevent potential disasters at a fraction of the cost and within a reasonable timeframe.

Logrus IT’s Quality Evaluation Portal enables a comprehensive evaluation of game localization quality by emphasizing the overall user experience. Structured feedback allows both publishers and localizers to identify systemic issues, implement corrective measures, improve quality before the game release, and achieve higher user satisfaction across global markets.

Logrus IT’s Quality Evaluation Portal: Enhancing Game Localization Quality - Case Details

Task: The client aimed to evaluate the quality of their game localized into multiple target languages, focusing on player sentiment. They also sought to identify and summarize systemic issues and enhance future localizations through discussions with localizers. The languages managed by our team included Chinese (Simplified and Traditional), German, French, Italian, Japanese, Korean, Spanish (ES), and Portuguese (BR).

Challenge: Due to budget constraints, Logrus IT could only evaluate string translations, with the volume limited to 2,000 words per language.

Solution: This task was ideally suited to the Logrus IT Quality Evaluation Portal. The portal already provided the necessary functionality for running evaluations, including creating customized metrics and combining holistic evaluations with logging specific issues at a more granular level.

Most importantly, the Logrus IT Quality Evaluation Portal seamlessly integrates complete arbitration functionality. Content creators or localizers can provide feedback on the reviewer’s suggestions or logged issues. Reviewers can then address these comments by either resolving the issues (modifying their evaluation, providing explanations, etc.) or escalating them to the project manager if a particular disagreement cannot be resolved at the localizer/reviewer level.
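
As an illustration only, that feedback loop could be modeled roughly as follows. The status names, fields, and transitions are assumptions for this sketch, not the portal’s actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ThreadStatus(Enum):
    OPEN = auto()       # reviewer logged an issue
    DISPUTED = auto()   # localizer commented on the evaluation
    RESOLVED = auto()   # reviewer adjusted the grade or explained the decision
    ESCALATED = auto()  # disagreement handed to the project manager

@dataclass
class ArbitrationThread:
    string_id: str
    comments: list[str] = field(default_factory=list)
    status: ThreadStatus = ThreadStatus.OPEN

    def localizer_feedback(self, comment: str) -> None:
        self.comments.append(comment)
        self.status = ThreadStatus.DISPUTED

    def reviewer_resolve(self, explanation: str) -> None:
        self.comments.append(explanation)
        self.status = ThreadStatus.RESOLVED

    def escalate_to_pm(self) -> None:
        # Used only when localizer and reviewer cannot reach agreement.
        self.status = ThreadStatus.ESCALATED
```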

Approach: The client provided a randomized, representative selection of localized product strings for each language (a minimal sampling sketch follows the list below). Our objectives were to:

  • Create and apply a suitable, yet simple quality metric focused on the overall perception by the target audience (TA).
  • Generate clear, structured improvement recommendations.
  • Discuss these recommendations with localizers to reach a consensus on quality improvements for both the current and future projects.
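
The sampling sketch mentioned above: one assumed way to draw a randomized selection under the 2,000-word-per-language cap. The string shape, seed, and budget handling are illustrative, not the client’s actual procedure.

```python
import random

def sample_strings(strings: list[dict], word_budget: int = 2000,
                   seed: int = 42) -> list[dict]:
    """Randomized selection under a per-language word budget.

    `strings` is assumed to be a list of dicts like
    {"id": "S-101", "source": "Pick up the sword"} -- an invented shape.
    """
    rng = random.Random(seed)      # fixed seed keeps the sample reproducible
    pool = strings[:]              # don't mutate the caller's list
    rng.shuffle(pool)
    sample, words = [], 0
    for s in pool:
        n = len(s["source"].split())
        if words + n > word_budget:
            continue               # skip strings that would overshoot the cap
        sample.append(s)
        words += n
    return sample
```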

For this project, we selected a relatively “standard” 3D hybrid quality metric that combined two holistic criteria (Informativeness/Relevance and Consistency) with atomistic evaluation and a simple error typology.

It’s important to emphasize that the holistic quality criteria we chose covered areas often overlooked during regular quality checks. These aspects are crucial for user sentiment and perception. We aimed to answer the following questions:

  • How informative is the translation?
  • How relevant is this translation within the game context?
  • Are terms (names, places, objects, actions) localized consistently across the entire volume?

The atomistic evaluation addressed other critical issues, such as incorrect or unintelligible translations, as well as more traditional topics like language, style, locale conventions, technical issues, tone/voice, and terminology.
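
To show how such a hybrid metric can come together, here is a minimal sketch that blends the two holistic grades with weighted atomistic error counts. The 1-5 scales, category weights, and blending formula are assumptions for illustration, not the exact metric used in this project.

```python
# Simple error typology with per-issue penalty weights (assumed values).
ERROR_WEIGHTS = {
    "incorrect_translation": 5,
    "terminology": 3,
    "style": 2,
    "locale_convention": 2,
    "technical": 1,
}

def hybrid_score(info_relevance: int, consistency: int,
                 errors: dict[str, int], words_reviewed: int) -> float:
    """info_relevance, consistency: holistic grades on an assumed 1-5 scale.
    errors: issue counts keyed by typology category. Returns a 0-100 score."""
    holistic = (info_relevance + consistency) / 2 / 5 * 100
    penalty = sum(ERROR_WEIGHTS[cat] * n for cat, n in errors.items())
    # Normalize atomistic penalties per 1,000 words so scores stay comparable
    # across samples of different sizes.
    atomistic = max(0.0, 100 - penalty / max(words_reviewed, 1) * 1000)
    return round(0.5 * holistic + 0.5 * atomistic, 1)

print(hybrid_score(4, 5, {"terminology": 3, "style": 1}, words_reviewed=2000))
```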

After finalizing the metric, the Logrus IT project manager created customized guidelines for reviewers. These guidelines explained the project goals and priorities, the metric (including holistic quality scales required for objective evaluation), and detailed steps, rules, and recommendations for using the Logrus IT Quality Evaluation Portal.

After defining the scope, metric, and reviewer guidelines, we initiated quality evaluation projects for each language. We maintained close contact with the client and localization provider representatives, who had access to the project on the portal.

Client representatives had full, PM-level access, while localizers could only access their respective languages and were limited to providing comments. (The portal supports multiple roles, each with specific permissions. The PM can also restrict reviewer and/or localizer access to a particular date range to prevent changes after project completion and unauthorized access.)
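
Role-based, date-bounded access of this kind could be represented along the following lines; the field names, roles, and example date window are illustrative assumptions, not the portal’s actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import FrozenSet, Optional, Tuple

@dataclass(frozen=True)
class Role:
    name: str
    languages: Optional[FrozenSet[str]]         # None = all languages (PM-level)
    can_edit_grades: bool
    can_comment: bool
    window: Optional[Tuple[date, date]] = None  # None = no date restriction

def may_access(role: Role, language: str, today: date) -> bool:
    # Deny access outside the role's date window or language scope, mirroring
    # the restrictions described above.
    if role.window and not (role.window[0] <= today <= role.window[1]):
        return False
    return role.languages is None or language in role.languages

pm = Role("project_manager", None, can_edit_grades=True, can_comment=True)
localizer_de = Role("localizer", frozenset({"de"}), can_edit_grades=False,
                    can_comment=True,
                    window=(date(2024, 8, 1), date(2024, 9, 6)))  # illustrative dates
```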

During the arbitration stage, localizers could access reviewer evaluations, comments, and logged errors, and also clarify or explain certain decisions and/or request grade changes.

Results: Upon project completion, the Logrus IT Team provided the client (and the localizers) with the following:

  • Summarized high-level feedback and concrete recommendations: These were instrumental in improving localization quality and player experience, and were reviewed and agreed upon with the localization teams.
  • Processed statistics for each holistic or atomistic quality evaluation category.
  • All data produced by each reviewer, exported in Excel format: This was available for validation or alternative in-depth analysis (see the export sketch after this list).
  • Complete arbitration history (by string/object): This outlined the issues discussed and the final results in each case/thread.
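
The export sketch referenced above: an assumed flat issue log aggregated into per-category statistics and written to Excel with pandas. Column names, sample rows, and the file name are invented for illustration.

```python
import pandas as pd  # Excel export also needs an engine such as openpyxl

# Assumed flat export shape: one row per logged issue (invented sample data).
issues = pd.DataFrame([
    {"language": "de", "string_id": "S-101", "category": "terminology", "severity": 3},
    {"language": "de", "string_id": "S-214", "category": "style", "severity": 2},
    {"language": "fr", "string_id": "S-101", "category": "terminology", "severity": 3},
])

# Per-language, per-category issue counts -- the "processed statistics" deliverable.
stats = (issues.groupby(["language", "category"])
               .size()
               .unstack(fill_value=0))

# Raw reviewer data in Excel format, for validation or in-depth analysis.
with pd.ExcelWriter("evaluation_export.xlsx") as writer:
    issues.to_excel(writer, sheet_name="raw_issues", index=False)
    stats.to_excel(writer, sheet_name="category_stats")
```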

General Feedback:

  1. The combined Quality Evaluation and Arbitration process (involving client representatives, reviewers, and localizers) has proven to be highly effective and beneficial for long-term quality improvement. This conclusion has been acknowledged by all parties involved, including the client and the localization teams.
  2. All parties demonstrated exemplary teamwork and professional behavior, driven by a clear common goal. This was essential for the overall project success, particularly in discussions related to systemic issues and optimal approaches for the future.
  3. Positive results were achieved despite the limited volume reviewed.
  4. The resulting joint recommendations addressed localizer training, extending terminology glossaries, and providing additional guidelines or context for the most error-prone areas.

Wish List: Even more impressive results could be achieved with an in-context review, which would require a slightly larger budget. We have multiple scenarios for this, including script-based evaluation on live localized game builds or screenshot-based evaluation.

You are welcome to try this with your games or localized materials. I will be happy to discuss details with colleagues in the game publishing and localization industry, many of whom I expect to meet at the External Development Summit (XDS) #XDS2024 event that starts in less than two weeks...

Please drop us a note…

#QualityEvaluation #GameLocalization #QualityArbitration #HolisticQuality #HybridQualityEvaluation #MultidimensionalQuality #LQA

I wrote this article myself. I have used Microsoft Copilot to improve style, and then edited the article extensively to make sure it says exactly what I meant :-).

