FAQs - AI Quality Evaluation
What is AI Quality Evaluation?
AI Quality Evaluation is an AI-powered tool designed to assess the quality of translations and other content. It combines AI efficiency with human expertise to provide accurate and fast evaluations, helping businesses ensure high-quality content and reduce costs.
What is the purpose of AI Quality Evaluation?
The primary purpose of AI Quality Evaluation is to act as a quality checker for translations. It evaluates uploaded content after it has been translated, helping businesses determine if the content meets quality standards. The tool provides a score based on the quality and suggests whether further human review is necessary.
How does AI Quality Evaluation work?
After uploading a document, the content undergoes translation through phases like Leverage TM (Translation Memory) or Machine Translate, followed by AI Quality Evaluation. The AI system reviews the translation and provides a quality score, categorising segments into different levels (Best, Good, Acceptable, Bad, No Score, Untranslated).
What file formats are supported by AI Quality Evaluation?
AI Quality Evaluation supports a variety of file types. Please refer to this page for the supported file types: https://lingotek.atlassian.net/wiki/x/BoCgwg.
How do I integrate AI Quality Evaluation into my workflow?
To use AI Quality Evaluation, you need a project workflow template that includes it as a system phase. You can create a new template or add the AI evaluation step to an existing workflow. It is recommended to place AI Quality Evaluation after the Leverage TM (Translation Memory) or Machine Translate phases.
How can I integrate AI Quality Evaluation seamlessly into an existing complex project workflow?
You can add the AI Quality Evaluation phase as a custom phase to any existing workflow template by editing the template under Templates > Workflow Templates. Place it after Leverage TM or Machine Translate phases to ensure translations are processed before evaluation. Make sure the workflow is saved and applied when creating new projects.
What considerations should I make when setting the quality threshold for AI Quality Evaluation?
The default threshold is 100%, meaning only perfect scores are approved automatically. You can adjust this threshold per project under Project Workflow > AI Quality Evaluation to balance automation and manual review workload. Lowering the threshold will approve more segments automatically but may reduce overall quality assurance strictness.
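As a rough illustration of how the threshold behaves (a sketch only, not the platform's internal logic; the per-segment scores, the 0 to 100 scale, and the route_segment helper below are assumed for the example):

```python
# Illustrative sketch only -- not the platform's actual implementation.
# Assumes hypothetical per-segment scores on a 0-100 scale; the default
# threshold of 100 means only perfect scores are approved automatically.

def route_segment(score: int, threshold: int = 100) -> str:
    """Return how a segment would be handled at the given threshold."""
    if score >= threshold:
        return "auto-approved (AIQE Approved)"
    return "sent to human review"

# Lowering the threshold approves more segments automatically:
for threshold in (100, 90):
    results = [route_segment(score, threshold) for score in (100, 92, 75)]
    print(threshold, results)
```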
How do I upload a document for quality evaluation?
After creating a project using a template that includes AI Quality Evaluation, you can upload the document to the system. Please ensure the document meets the supported file type requirements.
What are AI tokens, and how are they used?
AI tokens are consumed during the quality evaluation process. The number of tokens required depends on the content’s complexity and the length of the document. The token cost is visible in the document summary, and you will need to purchase tokens to initiate the evaluation.
What is the cost of an AI token?
AI tokens are currently priced at $0.02/token (USD).
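For a rough sense of the arithmetic (the per-token price is the one quoted above; the token count in the example is hypothetical, since the actual count depends on the document and is shown in the document summary):

```python
# Rough cost estimate, assuming the quoted price of USD 0.02 per AI token.
# The token count used here is a made-up example; the real count is
# calculated by the system and shown in the document summary.

PRICE_PER_TOKEN_USD = 0.02

def estimate_cost(token_count: int) -> float:
    """Estimated AI Quality Evaluation cost in USD for a token count."""
    return token_count * PRICE_PER_TOKEN_USD

print(f"${estimate_cost(5000):.2f}")  # e.g. 5,000 tokens -> $100.00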
How do I purchase an AI Quality Evaluation job?
Once the content is uploaded and translated, the system will calculate the token cost for the evaluation. You can purchase the required tokens by clicking the "Purchase AI Quality Evaluation" option, and the evaluation process will start once the purchase is complete.
What happens after purchasing the AI Quality Evaluation job?
Once purchased, the system will process the quality evaluation, and you will receive a quality score represented as a percentage. The score will reflect the overall quality of the translation, and segments will be categorised (Best, Good, Acceptable, Bad, No Score, Untranslated) to give detailed feedback.
Can I customise the workflow after the AI Quality Evaluation phase?
Yes, you can configure additional human review phases after the AI Quality Evaluation if necessary, based on the evaluation results. This flexibility allows you to adapt the review process based on the AI-generated quality score.
How do I interpret the segment categorisation provided by AI Quality Evaluation?
After evaluation, segments are categorised as Best, Good, Acceptable, Bad, No Score, or Untranslated. These categories help reviewers focus their attention efficiently—Bad or Untranslated segments usually require human intervention, while Best or Good can often be approved with minimal checks.
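A simple way to picture that triage (the six category names are the platform's; the segment data and the handling of Acceptable and No Score below are illustrative assumptions):

```python
# Illustrative triage sketch. The six category names come from AIQE;
# the segments and the handling of "Acceptable" / "No Score" are assumed.

NEEDS_REVIEW = {"Bad", "Untranslated"}   # usually require human intervention
MINIMAL_CHECK = {"Best", "Good"}         # can often be approved with light checks

segments = [
    {"id": 1, "category": "Best"},
    {"id": 2, "category": "Bad"},
    {"id": 3, "category": "Acceptable"},
    {"id": 4, "category": "Untranslated"},
]

for seg in segments:
    if seg["category"] in NEEDS_REVIEW:
        action = "route to human reviewer"
    elif seg["category"] in MINIMAL_CHECK:
        action = "spot-check and approve"
    else:  # Acceptable / No Score: reviewer's discretion (assumption)
        action = "review at reviewer's discretion"
    print(seg["id"], seg["category"], "->", action)
```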
Can segments evaluated by AI Quality Evaluation be edited directly in the AIQE phase?
No, segments are locked and cannot be edited in the AI Quality Evaluation phase. To modify translations, use subsequent review phases. Segments meeting or exceeding the quality threshold are locked in the Workbench with an “AIQE Approved” indicator, simplifying manual review.
If I edit translations after the AI Quality Evaluation phase is complete, how do I maintain up-to-date quality scores?
After making edits, the previously calculated AIQE scores become invalid. To re-evaluate, select the affected targets under Targets > Actions > Purchase AIQE to trigger a fresh AI Quality Evaluation. This ensures scores reflect the latest translation content and helps maintain quality assurance integrity.
What is the recommended workflow order when including AI Quality Evaluation to optimise translation quality and cost?
It’s best practice to run the Leverage TM and/or Machine Translate phases before AIQE so that translated content is finalised. Position AIQE after these phases in your workflow template to accurately assess machine-generated or leveraged translations. This order prevents unnecessary AIQE token costs on untranslated or draft content.
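A minimal sketch of that ordering (the phase names mirror the ones described above; the list format is purely illustrative and is not the template's actual schema):

```python
# Illustrative phase ordering only -- not the platform's template format.
# AIQE runs after TM leverage and machine translation, with optional
# human review phases added afterwards based on the results.

workflow_phases = [
    "Leverage TM",
    "Machine Translate",
    "AI Quality Evaluation",
    "Human Review (optional, based on AIQE results)",
]

for step, phase in enumerate(workflow_phases, start=1):
    print(f"{step}. {phase}")
```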
How many languages does AI Quality Evaluation support?
AI Quality Evaluation supports 156 languages on the Enterprise platform, giving businesses a broad range of language pairs for multilingual content. Please refer to the Supported Languages page.
What happens if the target language is not supported by AI Quality Evaluation or if the document content is an exact match?
In such cases, the system skips AIQE automatically. Instead of a token cost, you will see messages such as “Language not supported by AI Quality Evaluation” or “AI Quality Evaluation not needed for exact match” in the Resources column or purchase dialogs. No AI tokens are consumed, and the evaluation phase is effectively bypassed.
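As a sketch of when the evaluation is bypassed (the two messages are the ones quoted above; the function and its flags are hypothetical, not a platform API):

```python
# Illustrative skip logic -- not the platform's code or API.
# The two messages quoted are the ones shown in the Resources column
# or purchase dialogs; in both cases no AI tokens are consumed.

def aiqe_status(language_supported: bool, exact_match: bool) -> str:
    if not language_supported:
        return "Language not supported by AI Quality Evaluation"
    if exact_match:
        return "AI Quality Evaluation not needed for exact match"
    return "AIQE token cost shown; purchase to start the evaluation"

print(aiqe_status(language_supported=False, exact_match=False))
print(aiqe_status(language_supported=True, exact_match=True))
print(aiqe_status(language_supported=True, exact_match=False))
```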
Can I use AI Quality Evaluation for all types of content?
AI Quality Evaluation is designed specifically for translations and content that requires linguistic accuracy. It works best when integrated into translation workflows, but it can also be used to evaluate other types of content within the platform’s capabilities.
How can I access the AI Quality Evaluation feature?
You can access AI Quality Evaluation through the Enterprise platform’s Workflow Templates or by adding it as a custom phase to an existing project. Ensure you have the necessary subscription or plan that supports this feature.
Is there a way to see the results of the AI Quality Evaluation?
Yes, after the evaluation is complete, the results are displayed with a quality score and a detailed breakdown of how each segment was evaluated. You can view the categorisation of each segment (Best, Good, Acceptable, Bad, No Score, Untranslated) by clicking the score in the Resources column.