
What is LQA? Complete Guide to Linguistic Quality Assurance

KTTC Team · 1/4/2025 · 10 min read

Tags: lqa, quality-assurance, translation-quality, localization, qa-process

Linguistic Quality Assurance (LQA) is a systematic process for evaluating the quality of translated content. It goes beyond simple proofreading to provide objective, measurable assessments of translation accuracy, fluency, and overall fitness for purpose.

This guide covers everything about LQA: what it is, how it works, and how to implement it in your translation workflow.

What is LQA?

LQA (Linguistic Quality Assurance) is the process of evaluating translated content against defined quality standards. Unlike general proofreading, LQA uses structured methodologies — typically based on error typologies like MQM — to identify, categorize, and score translation errors.

LQA vs. Proofreading vs. Editing

| Activity | Focus | Output |
| --- | --- | --- |
| Proofreading | Fixing surface errors | Corrected text |
| Editing | Improving style and clarity | Better text |
| LQA | Evaluating quality objectively | Quality score + error report |

The key difference: LQA measures quality rather than just fixing problems. That measurement is what makes everything else possible — comparing vendor performance, tracking quality trends over time, giving translators concrete feedback, and verifying SLA compliance.

Why LQA Matters

Without LQA, you're guessing about quality. Here's what's at stake.

Quality Consistency

Left unchecked, quality varies wildly between translators, projects, and languages. One translator might be excellent, another average, and you'd have no objective way to tell. LQA creates a shared standard everyone works against.

Cost Control

Bad translations are expensive — but not in the ways you might expect. Sure, there's rework. But there's also customer complaints, returns, brand erosion, and in regulated industries, potential legal exposure. A single mistranslated medical instruction can cost more than a year's worth of LQA. Catching issues early is just cheaper.

Vendor Management

When you're working with five different agencies across twelve languages, opinions about "good enough" vary. LQA gives you numbers instead of arguments.

Compliance

Healthcare, legal, and finance all require documented quality processes. LQA provides the audit trail regulators want to see.

Continuous Improvement

By tracking error patterns — say, a translator who consistently struggles with passive constructions in German — LQA identifies specific issues you can fix through training or process changes.

The LQA Process

A typical LQA workflow has six steps. Not every organization runs all of them, but this is the full picture.

Step 1: Define Quality Criteria

Before anyone evaluates anything, establish what you're measuring:

  • Error categories to track (accuracy, fluency, terminology, etc.)
  • Severity levels (critical, major, minor)
  • Passing threshold (e.g., MQM score of 95 or above)
  • Sample size (100% review or a representative sample)

Step 2: Select Evaluators

LQA evaluators should be native speakers of the target language, ideally with subject matter expertise. They need training in the LQA methodology, and — this part matters — they should be independent from the original translators. You don't want people grading their own work.

Step 3: Perform Evaluation

Evaluators review translations segment by segment. For each error they find, they record the error type (mistranslation, omission, grammar, etc.), severity (how much impact does it have?), and location (which segment?).

This is the most time-consuming step by far.
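The three things an evaluator records per error map naturally to a small data record. A minimal sketch — the class and field names here are illustrative, not taken from any particular LQA tool:

```python
from dataclasses import dataclass

@dataclass
class LqaError:
    """One LQA finding. Field names are hypothetical, not from a real tool."""
    segment_id: int   # location: which segment the error appears in
    category: str     # e.g. "accuracy", "fluency", "terminology"
    error_type: str   # e.g. "mistranslation", "omission", "spelling"
    severity: str     # "critical", "major", or "minor"
    note: str = ""    # optional evaluator comment

# Example findings from a hypothetical review session
errors = [
    LqaError(12, "accuracy", "mistranslation", "major",
             '"annual" rendered as "monthly"'),
    LqaError(47, "fluency", "spelling", "minor", '"recieve"'),
]
```

Structured records like this are what make the later steps — scoring, reporting, and trend analysis — mechanical rather than manual.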

Step 4: Calculate Quality Score

Using a scoring model like MQM:

Quality Score = 100 - (Penalty Points / Word Count × 100) 

Where penalty points depend on error severity:

  • Critical: 25 points
  • Major: 5 points
  • Minor: 1 point
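The formula and penalty weights above translate directly into code. A minimal sketch, assuming errors are passed in as a list of severity labels (`quality_score` is a hypothetical helper name):

```python
# Penalty weights follow the article's values: critical 25, major 5, minor 1.
PENALTIES = {"critical": 25, "major": 5, "minor": 1}

def quality_score(severities, word_count):
    """MQM-style score out of 100 for a list of error severity labels."""
    total_penalty = sum(PENALTIES[sev] for sev in severities)
    return 100 - (total_penalty / word_count * 100)

# 1 major + 2 minor errors in a 1000-word text: 7 penalty points -> 99.3
print(quality_score(["major", "minor", "minor"], 1000))
```

Note that the same error count hurts a short text far more than a long one — the normalization by word count is what makes scores comparable across documents.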

Step 5: Generate Reports

LQA reports typically include the overall quality score, error breakdown by category and severity, specific error annotations with examples, and comparison with historical performance.

A good report doesn't just say "this scored 94." It tells you why.

Step 6: Feedback Loop

Share LQA findings with translators. Focus on patterns in error types, give specific examples with correct alternatives, and — people forget this — recognize high-quality work too. LQA shouldn't feel like punishment.

LQA Error Categories

Based on the MQM framework, common LQA error categories include:

Accuracy Errors

| Error Type | Description | Example |
| --- | --- | --- |
| Mistranslation | Meaning incorrectly conveyed | "annual" → "monthly" |
| Omission | Content missing from translation | Skipped sentence |
| Addition | Extra content not in source | Unexplained additions |
| Untranslated | Source text left as-is | English term in Spanish text |

Fluency Errors

| Error Type | Description | Example |
| --- | --- | --- |
| Grammar | Grammatical mistakes | "The datas is..." |
| Spelling | Misspelled words | "recieve" |
| Punctuation | Incorrect punctuation | Missing comma |
| Typography | Font, spacing issues | Double spaces |

Terminology Errors

| Error Type | Description | Example |
| --- | --- | --- |
| Wrong term | Incorrect terminology | "mouse" as animal vs. device |
| Inconsistency | Same term translated differently | Varying product names |
| Unapproved term | Term not in client glossary | Using alternative without approval |

Style Errors

| Error Type | Description | Example |
| --- | --- | --- |
| Register | Wrong formality level | "you" vs. formal equivalent |
| Unidiomatic | Awkward phrasing | Literal translation that sounds wrong |
| Inconsistent style | Varying tone within document | Mixing formal and casual |

Locale Errors

| Error Type | Description | Example |
| --- | --- | --- |
| Date format | Wrong date convention | 12/31/2025 vs. 31/12/2025 |
| Number format | Wrong decimal/thousand separator | 1.000 vs. 1,000 |
| Currency | Incorrect currency handling | Wrong symbol or format |

LQA Severity Levels

Critical Errors

These are the ones that keep localization managers up at night. They cause legal liability, safety risks, financial loss, or severe misunderstanding.

Examples: Medical dosage errors, legal term mistakes, safety instruction omissions.

Typical Penalty: 25 points

Major Errors

Errors that noticeably hurt comprehension, user experience, or professional appearance. A reader would stop and think "that's wrong."

Examples: Wrong meaning, confusing sentence structure, inappropriate tone.

Typical Penalty: 5 points

Minor Errors

Noticeable but don't really affect understanding. The kind of thing a careful reader catches but that doesn't change the message.

Examples: Minor punctuation, slight awkwardness, capitalization.

Typical Penalty: 1 point

LQA Metrics and KPIs

Quality Score (MQM-based)

The primary metric:

Score = 100 - (Total Penalty / Word Count × 100) 

Error Rate

Errors per 1000 words:

Error Rate = (Total Errors / Word Count) × 1000 

Pass Rate

Percentage of translations meeting the quality threshold:

Pass Rate = (Passing Translations / Total Translations) × 100 

Error Distribution

Breakdown of errors by category (accuracy, fluency, etc.), severity (critical, major, minor), and translator or vendor. This is where the real diagnostic value lives.
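The three metrics above are simple to compute. A minimal sketch with illustrative input values — the function names are hypothetical, and the distribution is just a tally over (category, severity) pairs:

```python
from collections import Counter

def error_rate(total_errors, word_count):
    """Errors per 1000 words."""
    return total_errors / word_count * 1000

def pass_rate(scores, threshold=95):
    """Percentage of translations at or above the quality threshold."""
    passing = sum(1 for s in scores if s >= threshold)
    return passing / len(scores) * 100

# Error distribution: count findings per (category, severity) pair.
findings = [("accuracy", "major"), ("fluency", "minor"), ("accuracy", "minor")]
distribution = Counter(findings)

print(error_rate(12, 3000))         # 4.0 errors per 1000 words
print(pass_rate([97, 94, 99, 96]))  # 75.0
```

Grouping the distribution by translator or vendor instead of category is the same tally with a different key.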

AI-Powered LQA

AI is changing how LQA gets done. Not replacing it — changing it.

Traditional LQA vs. AI LQA

| Aspect | Traditional LQA | AI LQA |
| --- | --- | --- |
| Speed | Hours per document | Minutes per document |
| Cost | High (human evaluator time) | Lower per evaluation |
| Consistency | Varies by evaluator | Highly consistent |
| Scalability | Limited | Virtually unlimited |
| Subtlety | Excellent | Good and improving |

How AI LQA Works

Modern AI LQA tools use large language models to compare source and target texts, identify potential errors, classify them by type and severity, calculate quality scores, and generate detailed reports.

The speed difference is dramatic. What takes a human evaluator half a day, AI can do in minutes. But speed isn't everything.

AI LQA Limitations

AI LQA is powerful, but it's not perfect:

  • It can miss subtle cultural references
  • It struggles with highly creative content (marketing taglines, literary translation)
  • Critical content still needs human eyes
  • Regulated industries can't rely on AI alone for final quality decisions

Honestly, anyone telling you AI can fully replace human LQA evaluators today is overselling. It's a tool, not a replacement.

Best Practice: Hybrid Approach

The smart play is combining both:

  1. AI first — Quick initial assessment at scale
  2. Human review — Verify AI findings, especially critical errors
  3. Random sampling — Human spot-checks on AI-passed content
  4. Continuous calibration — Use human feedback to improve AI accuracy
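Steps 1–3 amount to a routing decision per piece of content. A sketch under stated assumptions: `ai_evaluate` and `human_review` are placeholders for real integrations, and the spot-check rate is an invented example value:

```python
import random

SPOT_CHECK_RATE = 0.1  # assumed fraction of AI-passed content sampled for humans

def route(segment, ai_evaluate, human_review, threshold=95):
    """Hybrid LQA routing: AI first, humans on flagged or sampled content."""
    result = ai_evaluate(segment)                 # 1. AI first, at scale
    if result["score"] < threshold or result["critical_errors"]:
        return human_review(segment, result)      # 2. human verifies flagged content
    if random.random() < SPOT_CHECK_RATE:
        return human_review(segment, result)      # 3. random spot-check on passes
    return result                                 # passed on AI assessment alone
```

Step 4, calibration, lives outside this loop: disagreements between the human reviews and the AI results feed back into prompt or model tuning.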

Implementing LQA in Your Organization

Step 1: Choose Your Framework

Pick an error typology:

  • MQM — Industry standard, highly customizable
  • LISA QA — Legacy but still in use
  • Custom — Based on your specific needs

Most organizations should start with MQM. Why reinvent the wheel?

Step 2: Define Quality Tiers

Not all content needs the same level of scrutiny:

| Tier | Content Type | LQA Intensity | Pass Threshold |
| --- | --- | --- | --- |
| Premium | Legal, medical, marketing | 100% review | 98+ |
| Standard | Business, documentation | 20% sample | 95+ |
| Basic | Internal, user-generated | AI-only | 90+ |
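Tiers like these are easiest to enforce as plain configuration. A sketch — the key and field names are hypothetical, with values taken from the table above:

```python
# Hypothetical tier config; review_sample is the fraction of content reviewed.
TIERS = {
    "premium":  {"review_sample": 1.0, "pass_threshold": 98, "ai_only": False},
    "standard": {"review_sample": 0.2, "pass_threshold": 95, "ai_only": False},
    "basic":    {"review_sample": 1.0, "pass_threshold": 90, "ai_only": True},
}

def passes(tier, score):
    """Does a quality score meet the threshold for its content tier?"""
    return score >= TIERS[tier]["pass_threshold"]

print(passes("standard", 96))  # True
print(passes("premium", 96))   # False: premium content needs 98+
```

Keeping thresholds in one place means a policy change is a config edit, not a hunt through the workflow.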

Step 3: Select Tools

LQA tooling ranges from spreadsheets (simple but manual), to dedicated LQA platforms (purpose-built for translation QA), to TMS-integrated solutions, to AI-powered platforms like KTTC with automated LQA.

Step 4: Train Your Team

Make sure evaluators understand error category definitions, severity criteria, tool usage, and calibration processes. Don't skip this. Untrained evaluators produce unreliable scores, and unreliable scores are worse than no scores at all.

Step 5: Establish Calibration

Regular calibration keeps evaluators aligned. Have them evaluate the same content independently, compare results, discuss discrepancies, and update guidelines based on what you learn.

FAQ

What does LQA mean in translation?

LQA stands for Linguistic Quality Assurance. It's the process of systematically evaluating translation quality using standardized error categories, severity levels, and scoring systems. LQA provides objective, measurable quality assessments rather than subjective opinions.

What is the difference between LQA and QA?

QA (Quality Assurance) is a broad term covering all quality-related activities. LQA specifically focuses on linguistic aspects of translation quality — accuracy, fluency, terminology, style, and locale conventions. Technical QA might cover formatting, functionality, or user experience issues that aren't linguistic.

How is LQA score calculated?

LQA scores are typically calculated using the MQM (Multidimensional Quality Metrics) model. Errors are identified, categorized by type and severity, and assigned penalty points. The score equals 100 minus the total penalty divided by word count times 100. For example: 100 - (15 penalty points / 1000 words × 100) = 98.5.

What is a good LQA score?

It depends on the content type. Generally: 99-100 is excellent (publishable as-is), 95-98 is good (minor review needed), 90-94 is acceptable (corrections required), and below 90 typically requires significant revision. Critical content like legal or medical documents often requires 98+.

Can AI replace human LQA evaluators?

Not entirely. AI LQA is excellent for initial screening, consistency, and scale, but human evaluators remain essential for subtle judgment, cultural adaptation assessment, and validation of critical content. The best approach right now is a hybrid — AI for speed and coverage, humans for depth and judgment.

What's Next

LQA isn't optional for professional translation. Whether you're using human evaluators, AI tools, or both, the key is having a systematic, measurable process that produces actionable results — not just scores, but insights you can act on.

If you don't have an LQA process today, start small. Pick a framework, define your severity levels, and evaluate a few projects. You'll learn more from those first evaluations than from months of planning.

Ready to implement LQA in your translation workflow? Try KTTC for AI-powered linguistic quality assurance with MQM-based evaluation and detailed error reporting.
