
Translation Error Types: Complete Classification Guide for 2025

Elena Volkova · 1/10/2025 · 13 min read

Tags: translation-errors, error-typology, mqm, quality-assessment, lqa, iso-5060

Understanding translation error types is the foundation of quality assessment in localization. Whether you're a QA manager, translator, or localization engineer, knowing how to identify and classify errors lets you evaluate consistently, give targeted feedback, and drive measurable quality improvement.

This guide covers the full taxonomy of translation errors based on MQM (Multidimensional Quality Metrics) and ISO 5060:2024 — the frameworks used by leading organizations worldwide.

Why Error Classification Matters

Without a shared vocabulary for errors, quality assessment becomes subjective guesswork. One reviewer calls something a "mistake," another calls it an "acceptable variation." Who's right?

| Benefit | Description |
| --- | --- |
| Consistency | Different evaluators apply the same criteria |
| Actionability | Translators know exactly what to fix |
| Measurement | Quality scores become meaningful |
| Training | Error patterns reveal skill gaps |
| Automation | AI tools can detect specific error types |

Classification eliminates ambiguity. That's its job.

The MQM Error Hierarchy

The Multidimensional Quality Metrics (MQM) framework, now formalized in ISO 5060:2024, organizes errors into a hierarchy. The top-level categories:

MQM Error Hierarchy
├── Accuracy
├── Fluency
├── Terminology
├── Style
├── Locale Convention
├── Verity
└── Design

Let's walk through each one.

1. Accuracy Errors

Accuracy errors happen when the translation doesn't faithfully represent the source meaning. These tend to be the most serious because they directly affect whether the message gets across correctly.

1.1 Mistranslation

The translation conveys a different meaning than the source.

Example:

Source (EN): "The product is not available in your region."
Translation (DE): "Das Produkt ist in Ihrer Region verfügbar."
Error: "not available" → "available" (meaning reversed)
Severity: Major

Mistranslations can be complete (entirely wrong meaning), partial (part of the meaning is wrong), or subtle (a shade of meaning lost). That last category is the hardest to catch — and the hardest to agree on.

1.2 Omission

Required content from the source is missing.

Example:

Source (EN): "Click Save to confirm your changes and exit."
Translation (FR): "Cliquez sur Enregistrer pour confirmer vos modifications."
Error: "and exit" was omitted
Severity: Minor (if UI still works) or Major (if it's a critical instruction)

1.3 Addition

The translation includes content that isn't in the source.

Example:

Source (EN): "Enter your password."
Translation (ES): "Introduzca su contraseña segura."
Error: "segura" (secure) was added—not in source
Severity: Minor (unless it changes meaning or creates liability)

1.4 Untranslated

Source language text remains in the translation.

Example:

Source (EN): "Welcome to the Dashboard"
Translation (JA): "Welcome to the ダッシュボード"
Error: "Welcome to the" left untranslated
Severity: Major
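A simple heuristic for catching untranslated strings in CJK targets is to scan for leftover runs of Latin letters. The function name and length threshold below are illustrative, not taken from any specific QA tool:

```python
import re

def find_untranslated_latin(translation: str, min_len: int = 3) -> list[str]:
    """Flag runs of ASCII letters left in a CJK translation.

    Short runs (e.g. "OK", product names) are ignored via min_len;
    a real pipeline would also check a do-not-translate list.
    """
    runs = re.findall(r"[A-Za-z][A-Za-z ']*[A-Za-z]", translation)
    return [r for r in runs if len(r.replace(" ", "")) >= min_len]

find_untranslated_latin("Welcome to the ダッシュボード")  # → ["Welcome to the"]
```

Because "OK" falls under the threshold, the over-translation example in the next section would correctly not be flagged.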

1.5 Over-Translation

Content that should remain in the source language gets translated anyway.

Example:

Source (EN): "Click the OK button."
Translation (DE): "Klicken Sie auf die Einverstanden-Schaltfläche."
Error: "OK" should remain as "OK" (universal UI term)
Severity: Minor

2. Fluency Errors

Fluency errors affect how natural and correct the target language reads, regardless of the source. A translation can be perfectly accurate and still sound terrible.

2.1 Grammar

Grammatical mistakes in the target language.

Example:

Translation (EN): "The datas is being processed."
Error: "datas" should be "data"; subject-verb agreement wrong
Severity: Minor

Common grammar issues: subject-verb agreement, tense consistency, article usage, pronoun reference, preposition errors.

2.2 Spelling

Misspelled words.

Example:

Translation (EN): "Your acount has been updated."
Error: "acount" → "account"
Severity: Minor

2.3 Punctuation

Incorrect punctuation marks or usage.

Example:

Translation (DE): "Klicken Sie hier um fortzufahren."
Error: Missing comma before "um" (German infinitive clause rule)
Severity: Minor

2.4 Typography

Issues with character representation, spacing, or formatting.

Example:

Translation (FR): "Copyright © 2025. Tous droits réservés."
Error: Straight quotes used instead of French guillemets « »
Severity: Minor

This category includes wrong quotation marks for locale, incorrect spacing (like French non-breaking spaces), character encoding issues, and case errors.

2.5 Unintelligible

Text that simply can't be understood due to severe language errors.

Example:

Translation (EN): "System the for access denied been has."
Error: Completely garbled—probably an MT failure
Severity: Critical

When you see this, something has gone very wrong. Usually a machine translation engine misfiring or a corrupted file.

3. Terminology Errors

Terminology errors involve incorrect use of domain-specific or standardized terms.

3.1 Wrong Term

An incorrect term is used for a concept.

Example:

Source (EN): "RAM (Random Access Memory)"
Translation (DE): "Arbeitsspeicher (Zufälliger Zugriffsspeicher)"
Error: The parenthetical should remain "Random Access Memory" or use the standard German abbreviation
Severity: Minor to Major (depends on domain)

3.2 Inconsistent Terminology

The same term gets translated differently within the same document.

Example:

Segment 12: "Dashboard" → "Tableau de bord"
Segment 45: "Dashboard" → "Panneau de contrôle"
Error: Inconsistent translation of key UI term
Severity: Minor

This one is extremely common. And it's one of the easiest to catch with automated QA tools — which makes it frustrating when it slips through.
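The automated check is straightforward to sketch: count how often each known rendering of a source term appears across the translated segments, and flag the term if more than one rendering is in use. The names and the substring-matching approach here are illustrative:

```python
def check_term_consistency(targets: list[str],
                           renderings: list[str]) -> dict[str, int]:
    """Count how often each candidate rendering of one source term
    appears in the translated segments. More than one rendering in
    use signals an inconsistency."""
    counts = {r: sum(r in t for t in targets) for r in renderings}
    return {r: c for r, c in counts.items() if c > 0}

used = check_term_consistency(
    ["Ouvrez le tableau de bord.", "Le panneau de contrôle affiche les widgets."],
    ["tableau de bord", "panneau de contrôle"],
)
# Two renderings in use → "Dashboard" is inconsistently translated
```

Real QA tools align terms per segment and handle inflection; plain substring matching is only a starting point.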

3.3 Unapproved Term

A term not in the project glossary is used.

Example:

Glossary specifies: "Server" → "服务器" (fúwùqì)
Translation uses: "伺服器" (sìfúqì)
Error: Taiwan variant used instead of approved Simplified Chinese term
Severity: Minor (unless client-specified)

4. Style Errors

Style errors occur when the translation doesn't match the required register, tone, or style guidelines.

4.1 Register

The formality level is wrong.

Example:

Style Guide: Use formal "Sie" form
Translation (DE): "Du kannst dein Passwort hier ändern."
Error: Informal "du" used instead of formal "Sie"
Severity: Major (brand voice violation)

4.2 Unidiomatic

The translation is grammatically fine but sounds unnatural.

Example:

Source (EN): "It's raining cats and dogs."
Translation (DE): "Es regnet Katzen und Hunde."
Error: Literal translation of idiom—should use German equivalent
Severity: Minor

4.3 Inconsistent Style

Different tone or style within the same document.

Example:

Paragraph 1: Formal technical writing
Paragraph 2: Casual, conversational tone
Error: Style inconsistency throughout document
Severity: Minor

5. Locale Convention Errors

Locale convention errors involve incorrect adaptation of content for the target region. These are some of the most overlooked errors — and some of the most confusing for end users.

5.1 Date/Time Format

Example:

Source (US): "12/25/2025"
Translation (DE): "12/25/2025"
Error: Should be "25.12.2025" for German locale
Severity: Minor to Major (can cause confusion)
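A minimal sketch of locale-aware date rendering, with explicit format strings per locale. The mapping below is an assumption for illustration; real projects pull patterns from CLDR or a library such as Babel:

```python
from datetime import date

# Assumed per-locale patterns — verify against your project's style guide.
DATE_FORMATS = {
    "en-US": "%m/%d/%Y",   # 12/25/2025
    "de-DE": "%d.%m.%Y",   # 25.12.2025
    "ja-JP": "%Y/%m/%d",   # 2025/12/25
}

def format_date(d: date, locale: str) -> str:
    """Render a date using the pattern registered for the locale."""
    return d.strftime(DATE_FORMATS[locale])

format_date(date(2025, 12, 25), "de-DE")  # → "25.12.2025"
```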

5.2 Number Format

Example:

Source (US): "1,234.56"
Translation (DE): "1,234.56"
Error: Should be "1.234,56" for German locale
Severity: Major (can cause functional errors)

A German user seeing "1,234.56" might read it as approximately 1.23, not twelve hundred. That kind of error has real consequences.
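To make the separator swap concrete, here is a minimal German number formatter. It is a sketch; production code would use a locale library such as Babel rather than string replacement:

```python
def format_number_de(value: float, decimals: int = 2) -> str:
    """Render a number with German separators: '.' for thousands
    grouping, ',' for decimals. Swaps are done via a placeholder so
    the two separators don't collide mid-replacement."""
    us = f"{value:,.{decimals}f}"          # e.g. "1,234.56"
    return us.replace(",", "\x00").replace(".", ",").replace("\x00", ".")

format_number_de(1234.56)  # → "1.234,56"
```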

5.3 Currency

Example:

Source (US): "$99.99"
Translation (DE): "$99.99"
Error: Should consider Euro display "99,99 €" or indicate original currency
Severity: Major (can affect purchasing decisions)

5.4 Measurement Units

Example:

Source (US): "10 miles"
Translation (DE): "10 Meilen"
Error: Should convert to "16 km" for European audience
Severity: Minor to Major (depends on context)

5.5 Address/Phone Format

Example:

Source (US): "(555) 123-4567"
Translation (DE): "(555) 123-4567"
Error: Should use international format +1 555 123-4567 or adapt to local style
Severity: Minor

6. Verity Errors

Verity errors occur when factual information is incorrect, regardless of the source. These are rare but dangerous.

6.1 Factual Error

Example:

Translation (EN): "Mount Everest, the world's tallest peak at 8,849 meters..."
Note: If the source stated the wrong height, the error should be flagged even though the translation matches the source
Severity: Major to Critical (depends on impact)

6.2 Legal/Compliance

Content that violates legal or regulatory requirements for the target market.

Example:

Translation (DE): "This product cures cancer."
Error: Medical claim may violate EU advertising regulations
Severity: Critical

7. Design Errors

Design errors happen when the translation causes visual or functional problems in the final product.

7.1 Truncation

UI Button: [Save Chan...]
Error: "Save Changes" truncated—button too small
Severity: Major

7.2 Overlap

Label text runs into adjacent field or image
Error: Text expansion not accommodated
Severity: Major

German text is typically 30% longer than English. If nobody accounted for that in the UI, you'll see this a lot.
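A simple pre-flight check is to compare character counts against an expansion budget. This sketch uses the rough 30% rule; character count is only a proxy for rendered width, so treat the result as a warning, not a verdict:

```python
def check_expansion(source: str, target: str,
                    max_ratio: float = 1.3) -> bool:
    """True if the target's character count stays within the allowed
    expansion ratio of the source (1.3 ≈ the ~30% German growth rule)."""
    return len(target) <= len(source) * max_ratio

check_expansion("Save Changes", "Änderungen speichern")
# → False (20 chars vs. a budget of 12 × 1.3 = 15.6)
```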

7.3 Encoding

Display shows: "CafÃ©" instead of "Café"
Error: UTF-8 bytes decoded as Latin-1 (encoding issue)
Severity: Major

Severity Levels

Each error gets a severity level that determines its impact on the quality score:

Critical (Severity 1)

Errors that demand immediate attention: safety risks (wrong medication dosage), legal liability (compliance violations), complete meaning reversal, offensive or culturally inappropriate content, and system-breaking bugs.

Penalty: 25 points (ISO 5060 default)

Major (Severity 2)

Errors that noticeably impact quality: meaning changes that affect understanding, missing critical information, prominent fluency errors, brand voice violations, and functional issues.

Penalty: 5 points (ISO 5060 default)

Minor (Severity 3)

Errors with limited impact: small fluency issues, minor style deviations, inconsistencies in non-critical terms, and formatting issues in less visible areas.

Penalty: 1 point (ISO 5060 default)

The 25:5:1 ratio is deliberate. A single critical error is weighted the same as 25 minor errors. That reflects reality — one mistranslated drug dosage matters more than 25 comma issues.

Calculating Quality Scores

Using MQM-based scoring:

Quality Score = 100 - (Penalty Points / Word Count × 100) 

Example calculation:

  • Document: 1,000 words
  • Errors found: 1 Critical, 2 Major, 5 Minor
  • Penalties: (1 × 25) + (2 × 5) + (5 × 1) = 40 points
  • Score: 100 - (40 / 1000 × 100) = 100 - 4 = 96
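The calculation above can be expressed as a small function using the ISO 5060 default penalties (function and variable names are illustrative):

```python
PENALTIES = {"critical": 25, "major": 5, "minor": 1}  # ISO 5060 defaults

def mqm_score(word_count: int, errors: dict[str, int]) -> float:
    """Quality Score = 100 - (penalty points / word count × 100)."""
    points = sum(PENALTIES[sev] * n for sev, n in errors.items())
    return 100 - points / word_count * 100

mqm_score(1000, {"critical": 1, "major": 2, "minor": 5})  # → 96.0
```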

Threshold Guidelines

| Quality Level | Score Range | Typical Use Case |
| --- | --- | --- |
| Excellent | 98-100 | Publication-ready |
| Good | 95-97 | Minor revision needed |
| Acceptable | 90-94 | Requires editing |
| Poor | Below 90 | Significant rework needed |
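Mapping a score to these bands is a short helper; the thresholds follow the table above:

```python
def quality_level(score: float) -> str:
    """Classify an MQM score into the threshold bands."""
    if score >= 98:
        return "Excellent"
    if score >= 95:
        return "Good"
    if score >= 90:
        return "Acceptable"
    return "Poor"

quality_level(96.0)  # → "Good"
```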

Common Error Patterns by Content Type

What you'll find depends heavily on what you're translating.

Marketing Content

Most common errors:

  1. Style/Register (37%)
  2. Terminology inconsistency (24%)
  3. Unidiomatic expressions (19%)
  4. Locale conventions (12%)
  5. Accuracy (8%)

Technical Documentation

Most common errors:

  1. Terminology (42%)
  2. Accuracy - Omission (21%)
  3. Inconsistency (18%)
  4. Fluency - Grammar (11%)
  5. Locale conventions (8%)

Legal Content

Most common errors:

  1. Accuracy - Mistranslation (35%)
  2. Terminology (30%)
  3. Omission (20%)
  4. Verity - Compliance (10%)
  5. Style (5%)

Software UI

Most common errors:

  1. Truncation/Design (28%)
  2. Inconsistent terminology (25%)
  3. Locale conventions (22%)
  4. Untranslated strings (15%)
  5. Accuracy (10%)

Best Practices for Error Classification

1. Use a Standardized Framework

Adopt MQM or ISO 5060 rather than inventing your own taxonomy. This gives you industry-standard reporting, comparability across projects, and tool compatibility.

2. Define Project-Specific Guidelines

While using MQM categories, customize severity definitions for your project:

project_guidelines:
  critical_conditions:
    - Any error in safety warnings
    - Legal claim mistranslation
    - Brand name errors
  major_conditions:
    - Meaning changes in feature descriptions
    - Missing call-to-action text
  minor_conditions:
    - Style preference deviations
    - Non-critical formatting

3. Calibrate Evaluators

Before production evaluation, have multiple evaluators assess the same content. Compare their error identification and severity assignments. Discuss differences and align criteria. Document decisions for future reference.

This step is easy to skip and expensive to skip. Do it.

4. Provide Examples

Build an error database with real examples:

| Error Type | Source | Translation | Correct | Severity |
| --- | --- | --- | --- | --- |
| Mistranslation | "Disable feature" | "Aktivieren Sie die Funktion" | "Deaktivieren Sie die Funktion" | Major |
| Omission | "Click Save and Exit" | "Cliquez sur Enregistrer" | "Cliquez sur Enregistrer et Quitter" | Minor |

5. Balance Precision and Practicality

MQM offers 100+ error subtypes, but most projects use 20-30 relevant categories. Start with the 7 top-level categories, add 10-15 subcategories relevant to your content type, and expand based on the error patterns you actually see.

Going too granular too early creates confusion. Going too broad misses useful distinctions. Find the middle ground for your team.

AI-Assisted Error Detection

Modern AI LQA tools can detect many error types automatically:

| Error Type | AI Detection Accuracy (2025) |
| --- | --- |
| Spelling | 99%+ |
| Grammar | 95%+ |
| Terminology (with glossary) | 90%+ |
| Formatting/Locale | 95%+ |
| Mistranslation | 85-90% |
| Omission | 85-90% |
| Style | 75-85% |
| Unidiomatic | 70-80% |

AI works best when trained on your domain, provided with glossaries and style guides, and used for initial detection with human validation. The bottom of this table — style and unidiomatic expression detection — is where AI still struggles most. These require judgment about what sounds "right," and that's hard to automate.

FAQ

What is the most common translation error type?

It depends on the content. For technical content, terminology errors dominate (40%+). For marketing, style and register issues lead (35%+). For legal, accuracy errors are the main concern (35%+). Fluency errors (grammar, spelling, punctuation) show up everywhere but are usually minor severity.

How many error categories should I use for LQA?

Start with the 7 MQM top-level categories (Accuracy, Fluency, Terminology, Style, Locale Convention, Verity, Design) and expand to 15-25 subcategories based on your content. Too few categories loses useful information; too many creates inconsistency between evaluators. ISO 5060 recommends this balanced approach.

What's the difference between a major and minor error?

Major errors cause a reader to stop, be confused, or misunderstand. Minor errors are noticeable but don't impede understanding — the message still comes through. Critical errors create safety, legal, or severe functional risks that need immediate correction.

Can the same error be different severities in different projects?

Yes. A terminology inconsistency might be minor for internal documentation but major for customer-facing product UI. Project guidelines should specify severity criteria based on content importance, audience, and risk factors. This is exactly why calibration and documented guidelines matter.

How do I handle errors that fit multiple categories?

Choose the primary impact category. If "Sie" (formal German) is translated as "tu" (informal French), that could be Style (register) or Accuracy (wrong meaning for formal context). If the source required formal address, classify it as Style/Register since that's the root cause. Only count each error once to avoid inflating penalties.

Put It Into Practice

Systematic error classification is how you turn "I think the quality is okay" into "the quality score is 96.2, with terminology consistency as the primary issue." One of those statements you can act on. The other you can't.

Start with the core MQM categories, customize severity levels for your project, and calibrate your team regularly. The framework only works if people apply it consistently.

Ready to implement systematic error classification? Try KTTC for AI-powered LQA with the full MQM error taxonomy and ISO 5060 compliance.
