
Translation Error Types: Complete Classification Guide for 2025

Elena Volkova · 1/10/2025 · 13 min read

Tags: translation-errors, error-typology, mqm, quality-assessment, lqa, iso-5060

Understanding translation error types is fundamental to quality assessment in localization. Whether you're a QA manager, translator, or localization engineer, knowing how to identify and classify errors enables consistent evaluation, targeted feedback, and measurable quality improvement.

This guide covers the complete taxonomy of translation errors based on MQM (Multidimensional Quality Metrics) and ISO 5060:2024 standards—the frameworks used by leading organizations worldwide.

Why Error Classification Matters

Before diving into error types, let's understand why systematic classification is essential:

Benefit         Description
Consistency     Different evaluators apply the same criteria
Actionability   Translators know exactly what to fix
Measurement     Quality scores become meaningful
Training        Error patterns identify skill gaps
Automation      AI tools can detect specific error types

Without a standardized approach, one reviewer might call something a "mistake" while another calls it "acceptable variation." Classification eliminates this ambiguity.

The MQM Error Hierarchy

The Multidimensional Quality Metrics (MQM) framework, now formalized in ISO 5060:2024, organizes errors into a hierarchical taxonomy. The top-level categories are:

MQM Error Hierarchy
├── Accuracy
├── Fluency
├── Terminology
├── Style
├── Locale Convention
├── Verity
└── Design

Let's explore each category in detail.
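Before walking through the categories, here is the same hierarchy as a small lookup table, useful for validating error annotations in tooling. This is a minimal sketch: the subcategory names follow this guide's sections rather than an official MQM schema, and the helper function name is ours.

```python
# Minimal sketch of the MQM top-level taxonomy as a lookup table;
# subcategory names follow this guide's sections, not an official schema.
MQM_TAXONOMY = {
    "Accuracy": ["Mistranslation", "Omission", "Addition",
                 "Untranslated", "Over-Translation"],
    "Fluency": ["Grammar", "Spelling", "Punctuation",
                "Typography", "Unintelligible"],
    "Terminology": ["Wrong Term", "Inconsistent Terminology", "Unapproved Term"],
    "Style": ["Register", "Unidiomatic", "Inconsistent Style"],
    "Locale Convention": ["Date/Time Format", "Number Format", "Currency",
                          "Measurement Units", "Address/Phone Format"],
    "Verity": ["Factual Error", "Legal/Compliance"],
    "Design": ["Truncation", "Overlap", "Encoding"],
}

def is_valid_annotation(category, subcategory):
    """Check that an error annotation uses a known category/subcategory pair."""
    return subcategory in MQM_TAXONOMY.get(category, [])
```

A lookup like this lets an LQA tool reject malformed annotations (say, "Omission" filed under "Style") before they pollute quality reports.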

1. Accuracy Errors

Accuracy errors occur when the translation doesn't faithfully represent the source meaning. These are often the most serious errors because they directly affect whether the message is correctly communicated.

1.1 Mistranslation

The translation conveys a different meaning than the source.

Example:

Source (EN): "The product is not available in your region."
Translation (DE): "Das Produkt ist in Ihrer Region verfügbar."
Error: "not available" → "available" (meaning reversed)
Severity: Major

Mistranslations can be:

  • Complete: Entirely wrong meaning
  • Partial: Part of the meaning is wrong
  • Nuance: Subtle meaning difference

1.2 Omission

Required content from the source is missing in the translation.

Example:

Source (EN): "Click Save to confirm your changes and exit."
Translation (FR): "Cliquez sur Enregistrer pour confirmer vos modifications."
Error: "and exit" was omitted
Severity: Minor (if UI still functions) or Major (if critical instruction)

1.3 Addition

The translation includes content not present in the source.

Example:

Source (EN): "Enter your password."
Translation (ES): "Introduzca su contraseña segura."
Error: "segura" (secure) was added; not present in the source
Severity: Minor (unless it changes meaning or creates liability)

1.4 Untranslated

Source language text remains in the translation.

Example:

Source (EN): "Welcome to the Dashboard"
Translation (JA): "Welcome to the ダッシュボード"
Error: "Welcome to the" left untranslated
Severity: Major
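A simple heuristic can catch many untranslated strings automatically when the target language uses a non-Latin script. The sketch below flags runs of two or more Latin-script words left in the target; real QA tools use proper language identification, so treat this regex as illustrative only.

```python
import re

# Heuristic sketch for spotting untranslated source text: flag runs of two
# or more Latin-script words left in a target that uses a non-Latin script.
# Real QA tools use language identification; this regex is only illustrative.
LATIN_RUN = re.compile(r"[A-Za-z]{2,}(?:\s+[A-Za-z]{2,})+")

def find_untranslated(target, allowlist=()):
    """Return Latin-script word runs found in the target segment."""
    return [run for run in LATIN_RUN.findall(target) if run not in allowlist]
```

For the example above, `find_untranslated("Welcome to the ダッシュボード")` returns `["Welcome to the"]`, while a lone allowed term such as "OK" never matches because the pattern requires at least two consecutive words.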

1.5 Over-Translation

Content that should remain in source language is translated.

Example:

Source (EN): "Click the OK button."
Translation (DE): "Klicken Sie auf die Einverstanden-Schaltfläche."
Error: "OK" should remain as "OK" (universal UI term)
Severity: Minor

2. Fluency Errors

Fluency errors affect how natural and correct the target language text reads, independent of the source. A translation can be accurate but still have fluency issues.

2.1 Grammar

Grammatical mistakes in the target language.

Example:

Translation (EN): "The datas is being processed."
Error: "datas" should be "data"; subject-verb agreement wrong
Severity: Minor

Common grammar issues include:

  • Subject-verb agreement
  • Tense consistency
  • Article usage
  • Pronoun reference
  • Preposition errors

2.2 Spelling

Misspelled words in the translation.

Example:

Translation (EN): "Your acount has been updated."
Error: "acount" → "account"
Severity: Minor

2.3 Punctuation

Incorrect punctuation marks or usage.

Example:

Translation (DE): "Klicken Sie hier um fortzufahren."
Error: Missing comma before "um" (German infinitive clause rule)
Severity: Minor

2.4 Typography

Issues with character representation, spacing, or formatting.

Example:

Translation (FR): Cliquez sur "Enregistrer" pour continuer.
Error: Straight quotes used instead of French guillemets « »
Severity: Minor

Includes:

  • Wrong quotation marks for locale
  • Incorrect spacing (e.g., French non-breaking spaces)
  • Character encoding issues
  • Case errors

2.5 Unintelligible

Text that cannot be understood due to severe language errors.

Example:

Translation (EN): "System the for access denied been has."
Error: Completely garbled, possibly an MT failure
Severity: Critical

3. Terminology Errors

Terminology errors involve the incorrect use of domain-specific or standardized terms.

3.1 Wrong Term

An incorrect term is used for a concept.

Example:

Source (EN): "RAM (Random Access Memory)"
Translation (DE): "Arbeitsspeicher (Zufälliger Zugriffsspeicher)"
Error: The parenthetical should remain "Random Access Memory" or use the standard German abbreviation
Severity: Minor to Major (depends on domain)

3.2 Inconsistent Terminology

The same term is translated differently within the same document.

Example:

Segment 12: "Dashboard" → "Tableau de bord"
Segment 45: "Dashboard" → "Panneau de contrôle"
Error: Inconsistent translation of key UI term
Severity: Minor
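Inconsistencies like this are easy to surface mechanically once term/rendering pairs have been extracted from aligned segments. A hedged sketch (the extraction step itself needs real tokenization and matching, which is out of scope here):

```python
from collections import defaultdict

# Sketch of an inconsistent-terminology check: `observations` are
# (source term, target rendering) pairs already extracted from aligned
# segments. Terms rendered more than one way are candidate inconsistencies.
def find_inconsistent_terms(observations):
    renderings = defaultdict(set)
    for term, rendering in observations:
        renderings[term].add(rendering)
    return {t: sorted(r) for t, r in renderings.items() if len(r) > 1}

obs = [
    ("Dashboard", "Tableau de bord"),
    ("Dashboard", "Panneau de contrôle"),
    ("Server", "Serveur"),
]
```

Running `find_inconsistent_terms(obs)` on these observations flags only "Dashboard", since "Server" was rendered the same way throughout.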

3.3 Unapproved Term

A term not approved in the project glossary is used.

Example:

Glossary specifies: "Server" → "服务器" (fúwùqì)
Translation uses: "伺服器" (sìfúqì)
Error: Taiwan (Traditional Chinese) variant used instead of the approved Simplified Chinese term
Severity: Minor (unless client-specified)

4. Style Errors

Style errors occur when the translation doesn't match the required register, tone, or style guidelines.

4.1 Register

The formality level doesn't match requirements.

Example:

Style Guide: Use formal "Sie" form
Translation (DE): "Du kannst dein Passwort hier ändern."
Error: Informal "du" used instead of formal "Sie"
Severity: Major (brand voice violation)

4.2 Unidiomatic

The translation is grammatically correct but sounds unnatural.

Example:

Source (EN): "It's raining cats and dogs."
Translation (DE): "Es regnet Katzen und Hunde."
Error: Literal translation of idiom; should use a German equivalent such as "Es regnet in Strömen"
Severity: Minor

4.3 Inconsistent Style

Different style or tone within the same document.

Example:

Paragraph 1: Formal technical writing
Paragraph 2: Casual, conversational tone
Error: Style inconsistency within the document
Severity: Minor

5. Locale Convention Errors

Locale convention errors (also called "locale" or "internationalization" errors) involve incorrect adaptation of content for the target region.

5.1 Date/Time Format

Dates or times formatted incorrectly for the locale.

Example:

Source (US): "12/25/2025"
Translation (DE): "12/25/2025"
Error: Should be "25.12.2025" for German locale
Severity: Minor to Major (can cause confusion)

5.2 Number Format

Numbers formatted incorrectly for the locale.

Example:

Source (US): "1,234.56"
Translation (DE): "1,234.56"
Error: Should be "1.234,56" for German locale
Severity: Major (can cause functional errors)

5.3 Currency

Currency formatting or conversion issues.

Example:

Source (US): "$99.99"
Translation (DE): "$99.99"
Error: Should consider Euro display "99,99 €" or indicate the original currency
Severity: Major (can affect purchasing decisions)

5.4 Measurement Units

Measurement units not converted or formatted for locale.

Example:

Source (US): "10 miles"
Translation (DE): "10 Meilen"
Error: Should convert to "16 km" for a European audience
Severity: Minor to Major (depends on context)

5.5 Address/Phone Format

Contact information not formatted for locale conventions.

Example:

Source (US): "(555) 123-4567"
Translation (DE): "(555) 123-4567"
Error: Should use international format +1 555 123-4567 or adapt to local style
Severity: Minor
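The date and number conventions above can be made concrete with a tiny formatting sketch. The convention table below is hand-rolled and purely illustrative; production code should use CLDR-backed libraries such as Babel or ICU instead.

```python
from datetime import date

# Hand-rolled convention table for two locales, purely illustrative;
# production code should use CLDR-backed libraries such as Babel or ICU.
CONVENTIONS = {
    "en-US": {"date": "{m:02d}/{d:02d}/{y}", "decimal": ".", "group": ","},
    "de-DE": {"date": "{d:02d}.{m:02d}.{y}", "decimal": ",", "group": "."},
}

def format_date(d, locale):
    conv = CONVENTIONS[locale]
    return conv["date"].format(d=d.day, m=d.month, y=d.year)

def format_number(value, locale):
    conv = CONVENTIONS[locale]
    s = f"{value:,.2f}"  # US-style separators first, e.g. "1,234.56"
    # Swap separators via a placeholder so the two replacements don't collide.
    return (s.replace(",", "\x00")
             .replace(".", conv["decimal"])
             .replace("\x00", conv["group"]))

print(format_date(date(2025, 12, 25), "de-DE"))  # 25.12.2025
print(format_number(1234.56, "de-DE"))           # 1.234,56
```

Automated locale checks often work the other way around: they flag a target string that still matches the source locale's pattern (like "12/25/2025" in a German text).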

6. Verity Errors

Verity errors (sometimes called "accuracy to real-world" errors) occur when factual information is incorrect, regardless of the source.

6.1 Factual Error

The translation contains objectively incorrect information.

Example:

Translation (EN): "Mount Everest, the world's tallest peak at 8,849 meters..."
Error: If the source stated a wrong height, it should be flagged even though the translation matches the source
Severity: Major to Critical (depends on impact)

6.2 Legal/Compliance

Content violates legal or regulatory requirements for the target market.

Example:

Translation (DE): "This product cures cancer."
Error: Medical claim may violate EU advertising regulations
Severity: Critical

7. Design Errors

Design errors occur when the translation causes visual or functional problems in the final product.

7.1 Truncation

Text is cut off due to length.

Example:

UI Button: [Save Chan...]
Error: "Save Changes" truncated; button too small
Severity: Major
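Truncation risk can be pre-checked before anything is rendered. The sketch below compares translated string lengths against per-string character budgets; the string IDs and budgets are hypothetical, and real checks measure rendered pixel width with the actual font.

```python
# Hypothetical truncation pre-check: compare translated string lengths
# against per-string character budgets exported from the UI design.
# Real checks measure rendered pixel width with the actual font.
def flag_truncation(strings, budgets):
    """strings: {string_id: translation}; budgets: {string_id: max chars}."""
    return [
        sid for sid, text in strings.items()
        if len(text) > budgets.get(sid, float("inf"))
    ]

labels = {"btn.save": "Änderungen speichern", "btn.ok": "OK"}
budgets = {"btn.save": 12, "btn.ok": 12}
print(flag_truncation(labels, budgets))  # ['btn.save']
```

Checks like this matter most for languages that expand relative to English; German UI strings, as in this example, routinely exceed the source length.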

7.2 Overlap

Text overlaps with other elements.

Example:

Label text runs into adjacent field or image
Error: Text expansion not accommodated
Severity: Major

7.3 Encoding

Character encoding problems cause display issues.

Example:

Display shows: "CafÃ©" instead of "Café"
Error: UTF-8 text decoded as Latin-1 (encoding mismatch)
Severity: Major
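The "Ã©" pattern is the signature of UTF-8 bytes being decoded as Latin-1, which two lines of Python can reproduce:

```python
# Reproduce the classic mojibake: the UTF-8 bytes for "é" (0xC3 0xA9)
# decoded as Latin-1 become the two characters "Ã©".
text = "Café"
mojibake = text.encode("utf-8").decode("latin-1")
print(mojibake)  # CafÃ©

# Round-tripping back recovers the original string.
fixed = mojibake.encode("latin-1").decode("utf-8")
print(fixed)  # Café
```

Because the transformation is reversible, this class of error is usually a pipeline bug (a file read with the wrong encoding) rather than a translation mistake.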

Severity Levels

Each error is assigned a severity level that determines its impact on the quality score:

Critical (Severity 1)

Errors that must be fixed immediately:

  • Safety risks (e.g., wrong medication dosage)
  • Legal liability (e.g., compliance violations)
  • Complete meaning reversal
  • Offensive or culturally inappropriate content
  • System-breaking bugs

Penalty: 25 points (ISO 5060 default)

Major (Severity 2)

Errors that significantly impact quality:

  • Meaning changes that affect understanding
  • Missing critical information
  • Prominent fluency errors
  • Brand voice violations
  • Functional issues

Penalty: 5 points (ISO 5060 default)

Minor (Severity 3)

Errors with limited impact:

  • Small fluency issues
  • Minor style deviations
  • Inconsistencies in non-critical terms
  • Formatting issues in less visible areas

Penalty: 1 point (ISO 5060 default)

Calculating Quality Scores

Using MQM-based scoring, the quality score is calculated as:

Quality Score = 100 - (Penalty Points / Word Count × 100) 

Example calculation:

  • Document: 1,000 words
  • Errors found: 1 Critical, 2 Major, 5 Minor
  • Penalties: (1 × 25) + (2 × 5) + (5 × 1) = 40 points
  • Score: 100 - (40 / 1000 × 100) = 100 - 4 = 96
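The formula and the worked example above can be sketched in a few lines of Python; the function and constant names are ours, not from any standard tooling.

```python
# Score sketch using the ISO 5060 default penalty weights from this guide.
PENALTIES = {"critical": 25, "major": 5, "minor": 1}

def quality_score(word_count, errors):
    """errors maps severity name ("critical"/"major"/"minor") to a count."""
    points = sum(PENALTIES[sev] * count for sev, count in errors.items())
    return 100 - (points / word_count) * 100

# The worked example above: 1,000 words; 1 critical, 2 major, 5 minor.
print(quality_score(1000, {"critical": 1, "major": 2, "minor": 5}))  # 96.0
```

Keeping the weights in one table makes it easy to apply the project-specific severity customizations discussed later without touching the scoring logic.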

Threshold Guidelines

Quality Level   Score Range   Typical Use Case
Excellent       98-100        Publication-ready
Good            95-97         Minor revision needed
Acceptable      90-94         Requires editing
Poor            Below 90      Significant rework needed

Common Error Patterns by Content Type

Marketing Content

Most common errors:

  1. Style/Register (37%)
  2. Terminology inconsistency (24%)
  3. Unidiomatic expressions (19%)
  4. Locale conventions (12%)
  5. Accuracy (8%)

Technical Documentation

Most common errors:

  1. Terminology (42%)
  2. Accuracy - Omission (21%)
  3. Inconsistency (18%)
  4. Fluency - Grammar (11%)
  5. Locale conventions (8%)

Legal Content

Most common errors:

  1. Accuracy - Mistranslation (35%)
  2. Terminology (30%)
  3. Omission (20%)
  4. Verity - Compliance (10%)
  5. Style (5%)

Software UI

Most common errors:

  1. Truncation/Design (28%)
  2. Inconsistent terminology (25%)
  3. Locale conventions (22%)
  4. Untranslated strings (15%)
  5. Accuracy (10%)

Best Practices for Error Classification

1. Use a Standardized Framework

Adopt MQM or ISO 5060 taxonomy rather than creating custom error types. This ensures:

  • Industry-standard reporting
  • Comparability across projects
  • Tool compatibility

2. Define Project-Specific Guidelines

While using MQM categories, customize severity definitions for your project:

project_guidelines:
  critical_conditions:
    - Any error in safety warnings
    - Legal claim mistranslation
    - Brand name errors
  major_conditions:
    - Meaning changes in feature descriptions
    - Missing call-to-action text
  minor_conditions:
    - Style preference deviations
    - Non-critical formatting

3. Calibrate Evaluators

Before production evaluation:

  • Have multiple evaluators assess the same content
  • Compare error identification and severity assignment
  • Discuss differences and align criteria
  • Document decisions for future reference

4. Provide Examples

Build an error database with examples:

Error Type      Source                 Translation                    Correct                               Severity
Mistranslation  "Disable feature"      "Aktivieren Sie die Funktion"  "Deaktivieren Sie die Funktion"       Major
Omission        "Click Save and Exit"  "Cliquez sur Enregistrer"      "Cliquez sur Enregistrer et Quitter"  Minor
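Such a database is easiest to keep consistent when each entry has a fixed shape. A minimal sketch of one record, with field names of our choosing:

```python
from dataclasses import dataclass

# Sketch of one row in an error-example database (field names are ours).
@dataclass
class ErrorRecord:
    error_type: str
    source: str
    translation: str
    correct: str
    severity: str

rec = ErrorRecord(
    error_type="Mistranslation",
    source="Disable feature",
    translation="Aktivieren Sie die Funktion",
    correct="Deaktivieren Sie die Funktion",
    severity="Major",
)
```

A typed record makes it straightforward to filter examples by category or severity when preparing calibration sessions.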

5. Balance Precision and Practicality

While MQM offers 100+ error subtypes, most projects use 20-30 relevant categories. Start with:

  • Top-level categories (7)
  • Most common subcategories for your content type (10-15)
  • Expand as needed based on error patterns

AI-Assisted Error Detection

Modern AI LQA tools can automatically detect many error types:

Error Type                    AI Detection Accuracy (2025)
Spelling                      99%+
Grammar                       95%+
Terminology (with glossary)   90%+
Formatting/Locale             95%+
Mistranslation                85-90%
Style                         75-85%
Omission                      85-90%
Unidiomatic                   70-80%

AI works best when:

  • Trained on your domain
  • Provided with glossaries and style guides
  • Used for initial detection with human validation

FAQ

What is the most common translation error type?

The most common error type varies by content. For technical content, terminology errors are most frequent (40%+). For marketing content, style and register issues dominate (35%+). For legal content, accuracy errors are the primary concern (35%+). Across all content types, fluency errors (grammar, spelling, punctuation) are consistently present but usually classified as minor severity.

How many error categories should I use for LQA?

Start with the 7 MQM top-level categories (Accuracy, Fluency, Terminology, Style, Locale Convention, Verity, Design) and expand to 15-25 subcategories based on your content type. Using too few categories loses valuable information; using too many creates inconsistency between evaluators. ISO 5060 recommends this balanced approach.

What's the difference between a major and minor error?

Major errors significantly impact the user's understanding or experience—they would cause a reader to stop, be confused, or misunderstand the content. Minor errors are noticeable but don't impede comprehension—the message is still understood despite the error. Critical errors create safety, legal, or severe functional risks that require immediate correction.

Can the same error be different severities in different projects?

Yes. A terminology inconsistency might be minor for internal documentation but major for customer-facing product UI. Project guidelines should specify severity criteria based on content importance, audience, and risk factors. This is why calibration and documented guidelines are essential.

How do I handle errors that fit multiple categories?

Choose the primary impact category. If "Sie" (formal German) is translated as "tu" (informal French), this could be Style (register) or Accuracy (wrong meaning for formal context). If the source required formal address, classify as Style/Register since that's the root cause. Only count each error once to avoid inflating penalties.

Conclusion

Systematic error classification is the foundation of effective translation quality management. By using the MQM framework (now standardized in ISO 5060), you can:

  • Ensure consistent evaluation across projects and evaluators
  • Provide actionable feedback that improves translator performance
  • Generate meaningful quality metrics for stakeholder reporting
  • Enable AI-assisted QA with well-defined error categories

Start with the core categories outlined in this guide, customize severity levels for your project needs, and calibrate your team regularly. The result is a quality process that drives measurable improvement.

Ready to implement systematic error classification in your workflow? Try KTTC for AI-powered LQA with full MQM error taxonomy and ISO 5060 compliance.
