
Error Taxonomies & Explainability

Updated 20 February 2026
  • The paper introduces a systematic framework leveraging edit-level Shapley attribution to decompose GEC metrics and quantify individual error contributions.
  • It categorizes edits using an automatic error type taxonomy (e.g., ORTH, ADV) to enable nuanced metric bias analysis and granular performance insights.
  • Empirical validation shows approximately 70% alignment with human judgments, highlighting the approach's potential for targeted feedback and refined metric design.

Error taxonomies and explainability constitute core methodologies for advancing transparency and interpretability in automated grammatical error correction (GEC) evaluation. A rigorous approach to error decomposition enables both granular bias analysis of metrics and actionable feedback to users or researchers. Recent work introduces a framework that systematically attributes sentence-level performance gains to distinct atomic edits, leveraging an automatic error type taxonomy and cooperative game theory for edit-level quantification (Goto et al., 2024).

1. Edit-Level Attribution of Sentence Metrics

A principled decomposition of sentence-level GEC metric scores is achieved by representing the correction process as a set of $N$ atomic edits transforming a source sentence $S$ into a hypothesis $H$. If $M(\cdot|\cdot)$ denotes a real-valued sentence-level metric, the total gain $\Delta M(H|S)$ is the difference $M(H|S) - M(S|S)$. The goal is to allocate additive attribution scores $\phi_i(M)$ to each edit $e_i$ such that:

$$\Delta M(H|S) = \sum_{i=1}^{N} \phi_i(M)$$

Shapley values, originating from cooperative game theory, are adopted to ensure fairness and correctness of contributions. For each edit $e_i$, the Shapley value is:

$$\phi_i(M) = \sum_{U \subseteq E \setminus \{e_i\}} \frac{|U|!\,(N-|U|-1)!}{N!} \left[ \Delta M(S_{U \cup \{e_i\}}|S) - \Delta M(S_U|S) \right]$$

where $E = \{e_1, \ldots, e_N\}$ and $S_U$ denotes the source sentence with only the edits in subset $U$ applied. This attribution generalizes to both reference-based and reference-free metrics.
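The Shapley formula above can be computed exactly for small edit sets by enumerating subsets. Below is a minimal Python sketch; the edit names, per-edit gains, and interaction bonus form a toy metric for illustration, not anything from the paper:

```python
from itertools import combinations
from math import factorial

def exact_shapley(edits, delta_m):
    """Exact Shapley attribution for a small edit set (N <= ~10).

    edits:   list of edit identifiers e_1..e_N.
    delta_m: maps a frozenset of applied edits U to Delta M(S_U | S);
             delta_m(frozenset()) must be 0.
    """
    n = len(edits)
    phi = {}
    for e in edits:
        others = [x for x in edits if x != e]
        total = 0.0
        for k in range(n):
            for combo in combinations(others, k):
                u = frozenset(combo)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (delta_m(u | {e}) - delta_m(u))
        phi[e] = total
    return phi

# Toy metric: each edit has a fixed gain, plus an interaction bonus when
# e1 and e2 co-occur; the Shapley values split that bonus evenly.
gains = {"e1": 0.3, "e2": 0.1, "e3": -0.05}
def delta_m(applied):
    bonus = 0.1 if {"e1", "e2"} <= applied else 0.0
    return sum(gains[e] for e in applied) + bonus

phi = exact_shapley(list(gains), delta_m)
# Efficiency property: attributions sum to the total gain Delta M(H|S).
assert abs(sum(phi.values()) - delta_m(frozenset(gains))) < 1e-9
```

Note how the interaction bonus is shared: each of $e_1$ and $e_2$ receives half of it on top of its individual gain, which is exactly the fairness behavior that motivates using Shapley values over naive per-edit scoring.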

2. Error Type Taxonomies and Edit Labeling

Automatic taxonomies, as provided by ERRANT (Bryant et al., 2017), categorize each atomic edit into one of an exhaustive, mutually exclusive set of error types. The schema includes:

  • ORTH (orthographic: casing, whitespace)
  • MWE (multi-word expressions)
  • NOUN, VERB, ADJ, ADV, PRON, DET, ADP, CONJ (parts of speech)
  • PUNCT, MISC

Each edit $e_i$ receives a single error label, facilitating aggregation of attribution results by error class. This enables metric developers and analysts to assess how metrics treat different grammatical categories and phenomena on a large scale (Goto et al., 2024).
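Aggregating attributions by error label is then a simple grouped average; a sketch with hypothetical edit IDs, labels, and attribution values:

```python
from collections import defaultdict

def aggregate_by_type(attributions, labels):
    """Average edit-level attributions per error-type label."""
    buckets = defaultdict(list)
    for edit_id, value in attributions.items():
        buckets[labels[edit_id]].append(value)
    return {t: sum(vs) / len(vs) for t, vs in buckets.items()}

# Hypothetical attributions and ERRANT-style labels for three edits.
phi = {"e1": 0.04, "e2": 0.06, "e3": 0.18}
labels = {"e1": "ORTH", "e2": "ORTH", "e3": "ADV"}
by_type = aggregate_by_type(phi, labels)  # per-class mean attribution
```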

3. Computation and Practical Algorithm

The procedural pipeline for edit attribution consists of:

  1. Edit extraction ($e_1, \ldots, e_N$) between $S$ and $H$, typically via ERRANT.
  2. Definition of a set function $v(U) = M(S_U|S) - M(S|S)$.
  3. For $N \leq 10$, full enumeration of Shapley values is feasible: all subsets $U \subseteq E \setminus \{e_i\}$ are considered.
  4. For $N > 10$, Shapley values are estimated by uniformly sampling $T$ random permutations and averaging each edit's marginal gain at its sampled position.
  5. Optionally, edit-level attributions are L1-normalized for inter-sentence comparability:

$$\phi_i^{\text{norm}}(M) = \frac{\phi_i(M)}{\sum_j |\phi_j(M)|}$$

This approach is agnostic to the choice of $M$, supporting both reference-free (e.g., SOME, IMPARA, GPT-2 PPL) and reference-based metrics.
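The permutation-sampling estimator of step 4 and the normalization of step 5 can be sketched as follows; the additive toy metric stands in for a real scorer such as SOME or IMPARA, and the edit names are illustrative:

```python
import random

def sampled_shapley(edits, delta_m, T=200, seed=0):
    """Monte-Carlo Shapley estimate from T uniformly sampled permutations."""
    rng = random.Random(seed)
    phi = {e: 0.0 for e in edits}
    for _ in range(T):
        order = edits[:]
        rng.shuffle(order)
        applied, prev = frozenset(), 0.0  # delta_m(empty set) = 0
        for e in order:
            applied = applied | {e}
            cur = delta_m(applied)
            phi[e] += cur - prev  # marginal gain of e at its position
            prev = cur
    return {e: v / T for e, v in phi.items()}

def l1_normalize(phi):
    """L1-normalize attributions for inter-sentence comparability."""
    z = sum(abs(v) for v in phi.values())
    return {e: v / z for e, v in phi.items()} if z else phi

# Additive toy metric standing in for a real sentence-level scorer.
gains = {"e1": 0.3, "e2": 0.1, "e3": -0.05}
est = sampled_shapley(list(gains), lambda U: sum(gains[e] for e in U), T=100)
norm = l1_normalize(est)
```

Because each permutation's marginal gains telescope to the full gain $\Delta M(H|S)$, the sampled estimate preserves additivity exactly; only the per-edit split is approximate.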

4. Empirical Validation and Human Alignment

Edit-level attributions derived from Shapley values exhibit approximately 70% alignment with human assessments. Specifically, edits labeled positive or negative by the sign of $\phi_i(M)$ are compared against their correctness as judged relative to 2–4 human reference corrections. Agreement rates reach 60–70% at a zero threshold and 70–80% when only high-magnitude attributions ($|\phi_i^{\text{norm}}(M)| > 0.5$) are considered (Goto et al., 2024).
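The sign-based agreement computation described above can be sketched as follows; the attribution values and human judgments are hypothetical:

```python
def sign_agreement(phi, human_ok, threshold=0.0):
    """Fraction of edits where sign(phi_i) matches the human judgment.

    phi:       {edit_id: normalized attribution}
    human_ok:  {edit_id: True if the edit is correct w.r.t. references}
    threshold: only edits with |phi_i| > threshold are counted.
    """
    kept = [e for e in phi if abs(phi[e]) > threshold]
    if not kept:
        return None
    hits = sum((phi[e] > 0) == human_ok[e] for e in kept)
    return hits / len(kept)

phi = {"e1": 0.6, "e2": -0.3, "e3": 0.1}
human = {"e1": True, "e2": True, "e3": False}
agree_all = sign_agreement(phi, human)       # over all three edits
agree_big = sign_agreement(phi, human, 0.5)  # only |phi| > 0.5, here just e1
```

Raising the threshold discards low-magnitude attributions, which is why agreement improves in the high-magnitude regime reported above.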

Attribution additivity remains robust across granularities: when merging multiple edits, the resulting Shapley attributions are additive (Pearson $r > 0.95$, Spearman $\rho > 0.90$). Baseline heuristics such as naive addition or subtraction of scores tend to display inferior rank and value correlation, particularly for complex edit sequences.

5. Bias Characterization in Metrics via Taxonomies

Aggregated normalized edit-level Shapley attributions expose systematic metric biases by error type. For the metrics SOME, IMPARA, and GPT-2 PPL, orthography (ORTH) consistently receives the lowest average weights ($\sim$0.05–0.07), while adverbial (ADV) and MWE edits are assigned higher scores ($\sim$0.15–0.20). Punctuation and minor surface-form edits are nearly ignored by some metrics, indicating reduced sensitivity to superficial corrections. This analytical method enables robust introspection into the implicit priorities encoded by reference-free and reference-based sentence-level metrics.

| Error Type | SOME | IMPARA | GPT-2 PPL |
|------------|------|--------|-----------|
| ORTH       | 0.05 | 0.04   | 0.07      |
| NOUN       | 0.12 | 0.10   | 0.11      |
| VERB       | 0.10 | 0.08   | 0.09      |
| ADV        | 0.18 | 0.15   | 0.20      |
| MWE        | 0.14 | 0.12   | 0.16      |

A plausible implication is that observed underweighting of orthographic and surface-form errors reflects a misalignment between current metric incentives and actual human judgment priorities.

6. Implications for Feedback and Metric Design

Decomposing sentence-level scores via edit-level Shapley attribution yields actionable feedback for both end users and system developers. Educational or interactive GEC systems can leverage such attributions to provide targeted, quantitative feedback regarding which corrections contributed meaningfully to fluency or accuracy improvements. For example: “Your use of commas improved fluency by +0.06, but your spelling correction had little impact.” In metric development, this approach offers a pathway to design new metrics or refine existing ones to more uniformly credit valid edits, or to calibrate error-type weighting in line with human perception. This suggests that metric taxonomies may require refinement, potentially by introducing a dedicated ‘surface-form’ weighting component.
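Such feedback can be rendered directly from per-edit attributions. A minimal sketch, assuming a hypothetical noise floor below which an edit is reported as having little impact:

```python
def feedback(edits, noise_floor=0.02):
    """Render per-edit attributions as user-facing feedback lines.

    edits: list of (description, phi) pairs; noise_floor is an assumed
    cutoff for reporting an edit as having little measurable impact.
    """
    lines = []
    for desc, phi in edits:
        if abs(phi) < noise_floor:
            lines.append(f"{desc}: little measurable impact.")
        elif phi > 0:
            lines.append(f"{desc}: improved the sentence score by +{phi:.2f}.")
        else:
            lines.append(f"{desc}: reduced the sentence score by {phi:.2f}.")
    return lines

msgs = feedback([("Comma insertion", 0.06), ("Spelling fix", 0.01)])
```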

7. Conclusions

Error taxonomy–grounded edit-level attribution, as instantiated via Shapley values, establishes a transparent mechanism for auditing the compositionality and fairness of GEC evaluation metrics (Goto et al., 2024). The methodology enables both fine-grained interpretability and empirical bias diagnosis, with implications across educational feedback, system improvement, and the foundational design of reference-free sentence evaluation schemes. Potential future advances include taxonomy revision and dynamic weighting, aimed at further harmonizing automated evaluation with human linguistic preferences.

