
Quality and Capacity-Aware GQA (QCQA)

Updated 23 January 2026
  • Quality and Capacity-Aware GQA (QCQA) is a framework that jointly optimizes transformer performance by balancing model accuracy with resource usage through specialized query grouping.
  • QCQA enhances visual relationship detection by partitioning predicate classes and queries into groups, enabling multi-assignment and improved recall without increasing inference cost.
  • In large language models, QCQA employs evolutionary search to optimize grouped query attention, substantially reducing KV-cache size while boosting generative accuracy.

Quality and Capacity-Aware GQA (QCQA) encompasses a family of algorithms designed to optimize transformer architectures by exploiting the interplay between “quality” (model accuracy) and “capacity” (resource usage or specialization). QCQA research has advanced two distinct but thematically linked frontiers: (1) training-efficient visual relationship detection in vision transformers, and (2) inference-efficient grouped query attention in LLMs. In both domains, QCQA mechanisms leverage groupwise specialization and quality-driven assignment or grouping to maximize performance–resource efficiency trade-offs.

1. Conceptual Foundations

Quality and Capacity-Aware GQA applies to transformer-based models in two major forms:

  • In visual relationship detection (VRD), QCQA refers to a dual-module assignment method that induces query specialization and multi-assignment of ground truths, overcoming inefficiencies in DETR-style training (Kim et al., 2024).
  • In LLM inference, QCQA denotes an evolutionary algorithm for grouping multi-head attention queries, minimizing KV-cache size subject to proxy model quality constraints (Joshi et al., 2024).

Across both applications, the core paradigm is joint optimization of model “capacity use” (e.g., number of specialized queries or KV-cache size) and output “quality” (e.g., detection recall, generation accuracy).

2. QCQA in Visual Relationship Detection

The QCQA framework for VRD consists of two modules operating synergistically to rectify two limitations of conventional label assignment in DETR-like transformers:

2.1 Groupwise Query Specialization

Let $R = \{1, \dots, C_p\}$ be the set of predicate classes and $Q = \{q_1, \dots, q_{N_q}\}$ the set of decoder queries. QCQA partitions $R$ into $G$ disjoint predicate groups $\mathcal{G}^p_1, \dots, \mathcal{G}^p_G$, balanced by total frequency. It then allocates queries into query groups $\mathcal{G}^q_1, \dots, \mathcal{G}^q_G$ proportionally, enforcing:

$$|\mathcal{G}^q_g| \propto \sum_{(s,p,o)\in\mathcal{D}} \mathbf{1}[p\in\mathcal{G}^p_g]$$
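The proportional allocation above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the rounding scheme (floor plus remainder to the largest group) are assumptions.

```python
import numpy as np

def allocate_queries(group_freqs, num_queries):
    """group_freqs[g]: total training frequency of predicate group g.
    Returns the number of decoder queries assigned to each group,
    proportional to the group's frequency."""
    f = np.asarray(group_freqs, dtype=float)
    sizes = np.floor(num_queries * f / f.sum()).astype(int)
    # Hand any rounding remainder to the most frequent group.
    sizes[np.argmax(f)] += num_queries - sizes.sum()
    return sizes

print(allocate_queries([300, 100, 100], num_queries=100))  # -> [60 20 20]
```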

At label–prediction matching time, a group-indexed cost $\mathcal{H}_{\text{group}}$ is added to the standard Hungarian matching cost, forbidding cross-group assignments by penalizing them with $+\infty$. This ensures queries in group $g$ only match GTs from the associated predicate group, enforcing specialization and optimal capacity exploitation.
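A sketch of the group-constrained matching step is below. It is illustrative only: the names are assumptions, and a large finite penalty stands in for $+\infty$ because `scipy.optimize.linear_sum_assignment` rejects infinite costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def group_constrained_match(cost, gt_groups, query_groups, penalty=1e9):
    """cost: (num_gt, num_queries) standard matching cost H_match.
    gt_groups[i]: predicate-group id of ground truth i.
    query_groups[j]: group id of decoder query j."""
    # H_group: a prohibitive penalty whenever the groups disagree.
    mismatch = gt_groups[:, None] != query_groups[None, :]
    total = cost + penalty * mismatch
    rows, cols = linear_sum_assignment(total)
    # Keep only within-group (feasible) assignments.
    keep = ~mismatch[rows, cols]
    return rows[keep], cols[keep]

cost = np.random.rand(3, 6)
gt_groups = np.array([0, 1, 0])
query_groups = np.array([0, 0, 0, 1, 1, 1])
r, c = group_constrained_match(cost, gt_groups, query_groups)
# Every matched query comes from the same group as its ground truth.
assert all(gt_groups[i] == query_groups[j] for i, j in zip(r, c))
```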

2.2 Quality-Aware Multi-Assignment

Conventional matching permits each GT only one positive assignment, penalizing accurate near-misses. QCQA computes a triplet quality score for candidate (GT, prediction) pairs by fusing the subject IoU $v^s_{ij}$, object IoU $v^o_{ij}$, and predicate confidence $v^r_{ij}$. These are combined:

$$v_{ij} = \mathcal{R}(v^s_{ij}, v^o_{ij}) + \lambda_{\text{rel}}\, v^r_{ij}$$

For each GT $t_i$, the top-$k$ predictions (or those above a threshold) are assigned as positives, duplicating $t_i$ in the augmented GT list. A group-constrained Hungarian matcher then assigns GTs to predictions, enabling multiple high-quality predictions to receive positive supervision.
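The quality scoring and top-$k$ selection can be sketched as follows. This is an illustrative sketch: the helper names are assumptions, and the fusion $\mathcal{R}$ is taken here to be the geometric mean of the two IoUs, which is one plausible choice, not necessarily the paper's.

```python
import numpy as np

def quality_scores(iou_subj, iou_obj, conf_pred, lam_rel=0.5):
    # v_ij = R(v^s, v^o) + lambda_rel * v^r, with R assumed to be the
    # geometric mean of subject and object IoU.
    return np.sqrt(iou_subj * iou_obj) + lam_rel * conf_pred

def multi_assign(v, k=2, thresh=0.5):
    """v: (num_gt, num_pred) quality matrix. Returns the list of
    (gt_index, pred_index) pairs assigned as positives."""
    pairs = []
    for i in range(v.shape[0]):
        topk = np.argsort(v[i])[::-1][:k]      # k best predictions for GT i
        for j in topk:
            if v[i, j] >= thresh:              # keep only high-quality ones
                pairs.append((i, j))
    return pairs

v = quality_scores(np.array([[0.9, 0.2, 0.8]]),
                   np.array([[0.8, 0.1, 0.9]]),
                   np.array([[0.7, 0.3, 0.6]]))
print(multi_assign(v, k=2))   # GT 0 gets two positives: predictions 0 and 2
```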

2.3 Unified Matcher and Loss

The final QCQA matcher incorporates both modules, yielding the assignment

$$\sigma^* = \arg\min_{\sigma \in \mathfrak{S}_{N_q}} \sum_{i'=1}^{N_q} \left[ \mathcal{H}_{\text{match}}(t'_{i'}, \hat{t}_{\sigma(i')}) + \mathcal{H}_{\text{group}}(t'_{i'}, \hat{t}_{\sigma(i')}) \right]$$

The loss function includes standard classification and localization terms over all matched pairs.

3. QCQA in LLMs: Grouped Query Attention

In transformer LLMs, the memory and time spent storing key/value features (the KV-cache) during autoregressive inference limit throughput and sequence length. Multi-Query Attention (MQA) reduces the KV-cache by sharing keys/values among all heads, while vanilla Grouped Query Attention (GQA) partitions the $H$ heads into $P$ contiguous groups with shared K/V per group. However, fixed groupings do not optimally balance cache reduction and generative quality.

3.1 Attention Reformulation

For queries $Q_i$, keys $K_i$, and values $V_i$ in head $i$:

  • Standard multi-head: $A_i = \text{softmax}(Q_i K_i^T / \sqrt{d})\, V_i$
  • Grouped: for $i \in G_j$ (group $j$), $A_i = \text{softmax}(Q_i K_{G_j}^T / \sqrt{d})\, V_{G_j}$, where $K_{G_j}, V_{G_j}$ are computed from group-mean pooled projection weights.
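The two formulations above can be sketched in NumPy. This is a minimal illustration under assumed shapes and names; heads in a group share one mean-pooled K/V projection, so only one K/V pair per group needs caching.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_attention(X, W_Q, W_K, W_V, groups):
    """X: (T, d_model); W_Q/W_K/W_V: (H, d_model, d); groups: list of
    head-index lists. Heads in a group share mean-pooled K/V weights."""
    H, _, d = W_Q.shape
    out = [None] * H
    for G in groups:
        # Group-mean pooled key/value projection weights.
        W_KG = W_K[G].mean(axis=0)
        W_VG = W_V[G].mean(axis=0)
        K, V = X @ W_KG, X @ W_VG          # one K/V per group -> smaller cache
        for i in G:
            Q = X @ W_Q[i]                 # queries remain per-head
            out[i] = softmax(Q @ K.T / np.sqrt(d)) @ V
    return np.stack(out)                   # (H, T, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
W_Q, W_K, W_V = (rng.standard_normal((4, 8, 2)) for _ in range(3))
A = grouped_attention(X, W_Q, W_K, W_V, groups=[[0, 1], [2, 3]])
assert A.shape == (4, 4, 2)
```

Setting `groups=[[0], [1], [2], [3]]` recovers standard multi-head attention, and `groups=[[0, 1, 2, 3]]` recovers MQA-style full sharing.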

3.2 QCQA Optimization Objective

Given pretrained weights $W_{K_i}, W_{V_i}$, QCQA seeks groupings $\{G_0, \dots, G_{P-1}\}$ and a layerwise grouping selection that minimize:

  • Weight-Sharing Error (WSE):

$$L_{\text{WSE}} = \sum_{j=0}^{P-1} \sum_{i \in G_j} \left( \bigl\| W_{K_i} - W_{K_{G_j}} \bigr\|^2 + \bigl\| W_{V_i} - W_{V_{G_j}} \bigr\|^2 \right)$$

  • Normalized KV-cache size: $\#\text{groups}/H$

QCQA treats $(L_{\text{WSE}}, \text{KV-cache size})$ as a bi-objective optimization problem and finds Pareto-optimal groupings.
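Both objectives can be evaluated from the pretrained weights alone, as in the sketch below (shapes and names are illustrative assumptions).

```python
import numpy as np

def wse(W_K, W_V, groups):
    """Weight-sharing error: squared distance of each head's K/V projection
    from its group mean."""
    err = 0.0
    for G in groups:
        W_KG = W_K[G].mean(axis=0)
        W_VG = W_V[G].mean(axis=0)
        err += np.sum((W_K[G] - W_KG) ** 2) + np.sum((W_V[G] - W_VG) ** 2)
    return err

def kv_cache_fraction(groups, H):
    # Normalized KV-cache size: number of groups / number of heads.
    return len(groups) / H

rng = np.random.default_rng(1)
H = 8
W_K = rng.standard_normal((H, 16, 4))
W_V = rng.standard_normal((H, 16, 4))
grouping = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(wse(W_K, W_V, grouping), kv_cache_fraction(grouping, H))
```

Singleton groups give zero weight-sharing error but no cache savings; one all-heads group maximizes savings at the largest error, which is exactly the trade-off the Pareto search navigates.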

4. Evolutionary Search and Practical Implementation

QCQA solves the grouping problem using a multi-stage evolutionary algorithm (NSGA-II).

  • Search space: Either “arbitrary group size” (QCQA-AC) with integer vector assignments for group IDs, or “equal-size groups” (QCQA-EC) with permutations.
  • Operators: Random initialization, one/two-point crossover (vector split and recombination), mutation (group reassignment or swap), and selection by non-dominated front and crowding distance.
  • Two-stage search: Per-layer group search (QCQAGroups) generates candidate grouping sets, followed by selection of layers to group via a binary vector search (QCQA). Each candidate is efficiently evaluated using only pretrained weights; no forward passes or data are needed.
  • Application: The method modifies the Transformer by replacing WKi,WViW_{K_i}, W_{V_i} with group-mean equivalents in grouped layers, reducing stored K/V per layer. KV-cache allocation is updated, and optional lightweight fine-tuning is performed to recover lost performance.
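The search described above can be caricatured with the toy loop below: a greatly simplified stand-in for NSGA-II that assumes QCQA-AC-style integer group-id encoding, single-parent mutation, and naive Pareto filtering in place of non-dominated sorting with crowding distance.

```python
import numpy as np

def wse(W_K, W_V, gid):
    # Weight-sharing error for an integer group-id vector gid (one id per head).
    err = 0.0
    for g in np.unique(gid):
        G = np.where(gid == g)[0]
        err += np.sum((W_K[G] - W_K[G].mean(0)) ** 2)
        err += np.sum((W_V[G] - W_V[G].mean(0)) ** 2)
    return err

def evolve(W_K, W_V, H, P_max=4, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    pool = [rng.integers(0, P_max, H) for _ in range(pop)]
    for _ in range(gens):
        # Mutation: reassign one head's group id in a random parent.
        child = pool[rng.integers(len(pool))].copy()
        child[rng.integers(H)] = rng.integers(P_max)
        pool.append(child)
        # Keep only non-dominated candidates (WSE vs. normalized cache size).
        scores = [(wse(W_K, W_V, g), len(np.unique(g)) / H) for g in pool]
        front = [g for g, s in zip(pool, scores)
                 if not any(o[0] <= s[0] and o[1] <= s[1] and o != s
                            for o in scores)]
        pool = front or pool
    return pool

rng = np.random.default_rng(2)
W_K = rng.standard_normal((8, 16, 4)); W_V = rng.standard_normal((8, 16, 4))
front = evolve(W_K, W_V, H=8)
# Candidate evaluation touches only the pretrained weights: no forward passes.
```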

5. Experimental Analysis

QCQA consistently demonstrates superior trade-offs compared to baselines in both domains.

5.1 Visual Relationship Detection

Empirical results on Visual Genome (VG150, ResNet-101 backbone):

| Model | R@100 (Base / QCQA) | ΔR@100 | mR@100 (Base / QCQA) | ΔmR@100 |
|-------|---------------------|--------|----------------------|---------|
| HOTR  | 26.6 / 29.1         | +2.0   | 9.7 / 12.7           | +3.0    |
| ISG   | 32.1 / 36.0         | +3.9   | 8.4 / 14.1           | +5.7    |
  • Ablation (ISG): only specialization, +3.2 R@100/+5.1 mR@100; only multi-assignment, +1.0/+0.8; both (QCQA), +3.9/+5.7.
  • No inference or parameter cost is incurred (Kim et al., 2024).

5.2 LLM Grouped Query Attention

On Llama2-7B with KV-cache at 50% of baseline:

  • No fine-tuning: GQA 24.3% accuracy; QCQA-EC ~38%; QCQA-AC 44.3% (+20% absolute over GQA).
  • With fine-tuning: GQA 45.01%; QCQA-AC 55.56% (+10.55% absolute).
  • QCQA-AC requires only 60% of the KV-cache to match GQA’s fine-tuned accuracy—i.e., a 40% reduction.
  • Outperforms GQA on HellaSwag, ARC, MNLI, Winogrande, WikiText by 6–22% (NLU tasks, PPL for WikiText) (Joshi et al., 2024).

6. Application Workflow and Guidelines

To deploy QCQA in LLMs:

  1. Choose the maximum number of groups $P_{\text{max}}$ (e.g., 2, 4, 8, 16).
  2. Extract $W_K$, $W_V$, and $H$ from the base MHA checkpoint.
  3. Run QCQAGroups for each layer ($\text{pop\_size}_1 \approx 100$, $\text{ngen}_1 = 50$–$100$) to sample Pareto-efficient groupings.
  4. Run QCQA selection across layers ($\text{pop\_size}_2 \approx 100$, $\text{ngen}_2 = 50$) to finalize the group pattern.
  5. Update model weights and attention logic for grouped storage/computation.
  6. Optionally fine-tune on a small dataset (recommended hyperparameters: LR $= 2\times10^{-5}$, batch size 2, three epochs).
  7. Tune evolutionary parameters as appropriate. Typical defaults: crossover rate 0.9, mutation rate 0.1.
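Step 5 of the workflow amounts to overwriting each head's K/V projection with its group mean, sketched below (weight shapes and names are illustrative assumptions, not a specific framework's API).

```python
import numpy as np

def apply_grouping(W_K, W_V, groups):
    """Overwrite each head's K/V projection with its group mean, so all heads
    in a group share identical weights and one cached K/V pair suffices."""
    W_K, W_V = W_K.copy(), W_V.copy()
    for G in groups:
        W_K[G] = W_K[G].mean(axis=0)   # broadcast the mean back to every head
        W_V[G] = W_V[G].mean(axis=0)
    return W_K, W_V

rng = np.random.default_rng(3)
W_K = rng.standard_normal((8, 16, 4)); W_V = rng.standard_normal((8, 16, 4))
W_Kg, W_Vg = apply_grouping(W_K, W_V, [[0, 1, 2, 3], [4, 5, 6, 7]])
assert np.allclose(W_Kg[0], W_Kg[3])   # heads in a group now share weights
```

In a real deployment the attention kernel would then compute and cache K/V once per group rather than once per head, realizing the cache reduction.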

Constraints: QCQA is only applicable to pretrained multi-head attention (MHA) models and not as a replacement for from-scratch training. Possible future enhancements include alternative distance metrics, dynamic group sizes, and selective grouping (e.g., only keys or only values) (Joshi et al., 2024).

7. Impact and Future Directions

QCQA establishes that principled, quality- and capacity-aware groupwise structuring—whether through assignment in label matching or KV-cache grouping—substantially improves the efficiency of transformer-based models under resource constraints. In VRD, QCQA induces query specialization and increased positive signal, enhancing recall and mean recall without any runtime or parameter burden. In LLMs, QCQA achieves substantial KV-cache compression and accuracy gains via evolutionary search–based grouping, outperforming ad hoc GQA/MQA approaches. These results generalize across multiple architectures and tasks. Future research may extend QCQA principles to train-from-scratch settings, explore more sophisticated distance measures for grouping, and further optimize group selection per model layer or function. The algorithms are computationally frugal: for a 7B-parameter Llama2, QCQA search completes in hours on CPU, orders of magnitude faster than repeated LLM forward passes (Joshi et al., 2024).
