Multimodal Routing Research Overview
- Multimodal Routing Research is the study of dynamically selecting vision-language models based on multimodal inputs to optimize trade-offs between accuracy and cost.
- VL-RouterBench provides a reproducible pipeline with over 30K samples, 17 models, and extensive task stratification, enabling rigorous performance evaluation.
- Routing algorithms ranging from non-parametric baselines to end-to-end deep learning methods demonstrate clear improvements, though a performance gap remains against the Oracle benchmark.
Multimodal routing research investigates algorithms, systems, and evaluation paradigms for dynamically selecting among multiple vision-language models (VLMs) in response to a given multimodal input. The field is motivated by the need to optimize the trade-off between computational cost and task accuracy as the diversity and scale of models and datasets increase. With the emergence of benchmarking suites such as VL-RouterBench, multimodal routing is formalized as a rigorous research area with reproducible pipelines, comprehensive task coverage, and standardized performance metrics (Huang et al., 29 Dec 2025).
1. Construction and Scale of VL-RouterBench
Data for multimodal routing research is constructed from raw inference and scoring logs across a diverse set of VLMs and benchmarks. The VL-RouterBench protocol uses VLMEvalKit to collect, for each sample–model pair $(i, j)$:
- The image path and corresponding text prompt.
- The model output and a rule-based correctness label $q_{ij} \in \{0, 1\}$.
- Input and output token counts $t^{\text{in}}_{ij}$, $t^{\text{out}}_{ij}$ and per-million-token prices $p^{\text{in}}_j$, $p^{\text{out}}_j$.
These elements produce a quality matrix $Q = [q_{ij}]$ and a cost matrix $C = [c_{ij}]$ with $c_{ij} = (t^{\text{in}}_{ij}\, p^{\text{in}}_j + t^{\text{out}}_{ij}\, p^{\text{out}}_j) / 10^6$.
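The matrix construction described above can be sketched as follows; the record field names and the per-million-token price convention are assumptions for illustration, not the benchmark's actual log schema:

```python
# Sketch: build quality and cost matrices from per-(sample, model) log records.
# Field names (sample_id, in_tokens, in_price, ...) are hypothetical.

def build_matrices(records, samples, models):
    """records: iterable of dicts with keys sample_id, model, correct,
    in_tokens, out_tokens, in_price, out_price (prices per million tokens)."""
    n, m = len(samples), len(models)
    sidx = {s: i for i, s in enumerate(samples)}
    midx = {mo: j for j, mo in enumerate(models)}
    Q = [[0.0] * m for _ in range(n)]  # binary correctness
    C = [[0.0] * m for _ in range(n)]  # dollar cost per sample-model pair
    for r in records:
        i, j = sidx[r["sample_id"]], midx[r["model"]]
        Q[i][j] = 1.0 if r["correct"] else 0.0
        # Dollar cost: token counts times the per-million-token prices.
        C[i][j] = (r["in_tokens"] * r["in_price"]
                   + r["out_tokens"] * r["out_price"]) / 1e6
    return Q, C
```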
VL-RouterBench covers:
- 14 datasets spanning 3 task groups;
- 30,540 samples;
- 17 models (15 open-source, 2 API);
- 519,180 sample–model pairs;
- A total token volume of 34,494,977 tokens.
The benchmark data is stored in HDF5/CSV formats, with unified parsing and correctness labeling for reproducibility.
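The scale figures above are mutually consistent, as a quick arithmetic check shows:

```python
# Consistency check on the reported benchmark scale:
# 519,180 sample-model pairs across 17 models imply 30,540 evaluated samples.
n_pairs, n_models = 519_180, 17
n_samples = n_pairs // n_models
assert n_samples * n_models == n_pairs      # divides exactly
avg_tokens = 34_494_977 / n_pairs           # average tokens per pair
print(n_samples, round(avg_tokens, 1))      # → 30540 66.4
```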
2. Task Taxonomy and Dataset Stratification
VL-RouterBench partitions benchmark datasets into three routability groups, reflecting both task diversity and modality challenges:
- General (broad knowledge and robustness):
- MMBench, MMStar, MMMU, RealWorldQA, InfoVQA, HallusionBench.
- STEM (diagrammatic and symbolic reasoning):
- MathVista, MathVision, MathVerse, AI2D.
- Charts & OCR (text-in-image recognition and reasoning):
- TextVQA, ChartQA, DocVQA, OCRBench.
Notable dataset statistics include:
- MMBench: 3,217 questions across 20 skills.
- MMMU: 11.5K college-level questions.
- TextVQA: 45,336 questions on 28,408 images.
- DocVQA: 50,000 questions spanning 12,767 documents.
This stratification enables the assessment of routers' generality and specialization across visually rich, symbolic, and text-centric multimodal inputs.
3. Performance Metrics and Router Ranking Protocol
Evaluation of routers in multimodal routing is guided by three primary metrics:
- Average Accuracy: $\mathrm{Acc}(r) = \frac{100}{N} \sum_{i=1}^{N} q_{i, r(i)}$ (in %), where $r(i)$ is the model selected by router $r$ for sample $i$ and $q_{i,j}$ is the binary correctness of model $j$ on sample $i$.
- Average Cost (in $ per 10K samples): $\mathrm{Cost}(r) = \frac{10^4}{N} \sum_{i=1}^{N} c_{i, r(i)}$, where $c_{i,j}$ is the dollar cost of running model $j$ on sample $i$.
- Throughput: thousands of tokens processed per second, averaged over test samples.
Routers are ranked by a Rank Score that combines normalized cost and accuracy via a weighted harmonic mean with a tunable weight parameter. The standard protocol supports Pareto analysis, favoring routers that achieve high accuracy at low cost.
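The two aggregate metrics can be computed directly from the quality and cost matrices; a minimal sketch (the Rank Score is omitted because its normalization and weighting are benchmark-specific):

```python
# Sketch: Average Accuracy (%) and Average Cost ($ per 10K samples) for a
# router's selections, given quality matrix Q and cost matrix C.

def evaluate_router(Q, C, selection):
    """selection[i] = index of the model the router picks for sample i."""
    n = len(selection)
    avg_acc = sum(Q[i][selection[i]] for i in range(n)) / n
    avg_cost = sum(C[i][selection[i]] for i in range(n)) / n * 10_000
    return avg_acc * 100.0, avg_cost
```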
4. Routing Algorithms and Baselines
Three non-parametric baselines are defined:
- Oracle: selects, for each sample, the lowest-cost correct model (or the globally cheapest if none are correct).
- Strongest: single model with highest average accuracy on the training split.
- Cheapest: single model with lowest average cost on the training split.
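The Oracle rule above can be sketched directly from the quality and cost matrices (list-of-lists here, as a minimal illustration):

```python
# Sketch of the Oracle baseline: for each sample, pick the cheapest model
# among those that answered correctly; if none did, fall back to the model
# with the lowest average cost across the whole split.

def oracle_selection(Q, C):
    n, m = len(Q), len(Q[0])
    avg_cost = [sum(C[i][j] for i in range(n)) / n for j in range(m)]
    globally_cheapest = min(range(m), key=lambda j: avg_cost[j])
    picks = []
    for i in range(n):
        correct = [j for j in range(m) if Q[i][j] == 1.0]
        picks.append(min(correct, key=lambda j: C[i][j]) if correct
                     else globally_cheapest)
    return picks
```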
Feature-level routers employ fixed text and image embeddings combined via mechanisms such as:
- KNN
- PRkNN (pairwise-preference aggregation)
- One-vs-Rest (OVR)
- K-means
- Linear classifier
- Multilayer Perceptron (MLP)
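As an illustration of the feature-level family, a minimal KNN router might look like the following; the fused-embedding inputs, Euclidean distance, and neighbor-accuracy voting are assumptions for the sketch, not the benchmark's exact implementation:

```python
# Sketch of a KNN feature-level router: embed each query (fixed text+image
# embeddings, assumed pre-computed and concatenated), find its k nearest
# training queries, and route to the model with the highest accuracy among
# those neighbors.

def knn_route(train_feats, train_Q, query_feat, k=5):
    def dist(a, b):  # Euclidean distance in the fused embedding space
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(range(len(train_feats)),
                       key=lambda i: dist(train_feats[i], query_feat))[:k]
    m = len(train_Q[0])
    # Score each candidate model by its accuracy over the k neighbors.
    scores = [sum(train_Q[i][j] for i in neighbors) / k for j in range(m)]
    return max(range(m), key=lambda j: scores[j])
```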
End-to-end routers fine-tune multimodal backbones directly and include:
- CosineCls (contrastive classifier)
- RouterDC (dual-contrastive)
- ZOOTER (reward-guided soft labels)
- VLC (classifier on BERT-style multimodal encoder)
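As a sketch of the contrastive-classifier idea behind CosineCls, inference can reduce to a cosine-similarity argmax between the query's multimodal embedding and one learned embedding per candidate model; how those embeddings are trained (e.g., contrastively against correctness labels) is omitted here, and the actual architecture differs:

```python
# Sketch: route to the model whose learned embedding has the highest
# cosine similarity with the query embedding.

def cosine_route(query_emb, model_embs):
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    return max(range(len(model_embs)),
               key=lambda j: cos(query_emb, model_embs[j]))
```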
A summary of comparative results at each router's best operating point (the configuration that maximizes Rank Score):
| Router | Avg Acc (%) | Avg Cost ($/10K) | Rank Score | Throughput (K toks/s) |
|---|---|---|---|---|
| Oracle | 95.60 | 0.37 | 93.68 | – |
| Strongest | 78.01 | 2.72 | 68.88 | – |
| Cheapest | 62.43 | 0.14 | 64.63 | – |
| MLP | 77.49 ± 0.56 | 1.13 ± 0.13 | 74.23 ± 0.22 | 146.71 |
| RouterDC | 77.52 ± 1.04 | 1.04 ± 0.26 | 74.59 ± 1.05 | 6.31 |
| VLC | 78.09 ± 1.17 | 1.23 ± 0.03 | 74.33 ± 0.51 | 6.74 |
MLP and RouterDC achieve the highest learned Rank Scores (~74.2–74.6), outperforming both Strongest and Cheapest, while substantial separation remains from the Oracle.
5. Upper Bounds, Architectural Insights, and Improvement Directions
The Oracle provides an upper bound for realistic routing, attaining 95.60% accuracy at $0.37 per 10K samples and a Rank Score of 93.68. The best learned router by Rank Score (RouterDC) trails the Oracle by 18.08 accuracy points and 19.09 Rank Score points. This persistent gap, consistent across cost budgets, is illustrated by the Pareto frontiers.
Ablation studies indicate:
- Higher-dimensional text+vision embeddings improve routing outcomes.
- The “Normalize + Concat” feature fusion matches or exceeds gated/multimodal bilinear fusion.
- Among backbone architectures for end-to-end routing, LXMERT yields the best Rank Score.
- Closing the Oracle gap likely requires routers to better leverage fine-grained visual information (e.g., spatial layout, OCR region segmentation) and deeper modeling of question structure (e.g., question dependency graphs).
A plausible implication is that architectural innovations emphasizing localized visual cues and structured textual analysis will be necessary for further advances.
6. Reproducibility and Toolchain Ecosystem
VL-RouterBench provides a fully scriptable pipeline, with all code and processed logs intended for open-source release. The pipeline incorporates:
- Data Preparation: Unified log parsing, correctness evaluation, and construction of quality/cost matrices.
- Router Training: Train/dev/test split (7:1:2), soft-label cross-entropy with a cost penalty, and routines supporting both feature-level and end-to-end training regimes.
- Router Evaluation: Automated computation of all primary metrics, Pareto analysis, and visualization utilities.
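The training objective, soft-label cross-entropy with a cost penalty, can be sketched as follows; the target construction and the penalty weight `lam` are assumptions about the general recipe, not the benchmark's exact loss:

```python
# Sketch: soft targets favor models that are correct, discounted by cost;
# the router is trained with cross-entropy against these targets.
import math

def soft_targets(q_row, c_row, lam=1.0):
    # Score each model by correctness minus a cost penalty, then softmax.
    scores = [q - lam * c for q, c in zip(q_row, c_row)]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def soft_label_ce(logits, targets):
    # Cross-entropy between the router's softmaxed logits and soft targets.
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return -sum(t * math.log(e / z) for t, e in zip(targets, exps))
```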
This infrastructure is modular, enabling straightforward integration of new datasets, VLMs, and routing approaches, and positioning VL-RouterBench as a standard empirical platform for multimodal routing evaluation (Huang et al., 29 Dec 2025).