
MA-BBOB Suite: Synthetic Affine BBOB Benchmarks

Updated 5 February 2026
  • MA-BBOB Suite is a generator of synthetic, noiseless black-box optimization functions created by linearly mixing 24 canonical BBOB problems.
  • It uses affine combinations with tunable weights, shifts, and rotations to construct diverse landscapes that smoothly interpolate between standard BBOB features.
  • Its integration with platforms like IOHprofiler supports robust benchmarking, algorithm selection, and automated configuration in optimization studies.

The MA-BBOB Suite is a generator of synthetic, continuous, noiseless black-box optimization test functions designed to extend the well-established BBOB suite through affine mixtures of its 24 base problems. By linearly combining multiple BBOB functions, optionally with shifts, rotations, and normalization, MA-BBOB synthesizes diverse and tunably complex landscapes while preserving relevant landscape characteristics and performance patterns crucial for evaluating optimization algorithms and per-instance algorithm selectors. The suite is directly integrated into platforms such as IOHprofiler and widely adopted for benchmarking, exploratory landscape analysis (ELA), and automated machine learning pipelines.

1. Mathematical Construction and Parameterization

MA-BBOB defines an instance as an affine sum of $k$ base BBOB functions $f_{i_1},\dots,f_{i_k}$ selected from the canonical collection of 24 distinct, rotated, and shifted functions. The general construction is
$$F(x) = \sum_{j=1}^{k} \alpha_{i_j}\, s_{i_j}\, \big[ f_{i_j}(x - \delta) - f_{i_j}^* \big],$$
where

  • $\alpha_{i_j} \ge 0$ are normalized weights with $\sum_j \alpha_{i_j} = 1$;
  • $s_{i_j}$ are per-function scaling factors chosen so that the range of log-precision, $[\min \log_{10}(f_{i_j} - f_{i_j}^*),\; \max \log_{10}(f_{i_j} - f_{i_j}^*)]$, is mapped to a standardized interval, e.g. $[-8, 2]$;
  • $\delta \in [-5, 5]^d$ is a uniformly random global-optimum shift;
  • $f_{i_j}^* = \min_x f_{i_j}(x)$, ensuring that $F(x)$ attains its minimum $0$ at $x = \delta$ (Vermetten et al., 2023, Vermetten et al., 2023, Dietrich et al., 2024, Long et al., 2024).
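The construction above can be illustrated with a minimal NumPy sketch. The sphere and Rastrigin functions below are illustrative stand-ins for the 24 rotated/shifted BBOB components (both have optimum value $0$ at the origin, so $f^* = 0$), and the unit scaling factors are a simplifying assumption rather than the suite's log-precision-based values:

```python
import numpy as np

# Toy stand-ins for BBOB components (the real suite uses the 24
# rotated/shifted BBOB functions; these are illustrative only).
def sphere(x):
    return np.sum(x**2)

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ma_bbob_like(x, components, alphas, scales, delta):
    """Affine mixture F(x) = sum_j alpha_j * s_j * [f_j(x - delta) - f_j*].

    For these stand-ins the component optima f_j* are 0 at the origin,
    so F attains its minimum 0 at x = delta.
    """
    z = x - delta
    return sum(a * s * (f(z) - 0.0)  # f_j* = 0 for both stand-ins
               for f, a, s in zip(components, alphas, scales))

rng = np.random.default_rng(42)
d = 5
delta = rng.uniform(-5, 5, size=d)   # random global-optimum shift
alphas = np.array([0.7, 0.3])        # normalized weights, sum to 1
scales = np.array([1.0, 1.0])        # unit scaling, for the sketch only

f_at_opt = ma_bbob_like(delta, [sphere, rastrigin], alphas, scales, delta)
print(f_at_opt)  # 0.0: the mixture's minimum sits at the shifted optimum
```

Subtracting each $f_{i_j}^*$ before mixing is what guarantees the construction has a known global minimum of exactly $0$ at $\delta$, a property the real suite relies on for measuring precision.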

Weight generation follows a thresholding process: sample $w_j \sim U(0,1)$, keep only the $m$ largest values above a threshold $T$ (default $0.85$), zero out the rest, and normalize. On average, about $3.6$ weights are nonzero, but the process can be tuned (via $T$) to control mixture complexity. For pure random mixtures, weights can instead be sampled from a Dirichlet distribution, with or without further thresholding.

In some variants, an additional random orthogonal rotation $A$ and shift $t$ are injected, yielding

$$F(x) = \sum_{i=1}^{24} w_i\, f_i(Ax + t)$$

with $w \in \Delta^{23}$, the standard simplex of 24 nonnegative weights summing to one (Long et al., 2024). This augments the diversity of spatial transformations and global-optimum locations.
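For the rotated variant, $A$ must be a random orthogonal matrix. A standard construction (not specified in the source papers, so treat this as one reasonable choice) is the QR decomposition of a Gaussian matrix with a sign correction, which yields a Haar-uniform rotation:

```python
import numpy as np

def random_rotation(d, rng):
    """Haar-distributed random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    # Fixing the signs of the columns by diag(R) makes the distribution
    # uniform over the orthogonal group rather than biased by QR conventions.
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
A = random_rotation(4, rng)
print(np.allclose(A @ A.T, np.eye(4)))  # True: A is orthogonal
```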

2. Affine Function Mixtures and Landscape Properties

MA-BBOB’s function space is a convex hull over the canonical BBOB landscapes. Varying the weights $w = (\alpha_{i_1},\dots,\alpha_{i_k})$ continuously interpolates between base landscapes, realizing smooth gradations in landscape complexity, modality, ruggedness, and global structure. With $k = 2$, the resulting mixed function transitions linearly between two canonical landscapes; as $k \to 24$, high-order mixtures concentrate toward centrally blended landscapes that statistically resemble averages over the base suite (Dietrich et al., 2024).

This continuous interpolation guarantees both

  • smooth transitions of ELA feature values: e.g., ruggedness, multimodality, and variable interaction change continuously as mixture weights vary;
  • continuity in algorithmic performance: optimizer rankings, measured by metrics such as the area over the convergence curve (AOCC) or the area under the empirical CDF (AUC), vary predictably along mixture paths, with each optimizer excelling on mixtures weighted toward its preferred components (Vermetten et al., 2023, Vermetten et al., 2023, Dietrich et al., 2024).

An important technical result is that the “mixture envelope” of ELA features covers but does not extend beyond the convex hull of the BBOB base suite in feature space, meaning that MA-BBOB mixtures neither extrapolate outside canonical diversity nor create pathological scenarios absent from the original set (Dietrich et al., 2024).

3. Instance Generation: Algorithms and Practical Implementation

The canonical implementation, integrated into IOHprofiler (including IOHexperimenter; see GitHub: IOHprofiler/IOHexperimenter), follows:

  • Input parameters: desired dimension $d$, number of active components $k$, random seed, weight threshold $T$.
  • Algorithm:
  1. Draw weights $w_i \sim U(0,1)$ for each $i = 1,\dots,24$.
  2. Compute the threshold $\tau = \min(T, \text{3rd-largest}\{w_i\})$; set $w_i \leftarrow \max(w_i - \tau, 0)$ and $\alpha_i = w_i / \sum_j w_j$.
  3. Select the base functions with $\alpha_i > 0$; choose their instances (random seeds) and retrieve scaling factors $s_i$.
  4. Sample a random optimum shift $\delta$ uniformly in $[-5, 5]^d$.
  5. Compose $F(x)$ as the affine mixture detailed above (Vermetten et al., 2023, Vermetten et al., 2023).
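Steps 1–4 can be sketched directly in NumPy (step 5 then plugs the resulting weights, scalings, and shift into the mixture formula of Section 1). Note how the $\min(T, \text{3rd-largest})$ threshold guarantees that at least two components survive the subtract-and-clamp step:

```python
import numpy as np

def ma_bbob_weights(rng, n_funcs=24, T=0.85):
    """Steps 1-2: draw raw weights, threshold, renormalize.

    tau = min(T, 3rd-largest w_i) ensures the two largest weights stay
    strictly positive after the subtraction-and-clamp step.
    """
    w = rng.uniform(0, 1, size=n_funcs)   # step 1: raw uniform weights
    tau = min(T, np.sort(w)[-3])          # step 2: threshold
    w = np.maximum(w - tau, 0.0)          # zero out all but the largest
    return w / w.sum()                    # normalize to alpha_i

rng = np.random.default_rng(1)
alpha = ma_bbob_weights(rng)
active = np.flatnonzero(alpha)            # step 3: selected base functions
delta = rng.uniform(-5, 5, size=10)       # step 4: optimum shift in [-5,5]^d
print(len(active), alpha.sum())
```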

Pseudocode reflects this process, and parameterization allows for:

  • controlling mixture diversity by varying $k$ and $T$;
  • configuring spatial transformations via $A$ and $t$ (rotation and translation).

MA-BBOB is callable from R/Python APIs with the suite name "ma_bbob", supports arbitrary numbers of instances (e.g., $1,\dots,1000$), and allows for custom weight or shift settings. The core generation is data-independent and reproducible given a seed and parameters (Vermetten et al., 2023).

4. Benchmarking, Exploratory Landscape Analysis, and Empirical Properties

MA-BBOB is specifically designed to diversify the landscape characteristics accessible for benchmarking black-box optimizers and evaluating meta-algorithmic frameworks, such as automated algorithm selection and configuration. Key empirical insights include:

  • Instance Space Coverage: MA-BBOB instances (visualized via UMAP or PCA embeddings of ELA features) densely populate the interior of the convex hull defined by the base 24 BBOB problems, filling gaps in landscape diversity while remaining within the “field” of canonical landscapes (Vermetten et al., 2023, Dietrich et al., 2024).
  • Concentration of Feature Distributions: Violin and box plots of key ELA features (e.g., ela_meta coefficients, ela_distr skewness) show that multi-component mixtures result in narrower distributions compared to the base problems, reflecting increased central concentration in feature space as $k$ increases (Dietrich et al., 2024).
  • Preservation of Difficulty Patterns: Algorithmic rankings on MA-BBOB track those seen on BBOB, with CMA-ES-like optimizers remaining superior and local search methods decreasing in relative performance, particularly as the frequency of strictly unimodal instances drops in mixtures (Vermetten et al., 2023, Dietrich et al., 2024).
  • Performance Generalization: When using algorithm selectors, cross-validation within MA-BBOB mixtures (matching feature distributions) achieves significantly higher performance than selectors trained only on the 24 BBOB base problems, indicating that BBOB alone is insufficient for generalizing to affine combinations (Vermetten et al., 2023, Dietrich et al., 2024).

Empirical studies typically use budgets on the order of $2000d$–$10\,000d$ evaluations per run and recommend at least $1000d$ Sobol or LHS samples for ELA estimation (Vermetten et al., 2023).
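A $1000d$-point Sobol design over the BBOB domain can be drawn with SciPy's quasi-Monte Carlo module; a minimal sketch, assuming SciPy is available (Sobol sequences are best balanced at power-of-two sizes, so the budget is rounded up):

```python
import numpy as np
from scipy.stats import qmc

d = 5
n = 1000 * d  # recommended sample budget for ELA estimation

# random_base2(m) draws 2^m points; round the budget up to a power of two.
sampler = qmc.Sobol(d=d, seed=7)
X = sampler.random_base2(m=int(np.ceil(np.log2(n))))
X = qmc.scale(X, l_bounds=[-5] * d, u_bounds=[5] * d)  # BBOB domain [-5,5]^d
print(X.shape)  # (8192, 5)
```

The scaled sample `X` would then be evaluated on each MA-BBOB instance and fed to an ELA feature extractor.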

5. Applications in Automated Algorithm Selection and Configuration

MA-BBOB has become central in the development and evaluation of landscape-aware algorithm selection (AAS) and automated algorithm configuration (AAC) pipelines:

  • Algorithm Selection: MA-BBOB enables extensive train/test splits for data-driven selection models. Studies show that training exclusively on BBOB severely limits selector efficacy when applied to hybrid instances. Optimal selector performance requires training on a large, distribution-matched set of MA-BBOB mixtures (Vermetten et al., 2023, Dietrich et al., 2024).
  • Algorithm Configuration: Dense neural networks trained on ELA features extracted from $1\,000$ MA-BBOB instances (5d and 20d) generalize strongly, identifying CMA-ES hyperparameter configurations that outperform default settings and are competitive with single best solvers on the majority of held-out BBOB base functions. The best generalization is achieved when MA-BBOB and random function generator (RGF) datasets are combined, yielding maximal ELA coverage (Long et al., 2024).
  • Portfolio Complementarity: When optimizing portfolios, MA-BBOB mixtures reveal hidden complementarity not visible when using only base BBOB problems. For instance, virtual best-solver (VBS) improvements over the single best solver (SBS) rise from 0.043 (2d) to 0.083 (5d) AOCC gap depending on the inclusion or exclusion of dominant algorithms (Dietrich et al., 2024).

Key AAS/AAC workflows involve

  • ELA feature-based models (typically random forests, XGBoost, or neural networks);
  • optimizing selection models for the percentage of VBS–SBS gap closed;
  • careful selection of training data (distribution-matched sampling outperforms feature-diversity “greedy” sampling except when test data are also diversity-forced).
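The selector step of such a workflow can be sketched with scikit-learn. Everything here is a stand-in: the feature matrix and labels are synthetic placeholders for real ELA features and per-instance best-algorithm labels, and an actual pipeline would score the trained model by the fraction of the VBS–SBS gap closed:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical stand-in data: rows are MA-BBOB instances, columns are ELA
# features, labels are the per-instance best algorithm from a small portfolio.
X_train = rng.standard_normal((200, 8))   # e.g. 8 ELA features per instance
y_train = rng.integers(0, 3, size=200)    # e.g. a 3-algorithm portfolio

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X_train, y_train)

X_test = rng.standard_normal((20, 8))
choices = selector.predict(X_test)        # one algorithm index per instance
print(choices.shape)
```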

6. Integration, Practical Usage, and Contamination Mitigation

MA-BBOB is implemented as a dedicated suite in IOHprofiler/IOHexperimenter and is distributed with tools that enable rapid large-scale benchmark generation and logging. Key integration details include:

  • Configuration: Instantiated via suite name, instance number, dimension, weight threshold, and (optionally) random seed and rescaling flag (Vermetten et al., 2023).
  • Logging and Analysis: Standardized output can be processed with IOHanalyzer or COCO tools, utilizing empirical performance profiles and AOCC/AUC metrics (Vermetten et al., 2023).
  • Contamination Avoidance: MA-BBOB hybrids (especially adjacent-pair hybrids $h_i(x) = 0.5\,f_i(x) + 0.5\,f_{i+1}(x)$) serve as contamination-mitigating testbeds for LLMs or meta-heuristics that might otherwise overfit to known function code. Such hybrids are not present in standard corpora, preserving benchmark validity under modern LLM training regimes (Achtelik et al., 2 Feb 2026).

A NumPy implementation of the adjacent-pair hybrid suite is trivial, allowing direct, transparent reproduction of the benchmark set (Achtelik et al., 2 Feb 2026).
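A sketch of such an implementation, using two simple stand-in functions in place of consecutive BBOB components (the real suite would substitute the 24 canonical definitions):

```python
import numpy as np

# Stand-ins for consecutive BBOB functions; both have optimum 0 at the origin.
def f1(x):  # sphere-like
    return np.sum(x**2)

def f2(x):  # Rastrigin-like
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

base = [f1, f2]

def make_hybrid(fa, fb):
    """Adjacent-pair hybrid h(x) = 0.5*fa(x) + 0.5*fb(x)."""
    return lambda x: 0.5 * fa(x) + 0.5 * fb(x)

# One hybrid per adjacent pair in the base list.
hybrids = [make_hybrid(base[i], base[i + 1]) for i in range(len(base) - 1)]
x = np.zeros(3)
print(hybrids[0](x))  # 0.0 at the shared optimum of the stand-ins
```

With the full list of 24 BBOB functions, the same comprehension yields the 23 adjacent-pair hybrids used as a contamination-resistant testbed.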

7. Connections to Multi-objective Generalizations and Suite Extensions

The affine-mixture principle underlying MA-BBOB extends naturally to multi-objective suites:

  • The bbob-biobj and related M-objective suites adopt tuple-wise combinations $(f_{i_1}, f_{i_2}, \dots, f_{i_m})$ of BBOB functions, stratified by function group (separable, ill-conditioned, etc.), and employ careful instance selection to guarantee Pareto-front diversity and non-degeneracy (Brockhoff et al., 2016).
  • Affine mixtures in the single-objective context directly inform the construction of many-objective problem sets, with suite sizes managed by stratified group sampling to avoid combinatorial explosion.

Such approaches facilitate benchmarking, algorithm selection, and reproducibility across the spectrum of continuous optimization, from single to many objectives.


References:

  • (Achtelik et al., 2 Feb 2026) Automatic Design of Optimization Test Problems with LLMs
  • (Vermetten et al., 2023) MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts
  • (Vermetten et al., 2023) MA-BBOB: A Problem Generator for Black-Box Optimization Using Affine Combinations and Shifts
  • (Dietrich et al., 2024) Impact of Training Instance Selection on Automated Algorithm Selection Models for Numerical Black-box Optimization
  • (Long et al., 2024) Landscape-Aware Automated Algorithm Configuration using Multi-output Mixed Regression and Classification
  • (Brockhoff et al., 2016) Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites
