BBOB Test Suite Benchmarking

Updated 30 January 2026
  • BBOB Test Suite is a rigorously designed collection of optimization problems that evaluates black-box optimizers through varied landscape difficulties.
  • The suite employs systematic random transformations and instance generation to ensure reproducibility and fair comparisons between algorithms.
  • Integration with the COCO platform offers standardized logging and objective normalization to support robust performance indicators in both single- and multi-objective settings.

The BBOB (Black-Box Optimization Benchmarking) test suite is a rigorously designed collection of real-parameter optimization problems for evaluating and comparing the performance of black-box optimizers. Originating within the COCO (COmparing Continuous Optimisers) platform, BBOB and its various extensions—including the bi-objective BBOB-biobj suite—have established themselves as de facto standards in continuous optimization benchmarking, providing systematic coverage of landscape difficulty, instance variability, and reproducibility for both single- and multi-objective optimization research.

1. Foundations: Structure and Purpose of the BBOB Test Suite

The original BBOB suite consists of 24 noiseless, continuous, single-objective test functions $f_i:\mathbb{R}^n \to \mathbb{R}$, parametrized to create many problem instances through systematic pseudo-random transformations. The design ensures representative coverage of major landscape phenomena, encompassing separability, multimodality, ill-conditioning, plateaus, and weak structure (Brockhoff et al., 2016, Long et al., 2022). Each function is defined on a bounded hypercube (typically $[-5,5]^n$) and is scalable in dimension, thus emulating a broad range of black-box optimization challenges found in practice.
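As a rough illustration of how a shifted instance of such a function behaves, here is a minimal, self-contained sketch of the sphere function (BBOB f1) with a hypothetical optimum shift. The real suite derives the shift pseudo-randomly from the instance ID and is accessed through the COCO software rather than re-implemented:

```python
import numpy as np

def bbob_sphere(x, x_opt, f_opt=0.0):
    """Sphere function (BBOB f1 style) with an instance-specific optimum shift.

    x_opt and f_opt stand in for the per-instance translation and objective
    shift; the actual suite derives both pseudo-randomly from the instance ID.
    """
    z = np.asarray(x, dtype=float) - np.asarray(x_opt, dtype=float)
    return float(np.sum(z ** 2)) + f_opt

# At the shifted optimum, the function attains its minimum value f_opt:
x_opt = np.array([1.0, -2.0, 0.5])
print(bbob_sphere(x_opt, x_opt))  # 0.0
```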

The COCO platform extends this methodology with standardized logging, strict control over randomization (enabling fair comparison between deterministic and stochastic solvers), and support for multi-objective versions such as the BBOB-biobj suite (Brockhoff et al., 2016).

2. Function Classes, Instance Generation, and Transformations

BBOB functions are systematically categorized according to their difficulty and landscape characteristics:

| Group Index | Group Name | Example Problems | Dominant Properties |
|---|---|---|---|
| 1 | Separable | Sphere, Separable Ellipsoid | No variable coupling |
| 2 | Moderately conditioned/unimodal | Attractive Sector, Rosenbrock | Mild non-separability, low multimodality |
| 3 | Ill-conditioned/unimodal | Sharp Ridge, Bent Cigar | High conditioning, non-separable |
| 4 | Multimodal/moderate structure | Rastrigin, Schaffer F7 | Many local optima, strong structure |
| 5 | Weakly-structured multimodal | Gallagher, Katsuura, Lunacek | Weak global structure, high ruggedness |

Each problem instance is generated via a chain of randomized transformations:

  • Search space translation: $x' = x - x_{\mathrm{opt}}$, with $x_{\mathrm{opt}} \sim U([-4,4]^n)$.
  • Rotation(s): Orthogonal matrices $R_1, R_2$ are used singly or in sequence, occasionally combined with diagonal conditioning (for ill-conditioned functions).
  • Objective shift and distortion: Additive constants and nonlinear monotone mappings guarantee variation and prevent solver overfitting (Long et al., 2022).
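The transformation chain can be sketched with NumPy. The seed, dimension, and condition number below are illustrative stand-ins for the suite's instance-ID-derived parameters; the result is an ill-conditioned ellipsoid built in the BBOB style, not an exact reproduction of any suite function:

```python
import numpy as np

rng = np.random.default_rng(seed=3)  # the seed plays the role of the instance ID

n = 5
x_opt = rng.uniform(-4, 4, size=n)                 # optimum location in [-4, 4]^n
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal rotation
alpha = 1e6                                        # condition number (illustrative)
D = np.diag(alpha ** (np.arange(n) / (n - 1)))     # diagonal conditioning matrix
f_opt = rng.uniform(-100, 100)                     # additive objective shift

def instance_f(x):
    """Ill-conditioned ellipsoid built from the transformation chain above."""
    z = D @ Q @ (np.asarray(x, dtype=float) - x_opt)  # translate, rotate, condition
    return float(z @ z) + f_opt

# The shifted optimum attains exactly the shifted objective value:
print(instance_f(x_opt) == f_opt)
```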

Over 50 instances per function/dimension are recommended to ensure broad statistical representativeness.

3. BBOB-biobj and BBOB-biobj-ext: Multi-Objective Test Suite Design

To benchmark multi-objective black-box solvers, the BBOB-biobj suite combines pairs of single-objective BBOB functions into bi-objective problems:

  • BBOB-biobj: Selects 10 representative single-objective functions (2 from each group), forming all unordered pairs (including self-pairs) for a total of 55 functions.
  • BBOB-biobj-ext: Augments the core set to 92 by including all remaining unordered within-group pairs (excluding Weierstrass due to its non-unique optimum).
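The suite cardinalities follow directly from counting unordered pairs with repetition, which can be checked with Python's standard library:

```python
from itertools import combinations_with_replacement

# 10 representative single-objective functions -> all unordered pairs
# (including self-pairs) give the 55 problems of bbob-biobj:
core = list(range(10))
pairs = list(combinations_with_replacement(core, 2))
print(len(pairs))  # 55

# Pairing all 24 BBOB functions the same way would yield 300 problems:
full = list(range(24))
print(len(list(combinations_with_replacement(full, 2))))  # 300
```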

Each bi-objective test problem is defined as

$$F^\theta(x) = \bigl(f_\alpha^\theta(x),\ f_\beta^\theta(x)\bigr),$$

where $\theta$ parameterizes independent instances for $f_\alpha$ and $f_\beta$ (Brockhoff et al., 2016).

The pairing strategy guarantees a balance of separable/non-separable, unimodal/multimodal, and conditioning properties in the multi-objective search space. This results in Pareto fronts of varied shapes (convex, concave, disconnected) and difficulty profiles (Brockhoff et al., 2016).
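A bi-objective problem of this form is simply a pairing of two single-objective instances. A minimal sketch using two hypothetical shifted-sphere instances:

```python
import numpy as np

def make_biobj(f_alpha, f_beta):
    """Combine two single-objective functions into one bi-objective problem."""
    return lambda x: (f_alpha(x), f_beta(x))

# Illustrative pairing of two shifted sphere instances (hypothetical optima):
x_a, x_b = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
F = make_biobj(lambda x: float(np.sum((x - x_a) ** 2)),
               lambda x: float(np.sum((x - x_b) ** 2)))

# At x_a the first objective is at its optimum while the second is not:
print(F(x_a))  # (0.0, 18.25)
```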

4. Objective-Space Normalization and Performance Indicators

Algorithm performance in BBOB and BBOB-biobj is always evaluated on the unnormalized, "raw" function values. However, for fair comparison across objectives and consistent computation of quality indicators (e.g., hypervolume), objective normalization is performed post hoc using the problem's ideal and nadir points:

$$z_{\mathrm{ideal}} = \bigl(\min_x f_\alpha(x),\ \min_x f_\beta(x)\bigr), \qquad z_{\mathrm{nadir}} = \bigl(f_\alpha(x^*_\beta),\ f_\beta(x^*_\alpha)\bigr),$$

where $x^*_\gamma$ denotes the unique minimizer of $f_\gamma$. The corresponding normalization maps the Pareto front to the unit square:

$$\hat f_\alpha(x) = \frac{f_\alpha(x) - z_{\mathrm{ideal},\alpha}}{z_{\mathrm{nadir},\alpha} - z_{\mathrm{ideal},\alpha}}, \qquad \hat f_\beta(x) = \frac{f_\beta(x) - z_{\mathrm{ideal},\beta}}{z_{\mathrm{nadir},\beta} - z_{\mathrm{ideal},\beta}}.$$

Hypervolume and other indicators are computed in this normalized space (Brockhoff et al., 2016).
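The normalization can be illustrated directly from the two extreme Pareto-optimal points; the objective values below are hypothetical:

```python
import numpy as np

# Hypothetical objective values at the two single-objective minimizers:
f_at_xa = np.array([0.0, 7.0])   # (f_alpha(x*_alpha), f_beta(x*_alpha))
f_at_xb = np.array([5.0, 1.0])   # (f_alpha(x*_beta),  f_beta(x*_beta))

z_ideal = np.array([f_at_xa[0], f_at_xb[1]])  # componentwise minima
z_nadir = np.array([f_at_xb[0], f_at_xa[1]])  # objectives at the other minimizer

def normalize(f):
    """Map raw objective values so the ideal point becomes (0,0), the nadir (1,1)."""
    return (np.asarray(f, dtype=float) - z_ideal) / (z_nadir - z_ideal)

print(normalize(z_ideal))  # [0. 0.]
print(normalize(z_nadir))  # [1. 1.]
```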

The principal performance metric for bi-objective optimization is hypervolume-based runtime: the number of function evaluations required to reach specified target hypervolume indicator values (relative to a reference Pareto set aggregated from multiple solvers). This is typically summarized via ECDFs (empirical cumulative distribution functions) across instances and targets (Brockhoff et al., 2016).
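For two objectives, the hypervolume dominated by a nondominated set relative to a reference point can be computed with a simple sweep. This is a minimal sketch in the normalized space with reference point (1, 1), not COCO's implementation:

```python
def hypervolume_2d(points, ref=(1.0, 1.0)):
    """Area dominated by a 2-D point set w.r.t. a reference point (minimization).

    Points at or beyond the reference point contribute nothing; dominated
    points are skipped during the sweep over increasing first objective.
    """
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                          # nondominated in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)  # add the new rectangle strip
            prev_f2 = f2
    return hv

# Two nondominated points in the normalized objective space:
print(hypervolume_2d([(0.25, 0.75), (0.5, 0.25)]))  # 0.4375
```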

5. Experimental Protocols, Instance Handling, and Solver Comparison

The BBOB and BBOB-biobj protocols mandate that each algorithm is run on a predefined, reproducible set of function instances for each function and dimension (Brockhoff et al., 2016, Long et al., 2022). Instance IDs for bi-objective problems are mapped to unique single-objective instance IDs such that Pareto set degeneracies (e.g., coincident optima) or insufficient ideal–nadir separation are avoided. The standard is to use 15 independent instances per function/dimension (Brockhoff et al., 2016).

  • Stochastic solvers: Run multiple times per instance with different random seeds.
  • Deterministic solvers: Run once per instance; variability arises from the randomized transformation defining each instance.

Performance is reported as both means and quantiles across all runs and instances. Distributional differences (in expected running time (ERT), hypervolume-based runtime, etc.) may be further analyzed with nonparametric statistical tests.
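The expected running time (ERT) aggregation divides the total number of evaluations spent across all runs by the number of runs that reached the target; a minimal sketch:

```python
def expected_running_time(evals, successes):
    """ERT: total evaluations across runs divided by the number of successes.

    evals[i] is the evaluation count of run i (the budget used if the run
    failed); successes[i] is True if run i reached the target precision.
    """
    n_succ = sum(successes)
    if n_succ == 0:
        return float("inf")   # target never reached
    return sum(evals) / n_succ

# Three runs hit the target; one exhausted its budget of 1000 evaluations:
print(expected_running_time([120, 300, 1000, 180], [True, True, False, True]))
```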

6. Rationale, Representativeness, and Suite Design Trade-Offs

The chief rationale for the BBOB(-biobj) suite construction is realism and diversity. Rather than artificially synthesizing Pareto fronts or over-emphasizing particular landscape types (such as perfectly separable or highly symmetric functions), BBOB-biobj pairs well-understood single-objective difficulties to reflect the composition found in real black-box multi-objective engineering problems (Brockhoff et al., 2016).

Key design decisions include:

  • Maintaining group structure: Each difficulty class is equally represented, ensuring balanced algorithm assessment.
  • Limiting suite size: Generating all pairs from 24 functions would yield 300 problems (bi-objective), but restricting to 10 representative functions yields 55 (or 92 in the extended set), supporting manageable yet rigorous experimental workloads.

This approach also readily generalizes to more than two objectives; by sampling tuples of functions according to their group structure, suite cardinality can be controlled even in mm-objective settings (Brockhoff et al., 2016).

7. Practical Impact, Extensions, and Software Implementation

The BBOB and BBOB-biobj test suites are fully integrated into the open-source COCO benchmarking platform (https://github.com/numbbo/coco), exposing each function and instance through a uniform interface. Experimental workflows typically involve iterating over all functions, dimensions, and instances, collecting best-so-far indicators and computing normalized performances for subsequent statistical analysis (Brockhoff et al., 2016).

BBOB(-biobj) underpins a broad range of contemporary benchmarking efforts, including large-scale optimization (via block-structured randomized transformations) (Elhara et al., 2019), benchmarking with noise (Loshchilov et al., 2012), and automated design of optimization heuristics via frameworks such as BLADE (Stein et al., 28 Apr 2025). Recent analyses using item response theory reveal that while BBOB covers a wide range of hard problem instances, some functions contribute little discrimination among algorithms, motivating structurally richer or adaptively defined test suites (Mattos et al., 2021, Skvorc et al., 26 Jan 2026).

The test suite's design allows fair, reproducible performance comparisons, supporting progress in continuous black-box optimization and stimulating the development of more robust, general-purpose optimization heuristics.
