Batch Acquisition Function Optimization
- Batch Acquisition Function Optimization is a strategy for selecting multiple query points in parallel to accelerate Bayesian Optimization processes.
- It uses complementary objectives like PI, EI, and LCB to ensure high informativeness and diversity in batch evaluations.
- Frameworks such as MACE show significant speed-ups, achieving up to 74× improvement in unconstrained and 15× in constrained optimization scenarios.
Batch Acquisition Function Optimization is a class of methodologies centered on selecting and optimizing acquisition functions in Bayesian Optimization (BO) for simultaneous, parallel evaluation of multiple query points ("batches") during each optimization round. Batch acquisition is essential in settings with substantial parallel computational resources or expensive function evaluations, such as analog circuit synthesis, where maximizing parallel throughput critically accelerates wall-clock optimization.
1. Parallel Bayesian Optimization and Batching Motivations
In standard sequential BO, the process iterates as follows: fit a Gaussian process (GP) model to known evaluations, maximize a scalar acquisition function $\alpha(x)$ over candidate points $x$, evaluate the expensive black-box function at the chosen point, and update the dataset. This sequential protocol, however, is inefficient when modern simulators or experimental platforms enable concurrent evaluations. Batch acquisition optimization generalizes this protocol by proposing a set of points to be evaluated in parallel at each iteration, aiming to maximize the total information gain while promoting diversity among batch members.
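As a concrete stand-in for the surrogate-fitting step, the following is a minimal zero-mean GP posterior with a fixed RBF kernel. The function name `gp_posterior` and the fixed `lengthscale`/`noise` values are assumptions for illustration; real BO stacks also optimize kernel hyperparameters by marginal likelihood, which is omitted here.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_query, lengthscale=1.0, noise=1e-6):
    """Posterior mean/std of a zero-mean GP with a unit-variance RBF kernel.

    A minimal sketch of the surrogate used in each BO round; kernel
    hyperparameter optimization is deliberately omitted.
    """
    def rbf(A, B):
        # Squared Euclidean distances between all row pairs of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_query, X_train)                 # cross-covariance
    alpha = np.linalg.solve(K, y_train)
    mu = Ks @ alpha                            # posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum("ij,ji->i", Ks, v)   # posterior variance (prior var = 1)
    return mu, np.sqrt(np.maximum(var, 1e-12))
```

At an already-evaluated point the posterior mean reproduces the observation and the posterior standard deviation collapses toward the noise level, which is the behavior the acquisition functions below rely on.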
Critical challenges for batch acquisition optimization are:
- C1: Maximizing informativeness—each batch point should provide high expected improvement or information gain.
- C2: Promoting diversity—avoid redundant points, ensure coverage over the plausible solution space.
2. Classical and Multi-objective Acquisition Functions
Central acquisition functions in BO (stated here under a minimization convention) include:
- Probability of Improvement (PI):
  $\mathrm{PI}(x) = \Phi\!\left( \frac{\tau - \mu(x)}{\sigma(x)} \right)$
  where $\tau$ is the current best observed value, $\mu(x)$ and $\sigma(x)$ are the GP posterior mean and standard deviation respectively, and $\Phi$ is the standard normal CDF.
- Expected Improvement (EI):
  $\mathrm{EI}(x) = \sigma(x)\left( \lambda\,\Phi(\lambda) + \phi(\lambda) \right), \quad \lambda = \frac{\tau - \mu(x)}{\sigma(x)},$
  where $\phi$ is the standard normal PDF.
- Lower Confidence Bound (LCB):
  $\mathrm{LCB}(x) = \mu(x) - \kappa\,\sigma(x),$
  with $\kappa > 0$ set according to the desired exploration level.
Optimizing any one of these functions independently for batch selection typically produces correlated or spatially redundant points.
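The three acquisition functions above can be computed in a few lines from the GP posterior. This sketch uses the minimization convention of the formulas above; `tau` denotes the best observed value, and the default `kappa` is an assumption (in practice it is often annealed).

```python
import numpy as np
from scipy.stats import norm

def acquisitions(mu, sigma, tau, kappa=2.0):
    """PI, EI, and LCB for a minimization problem.

    mu, sigma: GP posterior mean/std at the query points;
    tau: best observed value; kappa: exploration weight (assumed fixed here).
    """
    z = (tau - mu) / sigma
    pi = norm.cdf(z)                               # probability of improvement
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))   # expected improvement
    lcb = mu - kappa * sigma                       # lower confidence bound
    return pi, ei, lcb
```

Note the differing scales: PI is a probability, while EI and LCB carry the units of the objective; Pareto-based selection (below) sidesteps this mismatch.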
3. Multi-objective Acquisition Ensemble (MACE) Framework
The MACE paradigm (Zhang et al., 2021) casts batch acquisition as a multi-objective optimization over several standard acquisition functions (PI, EI, LCB). Rather than scalarizing, MACE treats
$\min_x \left( \mathrm{LCB}(x),\ -\mathrm{EI}(x),\ -\mathrm{PI}(x) \right)$
as a three-objective minimization problem. At each iteration, MACE applies a multi-objective optimizer (DEMO) to approximate the Pareto front of non-dominated solutions over these objectives. Uniform sampling from this Pareto set effectively generates a batch of diverse, informative points. This approach eliminates the need for manual repulsion terms or weightings and automatically balances exploration vs. exploitation without normalization artifacts.
Algorithmic structure (Unconstrained MACE):
- Initialize with random samples.
- For each BO round:
- Fit GP surrogate to the current dataset.
- Solve the multi-objective subproblem for PI, EI, LCB.
- Sample a batch of points uniformly at random from the Pareto front.
- Evaluate at these points in parallel; update the dataset.
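The batch-selection step above can be sketched as follows. This is a simplified stand-in: GP posteriors `mu`/`sigma` are assumed precomputed at a pool of random candidates, and a plain non-dominated filter replaces the DEMO evolutionary optimizer that MACE actually uses; all function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized)."""
    keep = []
    for i in range(len(F)):
        dominated = np.any(np.all(F <= F[i], axis=1) &
                           np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

def mace_batch(mu, sigma, tau, X_cand, batch_size, kappa=2.0):
    """One MACE-style round: Pareto-filter candidates on
    (LCB, -EI, -PI), then sample the batch uniformly from the front."""
    z = (tau - mu) / sigma
    pi = norm.cdf(z)
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
    lcb = mu - kappa * sigma
    F = np.column_stack([lcb, -ei, -pi])   # all three minimized
    front = pareto_front(F)
    pick = rng.choice(front, size=min(batch_size, len(front)),
                      replace=False)
    return X_cand[pick]
```

Because the Pareto front is never empty, this always returns at least one point; with heterogeneous posterior uncertainty the front typically contains many points, from which the batch is drawn without replacement.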
4. Handling Constraints: Feasibility-First Two-Stage MACE Extension
In constrained optimization, where a solution $x$ must satisfy black-box inequality constraints $c_i(x) \le 0$, $i = 1, \dots, m$, MACE deploys a two-stage adaptation:
- Stage 1: Focuses exclusively on locating at least one feasible point by optimizing a 3-objective ensemble built around the probability of feasibility
  $\mathrm{PF}(x) = \prod_i \Pr\left( c_i(x) \le 0 \right),$
  the product of constraint-satisfaction probabilities (estimated via the constraint GPs). This expedites finding the feasible region.
- Stage 2: Once feasibility is achieved, a 6-objective ensemble is solved, including PI, EI, LCB and constraint-penalization terms. Candidate points with high constraint violation are pruned.
Uniform sampling from the resulting Pareto set concentrates batch points in feasible, informative regions.
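The Stage-1 feasibility probability can be sketched as below, assuming an independent GP posterior per constraint (the function name and array layout are illustrative, not from the source).

```python
import numpy as np
from scipy.stats import norm

def prob_feasible(mu_c, sigma_c):
    """Probability that every constraint c_i(x) <= 0 holds.

    mu_c, sigma_c: per-constraint GP posterior means/stds,
    shaped (num_constraints, num_query_points); constraints are
    treated as independent, so the probabilities multiply.
    """
    # Pr(c_i(x) <= 0) = Phi((0 - mu_i(x)) / sigma_i(x)) per constraint.
    return np.prod(norm.cdf(-mu_c / sigma_c), axis=0)
```

Maximizing this quantity (or minimizing its negation in the multi-objective ensemble) steers Stage 1 toward the feasible region before any objective refinement happens.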
5. Optimization Algorithms and Diversity
The core batch selection in MACE leverages multi-objective evolutionary optimizers (such as DEMO) for Pareto-front approximation. Random sampling from this front naturally yields diversity without explicit penalization, a common difficulty in traditional scalarized batch acquisition methods. No normalization or scale calibration is required: Pareto dominance compares each objective only against itself, so the differing scales of PI, EI, and LCB cannot bias the selection.
For constrained versions, penalization terms and feasibility probabilities are directly embedded in the multi-objective optimization, with aggressive pruning of infeasible candidates to prioritize efficient batch evaluation.
6. Experimental Performance Assessment
Experimental results for analog circuit synthesis demonstrate:
- Unconstrained case: MACE achieved up to a 74× reduction in simulation time relative to differential evolution (DE), with equal or superior final objective values.
- Constrained case: MACE attained up to a 15× speedup compared to weighted-EI Bayesian optimization (WEIBO), again with equivalent or improved final solutions.
Batch-size analysis revealed that larger batches increase parallel efficiency and reduce wall-clock time, with negligible quality degradation as batch size grows, owing to the robustness of Pareto-front sampling.
7. Practical Guidelines and Transferability
- Acquisition function selection: Choose a small, complementary set (e.g., PI, EI, LCB).
- Multi-objective optimizer: Use a lightweight algorithm to approximate Pareto optimality over acquisition values.
- Batch formation: Sample batches from the Pareto set, sized to match available parallel resources.
- Constrained problems: Apply a two-stage approach, focusing on feasibility identification before full penalized ensemble optimization.
The MACE framework generalizes robustly to other expensive black-box domains beyond analog circuit synthesis. Its multi-objective, Pareto-ensemble principle and feasibility-prioritized adaptation for constraints enable efficient batch Bayesian optimization with minimal manual tuning.
Summary Table: Core Batch Acquisition Mechanisms
| Method | Objective(s) | Diversity Mechanism | Constraint Handling | Optimizer |
|---|---|---|---|---|
| MACE (Zhang et al., 2021) | PI, EI, LCB (multi-objective) | Pareto set sampling | Two-stage, penalized ensemble | DEMO |
| Scalarized BO | Weighted sum/scalarization | Manual repulsion | Scalarized constraints | Various |
Emphasis on multi-objective acquisition and Pareto-based batch selection distinguishes MACE as a principled approach that effectively balances exploration, exploitation, and diversity in parallel Bayesian optimization, with significant empirical speed-ups and generality for both unconstrained and constrained domains (Zhang et al., 2021).