Bare Bones Grey Wolf Optimizer (BBGWO)
- BBGWO is a probabilistic reformulation of the Grey Wolf Optimizer that uses normal sampling based on moment matching to replace multi-stage deterministic updates.
- It simplifies the algorithm while maintaining an effective exploration–exploitation balance, as demonstrated on multiple benchmark optimization tests.
- BBGWO offers enhanced analytical tractability and parameter parsimony, making it suitable for diverse applications in engineering design and global optimization.
The Bare Bones Grey Wolf Optimizer (BBGWO) is a probabilistic reformulation of the standard Grey Wolf Optimizer, designed to retain GWO’s population-based metaheuristic framework while greatly simplifying its core update mechanism. BBGWO replaces the original’s sequence of deterministic vector operations with direct sampling from a normal distribution whose parameters are analytically matched to those of the GWO update, leveraging a bare bones approach first introduced for particle swarm optimization. This yields an algorithm that maintains GWO’s exploration–exploitation balance, has enhanced analytical tractability, and enables explicit stochastic modeling without degradation in empirical search performance (Wang et al., 2021).
1. Foundations and Distinction from Standard GWO
BBGWO is constructed by transferring the bare bones paradigm—sample-based search updates parameterized by distribution moments—into the GWO context. In the canonical GWO, each agent (wolf) in the population updates its position by aggregating three influential leaders, using weighted vector arithmetic involving random coefficients drawn from uniform distributions and a linearly decaying step parameter $a$. For a wolf $i$ in dimension $j$, the update is:

$$x_{i,j}(t+1) = \frac{X_1 + X_2 + X_3}{3}, \qquad X_k = x_{l_k,j} - A_k \left| C_k\, x_{l_k,j} - x_{i,j}(t) \right|,$$

where

$$A_k = 2a\,r_{1,k} - a, \qquad C_k = 2\,r_{2,k}, \qquad r_{1,k},\, r_{2,k} \sim U(0,1),$$

with $l_1 = \alpha$, $l_2 = \beta$, and $l_3 = \gamma$ denoting the three best solutions.
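The canonical GWO step described above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' reference code; the function and variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, fitness, a):
    """One canonical GWO update: each wolf moves toward the average of
    three leader-guided positions built with random coefficients A and C."""
    order = np.argsort([fitness(x) for x in X])
    leaders = X[order[:3]]                       # alpha, beta, gamma
    X_new = np.empty_like(X)
    for i, x in enumerate(X):
        parts = []
        for xl in leaders:
            A = 2 * a * rng.random(x.shape) - a  # A ~ U(-a, a)
            C = 2 * rng.random(x.shape)          # C ~ U(0, 2)
            D = np.abs(C * xl - x)               # distance to leader
            parts.append(xl - A * D)             # leader-guided component X_k
        X_new[i] = np.mean(parts, axis=0)        # aggregate the three leaders
    return X_new
```

Each wolf requires three coefficient draws and three vector operations per step; BBGWO's reformulation replaces this entire inner loop with a single normal draw per coordinate.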
The defining departure in BBGWO is the elimination of the multi-stage arithmetic update. Instead, each coordinate is sampled directly from a normal distribution

$$x_{i,j}(t+1) \sim \mathcal{N}\!\left(\mu_{i,j},\, \sigma_{i,j}^2\right),$$

where $\mu_{i,j}$ and $\sigma_{i,j}^2$ are derived to exactly match the mean and variance of the one-step GWO update.
2. Mathematical Derivation of the BBGWO Update
The analytical foundation of BBGWO relies on explicit calculation of the distributional characteristics of the GWO’s update rule. For each coordinate, the PDF of the leader-guided component $X_k$ is shown to be symmetric about the leader’s position and to have:

$$\mathbb{E}[X_k] = x_{l_k,j}, \qquad \mathrm{Var}(X_k) = \frac{a^2}{3}\,\mathbb{E}\!\left[D_k^2\right], \qquad D_k = \left| C_k\, x_{l_k,j} - x_{i,j}(t) \right|.$$

Since the actual update is the mean of three such random variables (one for each leader), the combined mean and variance are:

$$\mu_{i,j} = \frac{x_{\alpha,j} + x_{\beta,j} + x_{\gamma,j}}{3}, \qquad \sigma_{i,j}^2 = \frac{1}{9}\sum_{k=1}^{3}\mathrm{Var}(X_k).$$

The normal approximation is justified by unimodality and symmetry, supported by the Central Limit Theorem for aggregated updates. In practice, the variance is modulated by the step parameter $a$ as:

$$\sigma_{i,j}^2 = \frac{a^2}{27}\sum_{k=1}^{3}\mathbb{E}\!\left[D_k^2\right],$$

so that $\sigma_{i,j}^2 \to 0$ as $a \to 0$. This direct sampling update collapses computational effort to a single stochastic draw per coordinate.
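The moment matching can be verified numerically. The sketch below compares Monte Carlo moments of the exact one-step GWO update against closed-form values derived under the assumption of independent uniform coefficients $A \sim U(-a, a)$ and $C \sim U(0, 2)$; the leader and wolf coordinates are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-dimensional setup: three leader coordinates, one wolf.
x_l = np.array([1.0, 1.5, 2.5])   # alpha, beta, gamma coordinates (illustrative)
x = 4.0                            # current wolf coordinate
a = 1.2                            # step parameter

# Monte Carlo moments of the exact GWO one-step update.
n = 200_000
A = 2 * a * rng.random((n, 3)) - a           # A ~ U(-a, a)
C = 2 * rng.random((n, 3))                   # C ~ U(0, 2)
samples = np.mean(x_l - A * np.abs(C * x_l - x), axis=1)

# Closed-form moment matching used by the BBGWO sampling step:
mu = x_l.mean()
# E[D_k^2] = Var(C) x_l^2 + (E[C] x_l - x)^2, with Var(C) = 1/3, E[C] = 1
E_D2 = x_l**2 / 3 + (x_l - x) ** 2
sigma2 = (a**2 / 27) * E_D2.sum()

print(mu, samples.mean())       # the two means should agree closely
print(sigma2, samples.var())    # the two variances should agree closely
```

Agreement of the empirical and analytical moments is what licenses replacing the full GWO update with a single normal draw.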
3. Algorithmic Workflow
A detailed pseudocode for BBGWO is presented, distinguishing the method from standard GWO solely by entry (4c) in the main iteration loop:
| Step | Action | Notes |
|---|---|---|
| 1 | Initialize population | Random uniform spread |
| 2 | Evaluate fitness | As per benchmark or application |
| 3 | Identify top three wolves | Sorted by fitness |
| 4 | For $t = 1$ to $T$: | Iterative update |
| 4a | Compute $\mu_{i,j}$ for all wolves and dimensions | Mean of leaders |
| 4b | Compute variance vector $\sigma_{i,j}^2$ for all wolves/dimensions | Variance via derived formula |
| 4c | Sample $x_{i,j} \sim \mathcal{N}(\mu_{i,j}, \sigma_{i,j}^2)$ | Bare bones update step |
| 5 | Evaluate new fitness, update leaders, decay $a$ | |
| 6 | Return best solution | |
The only modification from the original GWO is substitution of stochastic sampling for the vector update; all other controls (population sizing, step decay, leader evaluation) are retained.
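The workflow above can be sketched end to end as follows. This is an illustrative implementation under the moment-matching assumptions discussed earlier, not the authors' reference code; the function signature, defaults, and variance formula are assumptions:

```python
import numpy as np

def bbgwo(fitness, dim, bounds, pop=30, iters=500, seed=0):
    """Illustrative BBGWO sketch: the multi-stage GWO vector update is
    replaced by a single normal draw per coordinate (step 4c)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))          # step 1: init population
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.apply_along_axis(fitness, 1, X)        # step 2: evaluate
        order = np.argsort(f)
        leaders = X[order[:3]]                        # step 3: alpha, beta, gamma
        if f[order[0]] < best_f:
            best_f, best_x = f[order[0]], X[order[0]].copy()
        a = 2 * (1 - t / iters)                       # linear decay 2 -> 0
        mu = leaders.mean(axis=0)                     # step 4a: mean of leaders
        # step 4b: moment-matched variance (illustrative closed form, assuming
        # independent uniform coefficients A ~ U(-a, a), C ~ U(0, 2))
        E_D2 = leaders**2 / 3 + (leaders - X[:, None, :]) ** 2  # (pop, 3, dim)
        sigma2 = (a**2 / 27) * E_D2.sum(axis=1)
        X = rng.normal(mu, np.sqrt(sigma2))           # step 4c: bare bones draw
        X = np.clip(X, lo, hi)
    return best_x, best_f
```

For example, `bbgwo(lambda v: float(np.sum(v**2)), dim=2, bounds=(-5, 5))` drives the Sphere function toward its optimum at the origin as the variance contracts.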
4. Theoretical Properties
Exact derivation demonstrates that the GWO update distribution is symmetric and unimodal, centered at the mean of the three leaders. Both the mean and variance are available in closed form, validating the appropriateness of the moment-matching normal approximation. As the three leading wolves approach one another (i.e., converge toward a common point) and the step parameter $a$ decays, the update variance contracts toward zero, ensuring the method inherently supports exploitation as convergence proceeds. While the framework furnishes a concrete probabilistic model for the search process, no mathematically rigorous proof of global convergence is supplied; the analysis focuses on stochastic properties and moment matching (Wang et al., 2021).
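The contraction behavior can be checked with a small numerical example, using an illustrative closed form for the moment-matched variance (assuming canonical uniform GWO coefficients; the specific leader positions are hypothetical):

```python
import numpy as np

def sigma2(leaders, x, a):
    """Moment-matched per-coordinate variance, illustrative closed form:
    sigma^2 = (a^2/27) * sum_k [x_k^2/3 + (x_k - x)^2]."""
    leaders = np.asarray(leaders, float)
    return (a**2 / 27) * np.sum(leaders**2 / 3 + (leaders - x) ** 2)

spread = sigma2([1.0, 2.0, 3.0], 0.0, a=2.0)   # dispersed leaders, early (large) a
tight = sigma2([1.0, 1.0, 1.0], 1.0, a=0.1)    # converged pack, late (small) a
print(spread > tight)  # variance shrinks as the pack converges and a decays
```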
5. Empirical Evaluation and Performance Comparison
BBGWO’s performance is quantitatively assessed using 12 classical test functions (Sphere, Schwefel 2.22, Rosenbrock, Step, Rastrigin, Ackley, Griewank, Levy, Alpine, Dixon–Price, Michalewicz, Schwefel 2.25), each evaluated over 30 independent trials. Standard parameterization involves a fixed population size and iteration budget, with linear annealing of the variance-scaling parameter $a$ from 2 to 0.
Metrics reported are:
- Average best-fitness across runs
- Variance of best-fitness
- Success rate, defined as the fraction of runs whose final best fitness falls within a prescribed tolerance of the global optimum
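Under assumed names and an assumed success threshold (the paper's exact tolerance is not reproduced here), the three reported metrics can be aggregated over a set of independent runs as follows:

```python
import numpy as np

def trial_stats(best_fits, eps=1e-6):
    """Hypothetical aggregation of the three reported metrics over
    independent runs; eps is an assumed success threshold."""
    best_fits = np.asarray(best_fits, float)
    return {
        "mean_best": best_fits.mean(),            # average best-fitness
        "var_best": best_fits.var(ddof=1),        # variance of best-fitness
        "success_rate": np.mean(best_fits < eps), # fraction of successful runs
    }
```

For example, `trial_stats([0.0, 1e-7, 2.0])` reports a success rate of 2/3, since two of the three runs fall below the threshold.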
Empirical findings show that, for all functions on which GWO attains the global optimum consistently, BBGWO matches its success rate and output statistics. On more difficult benchmarks (e.g., Step and Dixon–Price), BBGWO exhibits failure rates closely comparable to those of GWO. Convergence analysis and statistical tabulation affirm that BBGWO's search dynamics and solution quality are “nearly identical” to GWO (Wang et al., 2021).
6. Analytical Advantages, Limitations, and Use Cases
BBGWO confers several benefits:
- Simplicity: The multi-stage vector arithmetic of GWO is replaced by a single normal sampling step per component.
- Analytical tractability: The approach provides explicit, closed-form characterization of the search dynamics.
- Parameter parsimony: The method inherits the low parameter count of GWO, introducing no additional tunables.
- Performance preservation: Empirical evaluation confirms parity with GWO across standard global optimization tests.
Limitations are primarily associated with the modeling simplification:
- Approximation error: The Central Limit/Gaussian assumption may introduce errors when the top three leaders are widely dispersed.
- Lack of guaranteed convergence: As is standard for metaheuristics, BBGWO does not furnish formal global convergence proofs, relying instead on exact moment matching.
Potential applications encompass any continuous global optimization context where GWO has demonstrated efficacy, including economic dispatch in power systems, image thresholding, clustering, and engineering design. BBGWO serves dual roles as both a direct optimizer and as a theoretical vehicle for rigorous stochastic analysis of GWO dynamics (Wang et al., 2021).