
Bare Bones Grey Wolf Optimizer (BBGWO)

Updated 10 February 2026
  • BBGWO is a probabilistic reformulation of the Grey Wolf Optimizer that uses normal sampling based on moment matching to replace multi-stage deterministic updates.
  • It simplifies the algorithm while maintaining an effective exploration–exploitation balance, as demonstrated on multiple benchmark optimization tests.
  • BBGWO offers enhanced analytical tractability and parameter parsimony, making it suitable for diverse applications in engineering design and global optimization.

The Bare Bones Grey Wolf Optimizer (BBGWO) is a probabilistic reformulation of the standard Grey Wolf Optimizer, designed to retain GWO’s population-based metaheuristic framework while greatly simplifying its core update mechanism. BBGWO replaces the original’s sequence of deterministic vector operations with direct sampling from a normal distribution whose parameters are analytically matched to those of the GWO update, leveraging a bare bones approach first introduced for particle swarm optimization. This yields an algorithm that maintains GWO’s exploration–exploitation balance, has enhanced analytical tractability, and enables explicit stochastic modeling without degradation in empirical search performance (Wang et al., 2021).

1. Foundations and Distinction from Standard GWO

BBGWO is constructed by transferring the bare bones paradigm—sample-based search updates parameterized by distribution moments—into the GWO context. In the canonical GWO, each agent (wolf) in the population updates its position by aggregating three influential leaders, using weighted vector arithmetic involving random coefficients drawn from uniform distributions and a linearly decaying step parameter $a$. For a wolf $X_i$ in dimension $j$, the update is

$$X_{ij}(t+1) = \frac{1}{3} \sum_{k=1}^{3} X'_{kj}(t),$$

where

$$X'_{kj}(t) = P_{kj}(t) - A_{kj} C_{kj} \left( P_{kj}(t) - X_{ij}(t) \right),$$

with $A_{kj} \sim U[-a, a]$, $C_{kj} \sim U[0, 2]$, and $P_k$ denoting the three best solutions.
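In code, the multi-stage update above can be sketched as follows (a minimal NumPy illustration; the function name and array layout are assumptions, not from the paper):

```python
import numpy as np

def gwo_update(X_i, leaders, a, rng=None):
    """One canonical GWO position update for a single wolf.

    X_i     : (d,) current position of the wolf
    leaders : (3, d) positions P_1, P_2, P_3 of the three best wolves
    a       : linearly decaying step parameter (2 -> 0)
    """
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(-a, a, size=leaders.shape)      # A_kj ~ U[-a, a]
    C = rng.uniform(0.0, 2.0, size=leaders.shape)   # C_kj ~ U[0, 2]
    candidates = leaders - A * C * (leaders - X_i)  # X'_kj, one per leader
    return candidates.mean(axis=0)                  # average of the three candidates
```

Note that the new position depends on the current one only through the three leader-relative terms, which is what makes the distributional analysis in the next section tractable.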

The defining departure in BBGWO is the elimination of the multi-stage arithmetic update. Instead, each coordinate $X_{ij}(t+1)$ is sampled directly from a normal distribution

$$X_{ij}(t+1) \sim \mathcal{N}(\mu_{ij}, \sigma_{ij}^2),$$

where $\mu_{ij}$ and $\sigma_{ij}^2$ are derived to exactly match the mean and variance of the one-step GWO update.

2. Mathematical Derivation of the BBGWO Update

The analytical foundation of BBGWO relies on explicit calculation of the distributional characteristics of the GWO’s update rule. For each coordinate, the PDF of $X'_{kj}$ is shown to be symmetric about the leader’s position and to have

$$\mathbb{E}[X'_{kj}] = P_{kj},$$

$$\mathrm{Var}[X'_{kj}] = \frac{2}{3} \left( (X_{ij} - P_{kj})^2 + 2 P_{kj} \right).$$

Since the actual update is the mean of three such random variables (one per leader), the combined mean and variance are

$$\mathbb{E}[X_{ij}(t+1)] = \frac{1}{3} \sum_{k=1}^{3} P_{kj},$$

$$\mathrm{Var}[X_{ij}(t+1)] = \frac{2}{27} \sum_{k=1}^{3} \left( (X_{ij} - P_{kj})^2 + 2 P_{kj} \right).$$

The normal approximation is justified by unimodality and symmetry, supported by the Central Limit Theorem for aggregated updates. In practice, the variance is modulated by the step parameter $a$ as

$$\sigma_{ij} = \frac{a}{3\sqrt{3}} \sqrt{ \sum_{k=1}^{3} \left( (X_{ij} - P_{kj})^2 + 2 P_{kj} \right) }.$$

This direct sampling update collapses the computational effort to a single stochastic draw per coordinate.
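The closed-form moments translate directly into a per-coordinate computation. A minimal sketch (the clamp on the radicand, which can go negative for negative leader coordinates, is a practical safeguard assumed here, not taken from the source):

```python
import numpy as np

def bbgwo_moments(X_i, leaders, a):
    """Mean and standard deviation of the BBGWO sampling distribution.

    mu_j    = (1/3) * sum_k P_kj
    sigma_j = a / (3*sqrt(3)) * sqrt( sum_k ((X_ij - P_kj)^2 + 2*P_kj) )
    """
    mu = leaders.mean(axis=0)
    s = np.sum((X_i - leaders) ** 2 + 2.0 * leaders, axis=0)
    # Clamp: the radicand can be negative when leader coordinates are
    # negative (an implementation assumption, not part of the derivation).
    sigma = a / (3.0 * np.sqrt(3.0)) * np.sqrt(np.maximum(s, 0.0))
    return mu, sigma
```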

3. Algorithmic Workflow

A detailed pseudocode for BBGWO is presented, distinguishing the method from standard GWO solely by entry (4c) in the main iteration loop:

| Step | Action | Notes |
| --- | --- | --- |
| 1 | Initialize population $\{X_i\}_{i=1}^{N}$ | Random uniform spread |
| 2 | Evaluate fitness $f(X_i)$ | As per benchmark or application |
| 3 | Identify top three wolves $\{P_1, P_2, P_3\}$ | Sorted by fitness |
| 4 | For $t = 1$ to $T$: | Iterative update |
| 4a | Compute $\mu_{ij}$ for all wolves and dimensions | Mean of leaders |
| 4b | Compute $\sigma_{ij}$ for all wolves and dimensions | Variance via derived formula |
| 4c | Sample $X_{ij}(t+1) \sim \mathcal{N}(\mu_{ij}, \sigma_{ij}^2)$ | Bare bones update step |
| 5 | Evaluate new fitness, update leaders, decay $a$ | $a = 2(1 - t/T)$ |
| 6 | Return best solution $P_1$ | |

The only modification from the original GWO is substitution of stochastic sampling for the vector update; all other controls (population sizing, step decay, leader evaluation) are retained.
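The workflow in the table can be sketched end to end. This is an illustrative NumPy reading of the pseudocode under the quoted settings ($N = 20$, $T = 500$, linear decay of $a$); the boundary clipping and the radicand clamp are assumptions, not details from the source:

```python
import numpy as np

def bbgwo(f, bounds, n=20, T=500, seed=0):
    """Minimal BBGWO sketch: minimize f over a box [low, high]^d."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    X = rng.uniform(low, high, size=(n, low.size))       # step 1: initialize
    fit = np.apply_along_axis(f, 1, X)                   # step 2: evaluate
    for t in range(T):                                   # step 4: main loop
        a = 2.0 * (1.0 - t / T)                          # step 5: decay a
        leaders = X[np.argsort(fit)[:3]]                 # step 3: top three wolves
        mu = leaders.mean(axis=0)                        # 4a: mean of leaders
        s = np.sum((X[:, None, :] - leaders) ** 2 + 2.0 * leaders, axis=1)
        sigma = a / (3.0 * np.sqrt(3.0)) * np.sqrt(np.maximum(s, 0.0))  # 4b
        X = np.clip(rng.normal(mu, sigma), low, high)    # 4c: bare bones draw
        fit = np.apply_along_axis(f, 1, X)               # re-evaluate
    best = np.argmin(fit)
    return X[best], float(fit[best])                     # step 6: best solution
```

For example, `bbgwo(lambda x: float(np.sum(x * x)), (np.full(2, -5.0), np.full(2, 5.0)))` minimizes the 2-D Sphere function.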

4. Theoretical Properties

Exact derivation demonstrates that the GWO update distribution is symmetric and unimodal, centered at the mean of the leaders. Both the mean and variance are available in closed form, validating the appropriateness of the moment-matching normal approximation. As the three leading wolves $P_1, P_2, P_3$ approach a common point, the update variance $\sigma_{ij}^2$ contracts to zero, ensuring the method inherently supports exploitation as convergence proceeds. While the framework furnishes a concrete probabilistic model for the search process, no mathematically rigorous proof of global convergence is supplied; the analysis focuses on stochastic properties and moment matching (Wang et al., 2021).
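Taking the derived $\sigma_{ij}$ formula at face value, a small numerical check illustrates the contraction property: when the wolf and the three leaders collapse onto a common point at the origin (so the $2P_{kj}$ term also vanishes), or once $a$ has annealed to zero, the sampling spread goes to zero. The helper below is illustrative, not from the source:

```python
import numpy as np

def spread(X_i, leaders, a):
    # sigma_ij from the moment-matching formula (radicand clamped at zero)
    s = np.sum((X_i - leaders) ** 2 + 2.0 * leaders, axis=0)
    return a / (3.0 * np.sqrt(3.0)) * np.sqrt(np.maximum(s, 0.0))

# Spread shrinks as the population converges toward the origin...
for eps in (1.0, 1e-2, 1e-4):
    print(eps, spread(np.full(2, eps), np.full((3, 2), eps), a=2.0))
# ...and vanishes outright once a has decayed to zero.
print(spread(np.full(2, 1.0), np.full((3, 2), 1.0), a=0.0))
```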

5. Empirical Evaluation and Performance Comparison

BBGWO’s performance is quantitatively assessed using 12 classical test functions (Sphere, Schwefel 2.22, Rosenbrock, Step, Rastrigin, Ackley, Griewank, Levy, Alpine, Dixon–Price, Michalewicz, Schwefel 2.25), each evaluated over 30 independent trials. Standard parameterization involves a population size $N = 20$, $T = 500$ iterations, and linear annealing of the variance-scaling parameter $a$ from 2 to 0.

Metrics reported are:

  • Average best-fitness across runs
  • Variance of best-fitness
  • Success rate, defined as the number of runs achieving $|f_{\mathrm{found}} - f_{\min}| < 10^{-3}$
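The success-rate criterion can be computed directly; a minimal helper, expressed here as a fraction of runs, with the threshold $10^{-3}$ following the definition above:

```python
import numpy as np

def success_rate(best_fitnesses, f_min=0.0, tol=1e-3):
    """Fraction of runs whose best fitness lies within tol of the optimum."""
    best = np.asarray(best_fitnesses, dtype=float)
    return float(np.mean(np.abs(best - f_min) < tol))

print(success_rate([1e-4, 5e-4, 0.1]))  # two of the three runs succeed
```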

Empirical findings show that, on all functions where GWO consistently attains the global optimum, BBGWO matches its success rate and output statistics. On more difficult benchmarks (e.g., Step and Dixon–Price), BBGWO exhibits failure rates closely matching those of GWO. Convergence analysis and statistical tabulation affirm that BBGWO’s search dynamics and solution quality are “nearly identical” to GWO’s (Wang et al., 2021).

6. Analytical Advantages, Limitations, and Use Cases

BBGWO confers several benefits:

  • Simplicity: The multi-stage and arithmetic complexity of GWO is replaced by a single normal sampling step per component.
  • Analytical tractability: The approach provides explicit, closed-form characterization of the search dynamics.
  • Parameter parsimony: The method inherits the low parameter count of GWO, introducing no additional tunables.
  • Performance preservation: Empirical evaluation confirms parity with GWO across standard global optimization tests.

Limitations are primarily associated with the modeling simplification:

  • Approximation error: The Central Limit/Gaussian assumption may introduce errors when the top three leaders are widely dispersed.
  • Lack of guaranteed convergence: As is standard for metaheuristics, BBGWO does not furnish formal global convergence proofs, relying instead on exact moment matching.

Potential applications encompass any continuous global optimization context where GWO has demonstrated efficacy, including economic dispatch in power systems, image thresholding, clustering, and engineering design. BBGWO serves dual roles as both a direct optimizer and as a theoretical vehicle for rigorous stochastic analysis of GWO dynamics (Wang et al., 2021).

