
Bi-Parametric Parallel Weierstrass Scheme

Updated 27 January 2026
  • The paper introduces a bi-parametric approach that refines the classical Weierstrass method by integrating adaptive Newton corrections for solving nonlinear equations.
  • It achieves genuine third-order convergence, with parameters tuned via step-log contraction profiling and ensemble-based, training-free optimization.
  • The scheme significantly reduces iterations and CPU time by leveraging parallel processing and systematic stability metrics in multicore environments.

The bi-parametric parallel Weierstrass-type scheme is a class of iterative algorithms for the high-efficiency, robust solution of systems of nonlinear equations, in particular the simultaneous localization of all distinct roots of a complex polynomial. By combining advanced correction strategies (both Weierstrass- and Newton-type) with principled, lightweight parameter tuning via direct finite-time contraction analysis, these schemes attain genuine third-order convergence and are well suited to multicore and parallel computing environments. Two parameters, typically denoted $\alpha$ and $\beta$, enable fine control of algorithmic stability and convergence rate; they are optimized systematically, without training, via step-log contraction profiling over randomized launch ensembles (Shams et al., 20 Jan 2026).

1. Algorithm Definition and Update Structure

Let $f:\mathbb{C}\to\mathbb{C}$ be a polynomial of degree $n$; the goal is to compute all simple roots $\zeta_1,\dots,\zeta_n$ in parallel. At each iteration $h$, the current root approximations are stored in

$$\mathbf x^{[h]} = (x_1^{[h]}, x_2^{[h]}, \dots, x_n^{[h]})^\top.$$

Two real parameters, $\alpha$ and $\beta$, govern the predictor-corrector steps. The precise iterative updates of the SAB[3] scheme are:

  • Predictor (Weierstrass–Newton fractional correction):

$$z_i^{[h]} = x_i^{[h]} - \frac{f(x_i^{[h]})}{f'(x_i^{[h]})} \cdot \frac{1}{1 + \dfrac{\alpha f(x_i^{[h]})}{1 + \beta f(x_i^{[h]})}}, \quad i=1,\dots,n$$

  • Corrector (Weierstrass parallel product):

$$x_i^{[h+1]} = x_i^{[h]} - \frac{f(x_i^{[h]})}{\prod_{j=1,\; j\ne i}^{n} \left(x_i^{[h]} - z_j^{[h]}\right)}$$

Equivalently, in operator form: $\mathbf x^{[h+1]} = \mathbf x^{[h]} - D_W(\mathbf x^{[h]}, \mathbf z^{[h]})^{-1} f(\mathbf x^{[h]})$, where $D_W$ is the diagonal Weierstrass-denominator operator.
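The two-stage predictor-corrector sweep can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the helper name `sab3_step`, the toy cubic, the parameter values, and the starting guesses are all assumptions chosen for demonstration.

```python
import numpy as np

def sab3_step(x, f, df, alpha, beta):
    """One SAB[3] predictor-corrector sweep over the current root estimates x.

    Predictor: Newton step damped by the bi-parametric fraction in (alpha, beta).
    Corrector: Weierstrass parallel product built from the predicted points z_j.
    """
    fx = f(x)
    # Predictor: fractional Weierstrass-Newton correction.
    z = x - fx / df(x) / (1.0 + alpha * fx / (1.0 + beta * fx))
    # Corrector: denominator is the product of (x_i - z_j) over j != i.
    diff = x[:, None] - z[None, :]      # diff[i, j] = x_i - z_j
    np.fill_diagonal(diff, 1.0)         # exclude the j == i factor
    return x - fx / np.prod(diff, axis=1)

# Toy monic cubic p(x) = x^3 - 6x^2 + 11x - 6 with roots 1, 2, 3 (illustrative).
f  = lambda x: x**3 - 6*x**2 + 11*x - 6
df = lambda x: 3*x**2 - 12*x + 11
x = np.array([1.2, 1.9, 3.3])
for _ in range(6):
    x = sab3_step(x, f, df, alpha=0.1, beta=0.1)
print(np.sort(x))
```

Because each component's update reads only the previous iterate and the predicted points, all $n$ components can be updated concurrently, which is what makes the scheme parallel-friendly.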

The parameters $\alpha$ and $\beta$ determine the modification to the classical Newton step, tuning both local stability and asymptotic convergence (Shams et al., 20 Jan 2026). Beyond the usual smoothness assumptions and the requirement that the initial guesses lie sufficiently close to the actual roots, no restrictions on the parameter values are imposed.

2. Convergence Theory

The SAB[3] algorithm achieves genuine third-order convergence under standard smoothness conditions. The main theorem states:

  • If $f\in C^3$, all $\zeta_i$ are simple roots, and the initial guesses $\mathbf x^{[0]}$ are sufficiently close to $\boldsymbol\zeta$, then

$$\varepsilon_i^{[h+1]} = O\!\left( \|\boldsymbol\varepsilon^{[h]}\|^3 \right)$$

where $\boldsymbol\varepsilon^{[h]} = \mathbf x^{[h]} - \boldsymbol\zeta$ and $i=1,\dots,n$. Thus, for a constant $C$,

$$\|\boldsymbol\varepsilon^{[h+1]}\| \le C\,\|\boldsymbol\varepsilon^{[h]}\|^3$$

Locally, the predictor error satisfies $z_i^{[h]}-\zeta_i=O(\varepsilon_i^2)$, and the corrector denominator is well behaved because the roots are distinct. Empirical tests confirm robust third-order convergence in practice for suitable $(\alpha, \beta)$ (Shams et al., 20 Jan 2026).
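The third-order rate can be probed numerically by tracking the error norm $e_h = \|\mathbf x^{[h]} - \boldsymbol\zeta\|$ and the ratios $\log e_{h+1} / \log e_h$, which approach the convergence order in the asymptotic regime (until machine precision intervenes). The sketch below re-implements the sweep compactly on an illustrative cubic; the test problem and parameter values are assumptions, not the paper's experiments.

```python
import numpy as np

# Empirical order check on p(x) = x^3 - 6x^2 + 11x - 6 (roots 1, 2, 3).
f  = lambda x: x**3 - 6*x**2 + 11*x - 6
df = lambda x: 3*x**2 - 12*x + 11
roots = np.array([1.0, 2.0, 3.0])
alpha, beta = 0.1, 0.1

x = np.array([1.1, 1.9, 3.2])
errs = []
for _ in range(4):
    errs.append(np.linalg.norm(x - roots) + 1e-300)  # guard against log(0)
    fx = f(x)
    z = x - fx / df(x) / (1.0 + alpha * fx / (1.0 + beta * fx))
    diff = x[:, None] - z[None, :]
    np.fill_diagonal(diff, 1.0)
    x = x - fx / np.prod(diff, axis=1)

# Ratios log(e_{h+1}) / log(e_h) should exceed 2 once the iterates enter the
# asymptotic regime; the last ratio is eventually limited by machine precision.
orders = [np.log(errs[h + 1]) / np.log(errs[h]) for h in range(len(errs) - 1)]
print(errs, orders)
```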

3. Direct Finite-Time Contraction Profiling

The step-log contraction profiling methodology enables efficient, reproducible parameter tuning without reliance on analytical, problem-dependent diagnostics. For each iteration,

  • Step vector and norm:

$$\mathbf s_h = \mathbf x^{[h+1]} - \mathbf x^{[h]}, \qquad s_h = \|\mathbf s_h\|_2$$

  • Step-log ratio:

$$g(h) = \log\!\left(\frac{s_{h+1}+\varepsilon}{s_h+\varepsilon}\right), \quad \varepsilon>0$$

When $g(h)<0$, the scheme is transiently contracting. Over a fixed window $W$, the contraction profile is

$$\lambda_W(t) = \frac{1}{W} \sum_{h=t-W+1}^{t} g(h), \quad t = W,\dots,K-1$$

where $K$ is the number of recorded iterations in the run.

Aggregating these profiles over $N_{\rm ens}$ randomized micro-launch ensembles yields

$$\bar\lambda_W(t) = \frac{1}{N_{\rm ens}} \sum_{r=1}^{N_{\rm ens}} \lambda_W^{(r)}(t)$$

This approach enables scalable assessment of contraction/expansion behavior and is independent of specific root locations or system structure.
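Since the profile only consumes the logged step norms $s_h$, it can be computed in a few vectorized lines. The sketch below uses synthetic geometrically contracting runs in place of real SAB[3] launches; the function name and the synthetic data are assumptions for illustration.

```python
import numpy as np

def contraction_profile(step_norms, W=10, eps=1e-16):
    """Windowed step-log contraction profile lambda_W(t) from a run's step norms."""
    s = np.asarray(step_norms, dtype=float)
    g = np.log((s[1:] + eps) / (s[:-1] + eps))   # g(h) < 0 => transient contraction
    # Trailing moving average of g over a window of length W.
    return np.convolve(g, np.ones(W) / W, mode="valid")

# Ensemble average over randomized micro-launches (synthetic contracting runs here).
rng = np.random.default_rng(0)
profiles = []
for _ in range(30):
    # A contracting iteration shrinks its step norms roughly geometrically.
    s = 0.5 ** np.arange(40) * rng.uniform(0.5, 2.0)
    profiles.append(contraction_profile(s))
lam_bar = np.mean(profiles, axis=0)
print(lam_bar[:3])  # negative values: the ensemble contracts on average
```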

4. Profile-Based Stability Metrics and Tuning Framework

Two scalar profile metrics are extracted for ranking $(\alpha, \beta)$ candidates:

  • Stability Minimum ($S_{\min}$):

$$S_{\min} = \max\{0,\; -y_{\min}\cdot t_{\min}\}$$

where $y_{\min}$ and $t_{\min}$ denote the minimum of $\bar\lambda_W(t)$ and the time at which it occurs.

  • Stability Moment ($S_{\mathrm{mom}}$):

$$M_0 = \sum_t \bigl(-\bar\lambda_W(t)\bigr)_+,\qquad \bar t = \frac{\sum_t t\,\bigl(-\bar\lambda_W(t)\bigr)_+}{M_0},\qquad S_{\mathrm{mom}} = \frac{M_0}{\bar t}$$

Large $S_{\mathrm{mom}}$ values correspond to strong, early contraction and are empirically predictive of global robustness.
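Both metrics follow directly from an averaged profile array. The sketch below is a minimal reading of the definitions above; the helper name and the two synthetic profiles (one contracting early, one late) are illustrative assumptions.

```python
import numpy as np

def stability_metrics(lam_bar):
    """S_min and S_mom computed from an averaged contraction profile lam_bar(t)."""
    lam_bar = np.asarray(lam_bar, dtype=float)
    t = np.arange(len(lam_bar))
    y_min, t_min = lam_bar.min(), t[lam_bar.argmin()]
    s_min = max(0.0, -y_min * t_min)
    neg = np.maximum(-lam_bar, 0.0)          # the positive part (-lam_bar(t))_+
    m0 = neg.sum()
    if m0 == 0.0:
        return s_min, 0.0                    # no contraction mass at all
    t_bar = (t * neg).sum() / m0
    s_mom = m0 / t_bar if t_bar > 0 else 0.0
    return s_min, s_mom

# Early, deep contraction (profile A) scores a higher S_mom than the same
# contraction mass arriving late (profile B).
tt = np.arange(30, dtype=float)
prof_a = -np.exp(-tt / 5.0)          # contracts hard at the start
prof_b = -np.exp(-(29 - tt) / 5.0)   # contracts only near the end
print(stability_metrics(prof_a), stability_metrics(prof_b))
```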

The recommended training-free parameter selection workflow is:

  1. Scan a uniform grid over $(\alpha,\beta)$.
  2. Launch $N_{\rm ens}$ randomized trials per grid point.
  3. Evaluate $S_{\mathrm{mom}}$ and $S_{\min}$.
  4. Select the optimal $(\alpha^*,\beta^*)$ by maximization (Shams et al., 20 Jan 2026).

This framework is “embarrassingly parallel” and requires no analytic knowledge of the underlying nonlinear equation.
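The four-step workflow can be sketched end to end. Here `run_profile` is a hypothetical stand-in for launching SAB[3] from a random start and returning its windowed contraction profile; its peaked response surface, the grid bounds, and the ensemble size are all assumptions made so the sketch runs self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_profile(alpha, beta, rng):
    # Hypothetical stand-in for one micro-launch: a profile whose contraction
    # depth peaks at (0.2, 0.3), plus small launch-to-launch noise.
    depth = np.exp(-20 * ((alpha - 0.2) ** 2 + (beta - 0.3) ** 2))
    return -depth * np.exp(-np.arange(30) / 5.0) + 0.005 * rng.standard_normal(30)

def s_mom(lam_bar):
    neg = np.maximum(-np.asarray(lam_bar), 0.0)
    m0 = neg.sum()
    t_bar = (np.arange(len(neg)) * neg).sum() / m0 if m0 > 0 else np.inf
    return m0 / t_bar if t_bar > 0 else 0.0

best, best_score = None, -np.inf
for alpha in np.linspace(0.0, 0.5, 11):                # step 1: parameter grid
    for beta in np.linspace(0.0, 0.5, 11):
        profiles = [run_profile(alpha, beta, rng) for _ in range(20)]  # step 2
        score = s_mom(np.mean(profiles, axis=0))       # step 3: profile metric
        if score > best_score:                         # step 4: maximization
            best, best_score = (alpha, beta), score
print(best)
```

Both loops are independent across grid points and launches, which is what makes the scan "embarrassingly parallel".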

5. Comparative Numerical Performance

Extensive experiments confirm the practical gains of the SAB[3] scheme and step-log tuning. For three classes of nonlinear test problems (high-degree polynomials, enzyme kinetics, and transcendental polynomial-exponential equations), key observations include:

  • Iteration counts to tight tolerances ($\|x^{[k+1]}-x^{[k]}\| < 10^{-30}$) drop from $\sim 100$ (classical) to $\sim 13$ (optimized SAB[3]).
  • Corresponding CPU time decreases by factors of $20$–$30$, even accounting for the initial profiling overhead.
  • Convergence success rates across all roots improve from partial/divergent to $100\%$.
  • The empirical convergence order stabilizes at $3$, whereas suboptimal parameters display irregular or order-$2$ behavior (Shams et al., 20 Jan 2026).

Heatmaps of $S_{\min}$ and $S_{\mathrm{mom}}$ reveal wide basins of high performance, aiding robust deployment across diverse nonlinear systems.

6. Parallelization and Multicore Deployment

Key practical guidelines for multicore implementation:

  • Window $W\approx 10$ and stabilization $\varepsilon\approx 10^{-16}$ are typical choices.
  • Ensemble sizes $N_{\rm ens}=30$–$50$ balance cost and statistical robustness.
  • A grid-scan resolution of $50\times 50$ over $(\alpha,\beta)$ reliably identifies large stable regions.
  • All outer loops (parameter grid and micro-launches) admit parallelization across CPUs or GPUs.
  • After optimal parameter selection, the SAB[3] algorithm can be deployed for production runs without further tuning.
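Because grid points are scored independently, the outer loop maps directly onto a worker pool. The sketch below distributes a $50\times 50$ scan over threads; `score_point` is a hypothetical stand-in for running the micro-launch ensemble and computing $S_{\mathrm{mom}}$ at one grid point, and its peaked shape is an assumption for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def score_point(ab):
    """Hypothetical per-grid-point score (in practice: ensemble S_mom at (alpha, beta))."""
    alpha, beta = ab
    return np.exp(-20 * ((alpha - 0.2) ** 2 + (beta - 0.3) ** 2))

# 50 x 50 grid over (alpha, beta), scored concurrently by a worker pool.
grid = [(a, b) for a in np.linspace(0.0, 0.5, 50) for b in np.linspace(0.0, 0.5, 50)]
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(score_point, grid))
best = grid[int(np.argmax(scores))]
print(best)
```

With real SAB[3] launches the per-point work is CPU-bound, so a process pool (or GPU batching) would replace the thread pool; the structure of the scan is unchanged.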

The method is training-free, reproducible, and empirically insensitive to stochasticity in the problem data (Shams et al., 20 Jan 2026).

7. Context and Relationship to Parallel Weierstrass Samplers

Bi-parametric parallel Weierstrass-type schemes constitute a methodological generalization of parallel root-finding algorithms inspired by statistical Weierstrass samplers for merging independent posterior draws in subset-based parallel MCMC. In the Bayesian context, Weierstrass transforms provide bounded, kernel-smooth approximations to posterior densities, with error quantified in terms of smoothing parameters and the shape of subset posteriors (Wang et al., 2013). The SAB[3] scheme diverges in application—focusing on root-localization rather than distributional approximation—but retains the central principle of parallel correction using fractions and products derived from independently computed local information.

A plausible implication is the broader utility of bi-parametric and kernel-transformed updates in parallel scientific computing, facilitating scalable, stable solution of problems with high-dimensional and heterogeneous constraints.
