
Cascaded PI Controllers Tuning

Updated 7 February 2026
  • Cascaded PI controllers are control architectures that nest inner and outer PI loops to precisely manage variables like speed and position.
  • They use performance metrics such as overshoot, settling time, and ITAE to formulate a scalar cost function for tuning.
  • Data-driven tuning via Bayesian optimization reduces iterations and outperforms classical methods in stability and error reduction.

A cascaded PI controller is a control scheme that serially nests two proportional-integral (PI) loops, typically implemented as an inner loop governing a system variable closer to actuation (e.g., speed) and an outer loop regulating a higher-level target (e.g., position). This architecture is routinely applied in mechanical systems where fast stabilization of an inner variable (such as velocity) supports precise outer-loop objectives (such as position tracking). Efficient automated tuning for these controllers is critical, especially after maintenance or changes in system dynamics, as suboptimal gain selection degrades overall system performance (Khosravi et al., 2020).

1. Cascaded PI Control Architecture

A canonical cascaded PI configuration comprises two primary control loops:

  • Outer Loop: Regulates the primary reference input (e.g., position) via the error $e_2(t) = r_p(t) - y(t)$ using a PI controller $C_2(s) = K_{p2} + K_{i2}/s$, producing a reference for the inner loop.
  • Inner Loop: Receives the set-point from the outer loop (commonly a speed reference), computes $e_1(t) = v_{\mathrm{ref},2}(t) - \dot{y}(t)$, and applies a PI controller $C_1(s) = K_{p1} + K_{i1}/s$ to generate the control input to the plant $G(s)$.

The respective closed-loop transfer functions are:

  • Inner loop: $T_1(s) = \dfrac{C_1(s)\,G(s)}{1 + C_1(s)\,G(s)}$
  • Outer loop: $T_2(s) = \dfrac{C_2(s)\,T_1(s)}{1 + C_2(s)\,T_1(s)}$

Standard implementations may simplify the structure; for example, the experimental realization used a pure-proportional outer loop ($K_{i2} = 0$), but the same principles extend to full PI outer controllers.
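To make the signal flow concrete, a minimal discrete-time sketch is given below. The double-integrator plant ($\ddot{y} = u$), the forward-Euler discretization, and all numeric gains are illustrative assumptions, not the experimental system of the cited work; the outer loop is pure-proportional, matching the $K_{i2} = 0$ simplification.

```python
# Minimal discrete-time sketch of the cascaded structure: the outer
# position loop commands a velocity reference, and an inner PI loop
# drives an assumed double-integrator plant (y_ddot = u).
def simulate(Kp1, Ki1, Kp2, r_pos=1.0, dt=1e-3, T=10.0):
    y = v = 0.0        # plant position and velocity
    i1 = 0.0           # inner-loop integrator state
    trajectory = []
    for _ in range(int(round(T / dt))):
        e2 = r_pos - y            # outer (position) error
        v_ref = Kp2 * e2          # pure-proportional outer loop (Ki2 = 0)
        e1 = v_ref - v            # inner (speed) error
        i1 += e1 * dt
        u = Kp1 * e1 + Ki1 * i1   # inner PI control input
        v += u * dt               # forward-Euler plant update
        y += v * dt
        trajectory.append(y)
    return trajectory

traj = simulate(Kp1=2.0, Ki1=1.0, Kp2=3.0)  # illustrative gains
```

With these illustrative gains the position converges to the reference; badly chosen gains make the same loop oscillate or diverge, which is the behavior a tuning cost function must penalize.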

2. Performance Metrics and Cost Function Formulation

Tuning cascaded PI controllers requires quantitative performance evaluation at each candidate gain vector $\theta = (K_{p1}, K_{i1}, K_{p2}, K_{i2})$. The approach utilizes step-response-based metrics:

  • Absolute maximum overshoot $M_p(\theta)$
  • Settling time $T_s(\theta)$ (to a $\pm 2\%$ band)
  • Infinity-norm of the tracking error $\|e\|_\infty$
  • Integral of time-weighted absolute error $\mathsf{ITAE}(\theta) = \int_0^T t\,|e(t)|\,dt$

A custom scalar cost function $J(\theta)$ is formed as a weighted sum

$$
J(\theta) = \sum_{k=1}^{K} w_k\,\varphi_k(\theta),
$$

with weights $w_k$ reflecting the relative importance of each indicator (see Table 2 in (Khosravi et al., 2020)). The sequential tuning process first minimizes $J_1(K_{p1}, K_{i1})$ for the inner loop, then minimizes $J_2(K_{p2}, K_{i2})$ for the outer loop with the inner gains fixed.
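A minimal sketch of these metrics and the weighted cost, computed from a uniformly sampled step response; the equal weights below are placeholders, not the values of Table 2 in (Khosravi et al., 2020).

```python
# Step-response metrics: overshoot, 2%-band settling time, peak error,
# and ITAE, combined into the weighted scalar cost J(theta).
def step_metrics(t, y, r=1.0):
    overshoot = max(max(y) - r, 0.0)                    # M_p
    outside = [ti for ti, yi in zip(t, y) if abs(yi - r) > 0.02 * abs(r)]
    settling = outside[-1] if outside else t[0]         # T_s (2% band)
    e_inf = max(abs(yi - r) for yi in y)                # ||e||_inf
    dt = t[1] - t[0]                                    # uniform sampling assumed
    itae = sum(ti * abs(yi - r) * dt for ti, yi in zip(t, y))
    return overshoot, settling, e_inf, itae

def cost(t, y, weights=(1.0, 1.0, 1.0, 1.0)):           # placeholder weights
    return sum(w * m for w, m in zip(weights, step_metrics(t, y)))
```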

3. Data-Driven Tuning via Bayesian Optimization

To automate and expedite gain selection, Bayesian optimization (BO) is employed:

  • The cost landscape $J(\theta)$ is modeled as a Gaussian process (GP) surrogate,

$$
J(\cdot) \sim \mathcal{GP}\big(0, k(\cdot, \cdot)\big),
$$

with a squared-exponential kernel and hyperparameters (signal variance $\sigma_f^2$, length-scales $\{\ell_j^2\}$, noise variance $\sigma_n^2$) estimated by maximizing the GP marginal likelihood.

  • The BO procedure iteratively proposes gain settings by minimizing a Lower Confidence Bound (LCB) acquisition function,

$$
\theta_{n+1} = \arg\min_{\theta \in \Theta} \big[\mu_n(\theta) - \beta_n\,\sigma_n(\theta)\big],
$$

where $\mu_n$ and $\sigma_n$ are the GP posterior mean and standard deviation, and $\beta_n$ controls the exploration-exploitation trade-off.

Termination criteria include repeated incumbent minima or a preset maximum number of iterations.
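The GP posterior and LCB proposal can be sketched as follows; the hyperparameters are fixed by hand here rather than fit by marginal-likelihood maximization, and a one-dimensional candidate grid stands in for the full gain space for readability.

```python
import numpy as np

def sq_exp_kernel(a, b, sigma_f=1.0, ell=1.0):
    # squared-exponential kernel k(x, x') over 1-D inputs
    return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(X, J, Xs, sigma_n=1e-3):
    # GP posterior mean / std at candidates Xs given observations (X, J)
    K = sq_exp_kernel(X, X) + sigma_n**2 * np.eye(len(X))
    Ks = sq_exp_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, J)
    var = np.diag(sq_exp_kernel(Xs, Xs)) - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def lcb_next(X, J, Xs, beta=2.0):
    # next candidate: minimizer of mu - beta * sigma
    mu, sigma = gp_posterior(X, J, Xs)
    return Xs[np.argmin(mu - beta * sigma)]
```

Larger `beta` weights the posterior uncertainty more heavily, pushing proposals toward unexplored regions of the gain space.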

4. Sequential BO Algorithm for Cascaded PI Tuning

The full procedure, summarized for one loop, is as follows:

  1. Define the feasible domain $\Theta$ for $(K_p, K_i)$, the cost function $J(\cdot)$, the maximum number of iterations $N_\mathrm{max}$, and the initial sample size $N_0$.
  2. Sample $N_0$ initial gain vectors, obtain the corresponding $J$ values from experiments, and store them in a dataset $D$.
  3. Fit the GP surrogate to $D$.
  4. Iterate:
    • Compute the GP posterior $(\mu, \sigma)$.
    • Select the next $\theta$ via LCB minimization.
    • Evaluate $J(\theta)$ and augment $D$.
    • Update the GP.
    • Terminate on stabilization or on reaching $N_\mathrm{max}$.
  5. Return the minimizer $\theta^*$ of $J$ recorded in $D$.

In cascaded systems, this process first optimizes the inner-loop gains $(K_{p1}, K_{i1})$, then the outer-loop gains $(K_{p2}, K_{i2})$.
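The two-stage procedure above can be sketched as a small skeleton. Here `evaluate` stands in for running an experiment and computing $J$, and uniform random proposals are used purely as a placeholder for the GP/LCB acquisition step; all names and defaults are illustrative.

```python
import random

# Skeleton of the per-loop tuning procedure (steps 1-5 above).
def tune_loop(evaluate, domain, n_init=5, n_max=20, seed=0):
    rng = random.Random(seed)
    sample = lambda: tuple(rng.uniform(lo, hi) for lo, hi in domain)
    data = [(th, evaluate(th)) for th in (sample() for _ in range(n_init))]
    while len(data) < n_max:
        th = sample()                    # placeholder for the LCB proposal
        data.append((th, evaluate(th)))
    return min(data, key=lambda d: d[1])[0]   # incumbent minimizer theta*

# Two-stage cascade: tune the inner (speed) loop first, then the outer
# loop with the inner gains held fixed.
def tune_cascade(J_inner, J_outer, dom_inner, dom_outer):
    kp1, ki1 = tune_loop(J_inner, dom_inner)
    kp2, ki2 = tune_loop(lambda th: J_outer(kp1, ki1, *th), dom_outer)
    return kp1, ki1, kp2, ki2
```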

5. Empirical Comparison of Tuning Methods

Evaluations on a linear axis drive compared BO-based tuning to classical methods such as Ziegler–Nichols, relay autotuning, ITAE-optimal tuning, and exhaustive grid search.

| Method | $K_{p1}$ (speed) | $K_{i1}$ (speed) | $K_{p2}$ (position) |
| --- | --- | --- | --- |
| Ziegler–Nichols | 0.18 | 510 | 392 |
| ITAE tuning | 0.11 | 420 | 255 |
| Relay tuning | 0.05 | 130 | 115 |
| Exhaustive grid | 0.36 | 130 | 225 |
| Sequential BO | 0.37 | 130 | 225 |

Bayesian optimization required 20–30 iterations for the inner loop and 3–6 iterations for the outer loop. BO tuning reduced speed-loop overshoot from approximately 12% (Ziegler–Nichols) to less than 1%, shortened settling time by about 30%, limited position-loop overshoot to under 2% (compared to more than 10% for classical rules), and lowered steady-state error by approximately 50%. Step responses (see Figure A in (Khosravi et al., 2020)) show that BO yields the fastest, least-oscillatory trajectories.

6. Influence of Initial Experimental Design

The number of initial random samples $N_0$ strongly affects BO convergence. Its effect is tabulated below:

| Loop | $N_0$ (train) | BO iterations | $\theta^*$ |
| --- | --- | --- | --- |
| Speed | 50 | 19 | (0.37, 130) |
| Speed | 30 | 27 | (0.345, 130) |
| Speed | 20 | 44 | (0.36, 110) |
| Position | 15 | 3 | 225 |
| Position | 10 | 6 | 240 |
| Position | 7 | 5 | 210 |

A larger initial design provides a better prior, reducing the number of subsequent BO iterations, but entails higher up-front experimental cost. In practice, $N_0 \approx 30$ provides balanced performance, leading to convergence within a few dozen experiments overall.

7. Guidelines and Practical Considerations

  • Select the gain search space $\Theta$ to exclude grossly unstable combinations via basic loop-shaping or grid-testing.
  • Choose $N_0 \approx 20$–$40$ samples to capture the global cost structure.
  • Use a slowly increasing $\beta_n$, encouraging exploration initially and exploitation later in the BO cycles.
  • Terminate optimization when the same lowest-$J$ gain vector is repeated three times or upon reaching the maximum number of iterations.
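The stopping rule in the last guideline can be written as a small check; `incumbents` is a hypothetical list of the best-so-far gain vector recorded after each BO iteration.

```python
# Hypothetical stopping check: stop once the incumbent best gain vector
# repeats three times, or once the iteration budget is exhausted.
def should_stop(incumbents, n_iter, n_max, repeats=3):
    if n_iter >= n_max:
        return True
    tail = incumbents[-repeats:]
    return len(tail) == repeats and len(set(tail)) == 1
```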

This data-driven, sequential Bayesian optimization procedure yields fully automated, data-efficient tuning of cascaded PI controllers that surpasses classical tuning in both speed and accuracy, requiring only a limited number of experimental trials (Khosravi et al., 2020).
