Cascaded PI Controllers Tuning
- Cascaded PI controllers are control architectures that nest inner and outer PI loops to precisely manage variables like speed and position.
- They use performance metrics such as overshoot, settling time, and ITAE to formulate a scalar cost function for tuning.
- Data-driven tuning via Bayesian optimization reduces iterations and outperforms classical methods in stability and error reduction.
A cascaded PI controller is a control scheme that serially nests two proportional-integral (PI) loops, typically implemented as an inner loop governing a system variable closer to actuation (e.g., speed) and an outer loop regulating a higher-level target (e.g., position). This architecture is routinely applied in mechanical systems where fast stabilization of an inner variable (such as velocity) supports precise outer-loop objectives (such as position tracking). Efficient automated tuning for these controllers is critical, especially after maintenance or changes in system dynamics, as suboptimal gain selection degrades overall system performance (Khosravi et al., 2020).
1. Cascaded PI Control Architecture
A canonical cascaded PI configuration comprises two primary control loops:
- Outer Loop: Regulates the primary reference input (e.g., position) by acting on the tracking error $e_{\mathrm{out}} = r - y$ with a PI controller $C_{\mathrm{out}}(s)$, producing a set-point for the inner loop.
- Inner Loop: Receives the set-point from the outer loop (commonly a speed reference $v_{\mathrm{ref}}$), computes the error $e_{\mathrm{in}} = v_{\mathrm{ref}} - v$, and applies a PI controller $C_{\mathrm{in}}(s)$ to generate the control input $u$ to the plant $G(s)$.
The respective closed-loop transfer functions take the standard feedback form:
- Inner loop: $T_{\mathrm{in}}(s) = \dfrac{C_{\mathrm{in}}(s)\,G(s)}{1 + C_{\mathrm{in}}(s)\,G(s)}$
- Outer loop: $T_{\mathrm{out}}(s) = \dfrac{C_{\mathrm{out}}(s)\,T_{\mathrm{in}}(s)\,G_{p}(s)}{1 + C_{\mathrm{out}}(s)\,T_{\mathrm{in}}(s)\,G_{p}(s)}$, where $G_{p}(s)$ maps the inner-loop output to the outer-loop variable (an integrator, $G_{p}(s) = 1/s$, when position is the integral of speed).
Standard implementations may simplify the structure; for example, the experimental realization used a pure-proportional outer loop (integral gain set to zero), but the same principles extend to full PI outer controllers.
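The architecture can be illustrated with a discrete-time simulation. The plant model below (first-order velocity dynamics with an assumed time constant `tau` and unit gain) and all gain values are illustrative stand-ins, not the experimental system from the paper:

```python
import numpy as np

def simulate_cascade(kp_out, kp_in, ki_in, t_end=2.0, dt=1e-3):
    """Toy cascaded loop: P position controller around a PI speed controller.

    Plant: first-order velocity dynamics (assumed tau, unit gain); position
    integrates velocity. All parameters are illustrative.
    """
    tau = 0.05                      # assumed velocity time constant [s]
    ref = 1.0                       # unit position step reference
    pos, vel, integ = 0.0, 0.0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        v_ref = kp_out * (ref - pos)       # outer loop: position error -> speed set-point
        e_v = v_ref - vel                  # inner-loop tracking error
        integ += e_v * dt                  # PI integral state
        u = kp_in * e_v + ki_in * integ    # inner PI control input
        vel += dt * (u - vel) / tau        # first-order velocity dynamics
        pos += dt * vel                    # position integrates speed
        trace.append(pos)
    return np.array(trace)

y = simulate_cascade(kp_out=5.0, kp_in=0.4, ki_in=50.0)
```

With these (hypothetical) gains the simulated position converges to the unit reference, showing how a fast, well-damped inner speed loop supports the slower outer position loop.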
2. Performance Metrics and Cost Function Formulation
Tuning cascaded PI controllers requires quantitative performance evaluation at each candidate gain vector $\theta$ (the stacked proportional and integral gains). The approach utilizes step-response-based metrics:
- Absolute maximum overshoot $O$
- Settling time $T_s$ (time to enter and remain within a tolerance band around the reference)
- Infinity-norm of the tracking error, $\|e\|_\infty$
- Integral of time-weighted absolute error, $\mathrm{ITAE} = \int_0^T t\,|e(t)|\,dt$
A custom scalar cost function is formed as a weighted sum,

$$J(\theta) = w_1\,O + w_2\,T_s + w_3\,\|e\|_\infty + w_4\,\mathrm{ITAE},$$

with weights $w_i$ reflecting the relative importance of each indicator (see Table 2 in (Khosravi et al., 2020)). The sequential tuning process first minimizes $J$ for the inner loop, then minimizes $J$ for the outer loop with the inner gains fixed.
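The four indicators and the weighted cost can be computed directly from a recorded step response. The tolerance band and the unit weights below are placeholder choices, not the values of Table 2:

```python
import numpy as np

def step_metrics(t, y, ref=1.0, band=0.02):
    """Overshoot, settling time, error infinity-norm, and ITAE for one step test.

    `band` (2% here) is an illustrative tolerance, not the paper's setting.
    """
    err = ref - y
    overshoot = max(float(y.max()) - ref, 0.0)
    inside = np.abs(err) <= band * abs(ref)
    # Settling time: first instant after which the response stays inside the band.
    t_settle = t[0]
    for i in range(len(t) - 1, -1, -1):
        if not inside[i]:
            t_settle = t[min(i + 1, len(t) - 1)]
            break
    e_inf = float(np.abs(err).max())
    dt = t[1] - t[0]
    itae = float(np.sum(t * np.abs(err)) * dt)   # rectangle-rule approximation
    return overshoot, t_settle, e_inf, itae

def cost(t, y, weights=(1.0, 1.0, 1.0, 1.0)):
    """Scalar tuning cost J as a weighted sum of the four indicators."""
    return float(np.dot(weights, step_metrics(t, y)))
```

A perfect step response yields all-zero indicators and zero cost; any overshoot, slow settling, or residual error increases $J$ according to the chosen weights.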
3. Data-Driven Tuning via Bayesian Optimization
To automate and expedite gain selection, Bayesian optimization (BO) is employed:
- The cost landscape is modeled by a Gaussian process (GP) surrogate, $J(\theta) \sim \mathcal{GP}\big(\mu(\theta), k(\theta, \theta')\big)$, with a squared-exponential kernel and hyperparameters (signal variance $\sigma_f^2$, length-scales $\ell_i$, noise variance $\sigma_n^2$) estimated by maximizing the GP marginal likelihood.
- The BO procedure iteratively proposes gain settings by minimizing a Lower Confidence Bound (LCB) acquisition function, $\alpha_{\mathrm{LCB}}(\theta) = \mu(\theta) - \beta\,\sigma(\theta)$, with $\mu(\theta)$ and $\sigma(\theta)$ being the GP posterior mean and standard deviation, and $\beta > 0$ controlling the exploration–exploitation trade-off.
Termination criteria include repeated incumbent minima or a preset maximum number of iterations.
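A minimal GP surrogate with an LCB proposal rule can be sketched in plain NumPy. The kernel hyperparameters are fixed here for brevity, whereas the method described above estimates them by maximizing the marginal likelihood:

```python
import numpy as np

def se_kernel(A, B, sig2=1.0, ell=0.3):
    """Squared-exponential kernel with fixed (illustrative) hyperparameters."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and standard deviation at test points Xs."""
    K = se_kernel(X, X) + noise * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(se_kernel(Xs, Xs)) - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def lcb_next(X, y, candidates, beta=2.0):
    """Propose the candidate gain vector minimizing mu - beta * sigma."""
    mu, sd = gp_posterior(X, y, candidates)
    return candidates[np.argmin(mu - beta * sd)]
```

With a small noise term the posterior mean interpolates the observed costs, and `lcb_next` favors candidates that are either predicted to be good (low `mu`) or still uncertain (high `sd`).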
4. Sequential BO Algorithm for Cascaded PI Tuning
The full procedure, summarized for one loop, is as follows:
- Define the feasible gain domain $\Theta$, the cost function $J$, the maximal number of iterations $N_{\max}$, and the initial sample size $n_0$.
- Sample $n_0$ initial gain vectors, obtain the corresponding costs $J(\theta)$ from experiments, and store them in the dataset $\mathcal{D}$.
- Fit the GP surrogate to $\mathcal{D}$.
- Iterate:
- Compute the GP posterior $\mu(\theta)$ and $\sigma(\theta)$.
- Select the next candidate $\theta$ by minimizing the LCB acquisition.
- Evaluate $J(\theta)$ experimentally and augment $\mathcal{D}$.
- Update the GP.
- Terminate upon stabilization of the incumbent minimum or after $N_{\max}$ iterations.
- Return the minimizer of $J$ recorded in $\mathcal{D}$.
In cascaded systems, this process first optimizes the inner-loop gains, then the outer-loop gains with the inner loop fixed.
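Putting the steps together, the sequential procedure can be sketched end to end. The cost functions below are analytic stand-ins for experimental step-response evaluations, and the compact GP uses fixed hyperparameters; domains, sample sizes, and targets are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_fit_predict(X, y, Xs, ell=0.4, noise=1e-4):
    """Minimal SE-kernel GP posterior (fixed hyperparameters for brevity)."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ell ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks)), 1e-12, None)
    return mu, np.sqrt(var)

def bo_minimize(J, bounds, n0=8, n_iter=20, beta=2.0):
    """LCB-driven BO over a box domain, using a random candidate pool."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + (hi - lo) * rng.random((n0, dim))           # initial random design
    y = np.array([J(x) for x in X])
    for _ in range(n_iter):
        cand = lo + (hi - lo) * rng.random((256, dim))   # candidate gain settings
        mu, sd = gp_fit_predict(X, y, cand)
        x_next = cand[np.argmin(mu - beta * sd)]         # LCB acquisition
        X = np.vstack([X, x_next])
        y = np.append(y, J(x_next))
    return X[np.argmin(y)], float(y.min())

# Sequential tuning: inner-loop gains first, then outer loop with inner fixed.
# Both costs are hypothetical surrogates for real experiments.
J_inner = lambda th: (th[0] - 0.37) ** 2 + (th[1] - 0.5) ** 2
theta_in, _ = bo_minimize(J_inner, [(0.0, 1.0), (0.0, 1.0)])
J_outer = lambda th: abs(th[0] - 0.6)       # evaluated with theta_in held fixed
theta_out, _ = bo_minimize(J_outer, [(0.0, 1.0)], n0=5, n_iter=10)
```

Each `bo_minimize` call mirrors one pass of the algorithm above: initial design, GP fit, LCB proposals, and return of the best gains recorded in the dataset.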
5. Empirical Comparison of Tuning Methods
Evaluations on a linear axis drive compared BO-based tuning to classical methods such as Ziegler–Nichols, relay autotuning, ITAE-optimal tuning, and exhaustive grid search.
| Method | $K_p$ (speed) | $K_i$ (speed) | $K_p$ (position) |
|---|---|---|---|
| Ziegler–Nichols | 0.18 | 510 | 392 |
| ITAE tuning | 0.11 | 420 | 255 |
| Relay tuning | 0.05 | 130 | 115 |
| Exhaustive grid | 0.36 | 130 | 225 |
| Sequential BO | 0.37 | 130 | 225 |
Bayesian optimization required 20–30 iterations for the inner loop and 3–6 iterations for the outer loop. BO tuning reduced speed-loop overshoot from approximately 12% (Ziegler–Nichols) to less than 1%, shortened settling time by about 30%, limited position-loop overshoot to under 2% (compared to 10% for classical rules), and lowered steady-state error by approximately 50%. Step responses (see Figure A in (Khosravi et al., 2020)) show that BO yields the fastest, least-oscillatory trajectories.
6. Influence of Initial Experimental Design
The number of initial random samples $n_0$ strongly affects BO convergence, as the following tabulation shows:
| Loop | $n_0$ (train) | BO iterations | Best gains found |
|---|---|---|---|
| Speed | 50 | 19 | (0.37, 130) |
| Speed | 30 | 27 | (0.345, 130) |
| Speed | 20 | 44 | (0.36, 110) |
| Position | 15 | 3 | 225 |
| Position | 10 | 6 | 240 |
| Position | 7 | 5 | 210 |
A larger initial design provides a better prior, reducing the number of subsequent BO iterations, but entails higher up-front experimental cost. In practice, an intermediate $n_0$ provides balanced performance, leading to convergence within a few dozen experiments overall.
7. Guidelines and Practical Considerations
- Select the gain search space $\Theta$ to exclude grossly unstable combinations via basic loop-shaping or grid testing.
- Choose an initial design of up to $40$ samples to capture the global cost structure.
- Use a slowly increasing exploration weight $\beta$, sustaining exploration early while the shrinking posterior uncertainty shifts later BO cycles toward exploitation.
- Terminate optimization when the same lowest-cost gain vector recurs three times or upon reaching the maximum number of iterations.
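The stopping rule in the last guideline can be expressed as a small helper; the `patience` count and the comparison tolerance are illustrative choices:

```python
import numpy as np

def should_stop(incumbents, it, max_iter, patience=3, tol=1e-9):
    """Stop when the incumbent (best-so-far) gain vector has repeated
    `patience` times in a row, or when the iteration budget is exhausted."""
    if it >= max_iter:
        return True
    if len(incumbents) < patience:
        return False
    last = np.asarray(incumbents[-1])
    return all(np.allclose(np.asarray(g), last, atol=tol)
               for g in incumbents[-patience:])
```

The BO loop appends the current best gain vector to `incumbents` after every iteration and calls `should_stop` before proposing the next experiment.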
This data-driven, sequential Bayesian optimization procedure yields fully automated, data-efficient tuning of cascaded PI controllers that surpasses classical tuning in both speed and accuracy, requiring only a limited number of experimental trials (Khosravi et al., 2020).