
Feedback Linearization Guidance for Interception

Updated 4 February 2026
  • The paper presents a novel feedback linearization guidance law that transforms nonlinear engagement dynamics into a linearized system to guarantee interception.
  • It employs both range-based and LOS-rate-based input-output feedback strategies along with fuzzy blending to mitigate singularities and actuator limits.
  • Monte Carlo simulations validate the approach by demonstrating lower miss distances and failure rates compared to conventional proportional guidance.

A feedback linearization-based guidance law for guaranteed interception refers to a class of nonlinear control algorithms for pursuer–evader engagements, structured to formally guarantee interception regardless of adversarial target maneuvers. These approaches employ input–output feedback linearization (IOL) to transform the nonlinear, coupled vehicle–target dynamics into a linearized form with respect to a measured guidance output (such as range or line-of-sight (LOS) rate), allowing the closed-loop performance to be shaped using linear control design. The paradigm is robustified by blending with conventional proportional guidance and deploying corrections to handle singularities and divergence phenomena in specific engagement geometries. Systematic Monte Carlo studies validate such laws for practical interception under actuator limitations (Dorsey et al., 9 Sep 2025).

1. Pursuer–Evader Engagement Modeling

The standard engagement scenario is modeled via planar point-mass dynamics for both vehicles. For pursuer (“p”) and evader (“e”), the longitudinal and lateral motion equations are

$$
\dot V_i = \frac{T_i - D_i}{m_i} - g\sin\gamma_i, \qquad \dot\gamma_i = -\frac{1}{V_i}\big(n_{z,i} + g\cos\gamma_i\big), \quad i\in\{p,e\},
$$

where $T_i$ and $D_i$ represent thrust and drag, $m_i$ mass, $g$ gravitational acceleration, $\gamma_i$ flight-path angle, and $n_{z,i}$ the normal acceleration input ($u \equiv n_{z,p}$ for the pursuer).

Relative geometry is encoded via the range $R$ and LOS angle $\psi$ with respect to a global engagement frame,

$$
\dot R = V_p\cos(\psi-\gamma_p) - V_e\cos(\psi-\gamma_e), \qquad \dot\psi = \frac{1}{R}\big[V_p\sin(\psi-\gamma_p) - V_e\sin(\psi-\gamma_e)\big].
$$

This nonlinear plant is denoted compactly as $\dot x = f(x,w) + g(x)u$, with $x = [R,\ \psi,\ V_p,\ \gamma_p]^\top$ and $w = [V_e,\ \gamma_e]^\top$.
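As a minimal sketch of the planar engagement model above, the following function evaluates the pursuer state derivatives; the thrust-minus-drag and mass defaults are illustrative placeholders, not values from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def engagement_dynamics(R, psi, Vp, gamma_p, Ve, gamma_e, nz_p,
                        thrust_minus_drag_p=0.0, m_p=1.0):
    """Planar point-mass dynamics for the state x = [R, psi, Vp, gamma_p],
    with the evader quantities w = (Ve, gamma_e) treated as exogenous inputs."""
    Rdot = Vp * math.cos(psi - gamma_p) - Ve * math.cos(psi - gamma_e)
    psidot = (Vp * math.sin(psi - gamma_p) - Ve * math.sin(psi - gamma_e)) / R
    Vpdot = thrust_minus_drag_p / m_p - G * math.sin(gamma_p)
    # u = nz_p is the pursuer's normal-acceleration guidance command
    gammapdot = -(nz_p + G * math.cos(gamma_p)) / Vp
    return Rdot, psidot, Vpdot, gammapdot
```

Integrating these derivatives (e.g., with a fixed-step Runge–Kutta scheme) yields the engagement trajectories used by the guidance laws below.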

2. Input–Output Feedback Linearization and Guidance Law Derivation

Two principal IOL-based guidance laws are formulated depending on output selection:

(a) Range-based IOL Law

Selecting the plant output as $y = h(x) = R$, the system exhibits relative degree two with respect to $u$. Coordinates $\xi_1 = R$, $\xi_2 = \dot R$ are introduced. The second Lie derivative is computed as

$$
\alpha(x,w) = L_f^2 h(x) = \frac{\big[V_p\sin(\psi-\gamma_p) - V_e\sin(\psi-\gamma_e)\big]^2}{R} + \cos(\psi-\gamma_p)\,\frac{D_p - T_p}{m_p} + g\cos\gamma_p\sin(\psi-\gamma_p),
$$

with

$$
\beta(x) = L_g L_f h(x) = \sin(\psi-\gamma_p).
$$

The resulting IOL command follows

$$
u = \frac{1}{\beta(x)}\big[-\alpha(x,w) + v\big],
$$

where $v$ is chosen (e.g., $v = -k_R R$) so that the closed-loop range dynamics $\ddot R + k_R R = 0$ are Lyapunov-stabilized.
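The range-based law can be sketched as follows, assuming the $\alpha$, $\beta$ expressions above and an illustrative gain $k_R$ (the thrust/drag defaults are placeholders):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def range_iol_command(R, psi, Vp, gamma_p, Ve, gamma_e, k_R=0.5,
                      drag_minus_thrust_p=0.0, m_p=1.0):
    """Range-based IOL command u = [-alpha + v] / beta with v = -k_R * R.
    Singular when sin(psi - gamma_p) ~ 0; see the blending scheme in Section 3."""
    rel = Vp * math.sin(psi - gamma_p) - Ve * math.sin(psi - gamma_e)
    alpha = (rel ** 2) / R \
            + math.cos(psi - gamma_p) * drag_minus_thrust_p / m_p \
            + G * math.cos(gamma_p) * math.sin(psi - gamma_p)
    beta = math.sin(psi - gamma_p)
    v = -k_R * R
    return (-alpha + v) / beta
```

Note how the command grows without bound as the denominator $\sin(\psi-\gamma_p)$ approaches zero, which motivates the fuzzy blending treatment later in the article.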

(b) LOS-Rate-based IOL Law

Taking the output $y = \dot\psi$, the system has relative degree one. Here $\xi = \dot\psi$ and

$$
\alpha(x,w) = L_f h(x), \qquad \beta(x) = L_g h(x) = \frac{\cos(\psi-\gamma_p)}{R}.
$$

The IOL law is

$$
u = \frac{1}{\beta(x)}\big[-\alpha(x,w) + v\big],
$$

with $v = -k_{\dot\psi}\dot\psi$ yielding an exponentially decaying LOS rate ($\dot\psi \to 0$).
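A minimal sketch of the LOS-rate law; since the summary does not expand $\alpha(x,w) = L_f h(x)$ for this output, it is supplied by the caller here:

```python
import math

def los_iol_command(R, psi, gamma_p, psidot, alpha, k_psidot=1.0):
    """LOS-rate IOL command u = [-alpha + v] / beta, with
    beta = cos(psi - gamma_p) / R and v = -k_psidot * psidot."""
    beta = math.cos(psi - gamma_p) / R
    v = -k_psidot * psidot
    return (-alpha + v) / beta
```

Unlike the range-based law, the denominator stays nonzero for finite $R$ away from $\psi-\gamma_p = \pi/2$, but the uncorrected law can still mis-orient the pursuit, as discussed in Section 3.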

3. Handling Singularities and Pathological Behaviors

The range-based law’s denominator $\beta(x) = \sin(\psi-\gamma_p)$ vanishes in tail-chase or head-on geometries ($\psi-\gamma_p = n\pi$), causing unbounded commands. A Takagi–Sugeno fuzzy blending scheme is deployed, smoothly interpolating between the IOL law and classical proportional guidance (PG):
$$
u = \sigma\big(\sin(\psi-\gamma_p)\big)\,u_{\mathrm{IOL}} + \big[1 - \sigma\big(\sin(\psi-\gamma_p)\big)\big]\,u_{\mathrm{PG}},
$$
where $\sigma(s)$ transitions rapidly from near 1 (for $|s| \gg 0.1$) to 0 (for $|s| \leq 0.1$). The PG command is

$$
u_{\mathrm{PG}} = -\lambda\,\frac{V_p}{R}\big[V_p\sin(\psi-\gamma_p) - V_e\sin(\psi-\gamma_e)\big] - g\cos\gamma_p.
$$
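The blending scheme can be sketched as follows; the logistic membership shape and its sharpness are assumptions for illustration, since the paper’s exact Takagi–Sugeno rules are not reproduced here:

```python
import math

def blend_weight(s, threshold=0.1, sharpness=50.0):
    """Assumed smooth membership sigma(s): near 0 for |s| <= threshold,
    approaching 1 for |s| >> threshold."""
    return 1.0 / (1.0 + math.exp(-sharpness * (abs(s) - threshold)))

def blended_command(u_iol, u_pg, psi, gamma_p):
    """u = sigma(sin(psi - gamma_p)) * u_IOL + (1 - sigma) * u_PG."""
    w = blend_weight(math.sin(psi - gamma_p))
    return w * u_iol + (1.0 - w) * u_pg
```

Near singular geometries ($\sin(\psi-\gamma_p) \approx 0$) the weight collapses toward zero, so the bounded PG command dominates and the IOL singularity is never excited.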

For the LOS-based IOL, although the denominator avoids singularity for finite $R$, the guidance action may diverge (“off-axis”) in certain angle regimes and fail to force $R \to 0$. This is resolved by incorporating a sign-correction function
$$
C(\psi, \gamma_p) = \mathrm{sign}\big(\cos(\psi-\gamma_p)\big),
$$
such that

$$
u = C(\psi,\gamma_p)\,\beta(x)^{-1}\big[-\alpha(x,w) + v\big].
$$

This convention recovers correct pursuit sense and enforces closure.
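A sketch of the corrected law, with the same caveat as before that $\alpha = L_f h(x)$ is supplied by the caller because the summary does not expand it:

```python
import math

def corrected_los_command(R, psi, gamma_p, psidot, alpha, k_psidot=1.0):
    """Sign-corrected LOS-rate IOL: u = C * [-alpha + v] / beta, with
    C = sign(cos(psi - gamma_p)), beta = cos(psi - gamma_p) / R,
    and v = -k_psidot * psidot."""
    C = math.copysign(1.0, math.cos(psi - gamma_p))
    beta = math.cos(psi - gamma_p) / R
    return C * (-alpha - k_psidot * psidot) / beta
```

Since $C/\beta = R/|\cos(\psi-\gamma_p)|$, the correction effectively replaces the signed denominator with its magnitude, which is what restores a consistent pursuit sense across angle regimes.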

4. Formal Interception Guarantees

For the closed-loop dynamics $\ddot R + k_R R = 0$ (range-based IOL), the range is guaranteed to reach zero (formally, a single zero crossing), provided $R(0) > 0$ and $\dot R(0) < 0$. Fuzzy blending preserves this guarantee: away from singularities, IOL dominates; near a singularity, PG (which itself ensures closure in missile guidance) prevails. For LOS-based IOL with correction, $\dot\psi \to 0$ and the sign flip secures $dR/dt < 0$ until $R = 0$. Thus, the composite laws ensure $R(t)$ crosses zero in finite time (Dorsey et al., 9 Sep 2025).

5. Monte Carlo Evaluation and Comparative Performance

Large-scale Monte Carlo simulations (10,000 runs per scenario) show that LOS-based IOL with correction attains the lowest average miss distances and failure rates across diverse engagement settings:

| Scenario | LOS-IOL (miss [m], fail %) | Range-IOL (miss [m], fail %) | PG (miss [m], fail %) |
|---|---|---|---|
| Rear-aspect | 0.79, 0.04% | 1.35, 1.52% | 0.81, 0.05% |
| Head-on | 2.75, 0.15% | 9.29, 19.1% | 8.44, 1.34% |
| Head-on, evasive evader (random 10 g pull) | 0.93, 0.0% | 14.6, 42.8% | 1.90, 7.47% |

In all cases, the LOS-IOL corrected law showed excellent robustness under pursuer acceleration limits and initial state variability (Dorsey et al., 9 Sep 2025).
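The aggregate statistics in the table above can be computed from a batch of per-run miss distances as follows; the failure threshold is an assumed lethal-radius criterion, since the paper’s exact failure definition is not reproduced here:

```python
def mc_stats(miss_distances, fail_threshold=3.0):
    """Average miss distance and failure rate over a Monte Carlo batch.
    A miss beyond fail_threshold (assumed lethal radius, not the paper's
    exact criterion) counts as a failed interception."""
    n = len(miss_distances)
    mean_miss = sum(miss_distances) / n
    fail_rate = sum(1 for m in miss_distances if m > fail_threshold) / n
    return mean_miss, fail_rate
```

In a full study, each entry of `miss_distances` would come from one closed-loop simulation with randomized initial geometry and evader maneuvers.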

6. Practical Control Synthesis and Implementation

The synthesized nonlinear controllers are executed as follows. For range-based IOL with blending:
$$
u = \sigma\big(\sin(\psi-\gamma_p)\big)\,\frac{-\alpha(x,w) - k_R R}{\sin(\psi-\gamma_p)} + \big[1-\sigma(\cdot)\big]\left[-\lambda\,\frac{V_p}{R}\big(V_p\sin(\psi-\gamma_p) - V_e\sin(\psi-\gamma_e)\big) - g\cos\gamma_p\right],
$$
and for LOS-corrected IOL:
$$
u = C(\psi,\gamma_p)\,\frac{-\alpha(x,w) - k_{\dot\psi}\dot\psi}{\beta(x)},
$$
with $|u|$ saturated to $u_{\max}$.

Design coefficients ($k_R$, $\lambda$, $k_{\dot\psi}$) are tuned for a balance between convergence speed and saturation avoidance, typically $k_R \in [0.1, 1.0]\,\mathrm{s}^{-2}$, $\lambda \approx 3$, blending threshold $|\sin(\psi-\gamma_p)| \lesssim 0.1$, and $k_{\dot\psi} \in [0.5, 2.0]\,\mathrm{s}^{-1}$.
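Putting the pieces together, a sketch of the blended, saturated range-based command with gains from the typical ranges above (the logistic membership, numerical guard, and actuator limit of 10 g are illustrative assumptions):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sigma(s, threshold=0.1, sharpness=50.0):
    # Assumed logistic membership: ~0 for |s| <= threshold, -> 1 for |s| >> threshold.
    return 1.0 / (1.0 + math.exp(-sharpness * (abs(s) - threshold)))

def blended_range_iol(R, psi, Vp, gamma_p, Ve, gamma_e,
                      k_R=0.5, lam=3.0, u_max=10.0 * G,
                      drag_minus_thrust_p=0.0, m_p=1.0):
    """Blended range-IOL / PG command with actuator saturation."""
    s = math.sin(psi - gamma_p)
    rel = Vp * math.sin(psi - gamma_p) - Ve * math.sin(psi - gamma_e)
    alpha = (rel ** 2) / R \
            + math.cos(psi - gamma_p) * drag_minus_thrust_p / m_p \
            + G * math.cos(gamma_p) * s
    # Numerical guard (not from the paper): sigma(s) ~ 0 here anyway.
    u_iol = (-alpha - k_R * R) / s if abs(s) > 1e-6 else 0.0
    u_pg = -lam * (Vp / R) * rel - G * math.cos(gamma_p)
    w = sigma(s)
    u = w * u_iol + (1.0 - w) * u_pg
    return max(-u_max, min(u_max, u))  # saturate |u| to u_max
```

Monitoring the blending weight `w` at runtime shows whether the handoff to PG occurs before the IOL term saturates the actuator.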

7. Relationship to Broader Guidance Law Literature

The feedback linearization-based approach sits within a larger body of nonlinear pursuit–evasion and missile guidance literature. For instance, feedback strategies for hypersonic pursuit under one-dimensional evader constraints have been developed using linear quadratic differential game (LQDG) formulations, leveraging trajectory linearization and Riccati-based feedback designs to enable tractable onboard guidance for highly nonlinear systems (Lee et al., 2021). In both settings, augmentation with blending or correction schemes emerges as essential for transforming theoretical feedback designs into practical laws with enforceable control limits and formal capture assurances.

8. Design Guidance and Recommendations

  • Range-based IOL is effective except near singular geometries; always blend with PG in these domains.
  • LOS-based IOL, with sign correction, is preferable for head-on or highly off-axis scenarios and demonstrates superior robustness in Monte Carlo evaluation.
  • Gains should be selected to yield fast response but not to induce frequent control saturation; monitor the fuzzy blending variable (σ\sigma) to ensure handoff to PG is timely.
  • Saturate all control commands to reflect actual actuator constraints and system limits.
  • For operational implementation, rigorous simulation across engagement envelopes is mandatory to guarantee formal interception under real-world uncertainties.

References:

  • Dorsey and Goel, "Feedback Linearization-based Guidance Law for Guaranteed Interception" (Dorsey et al., 9 Sep 2025).
  • Ostrowski et al., "Feedback Strategies for Hypersonic Pursuit of a Ground Evader" (Lee et al., 2021).
