
EKF-Inspired State Estimation

Updated 2 February 2026
  • EKF-inspired state estimation techniques extend classical Kalman filters to nonlinear, uncertain, and distributed systems using recursive local linearization and Bayesian updates.
  • These methods integrate geometric interpretations, adaptive tuning, and robust estimation strategies, enhancing convergence and stability across a variety of applications.
  • Empirical studies show that such techniques achieve competitive RMSE at substantially lower computational overhead than alternative filters such as particle or unscented Kalman filters.

Extended Kalman Filter Inspired State Estimation

The extended Kalman filter (EKF) and its conceptual descendants constitute a class of state estimation methodologies predicated on recursive local linearization, explicit mean–covariance propagation, and measurement fusion, generalizing the classic linear Kalman filter to nonlinear, continuous–discrete, and even infinite-dimensional systems. EKF-inspired state estimation encompasses both canonical EKF architectures and informed modifications and extensions—such as distributed, geometric, robust, and learning-aided algorithms—that preserve the structure of Bayesian moment propagation while adapting to nonlinearity, model uncertainty, partial observability, system geometry, or domain-specific constraints. The theoretical, computational, and empirical behavior of these algorithms has been characterized for a wide range of applications, with documented performance on complex simulations and real-world systems (Nielsen et al., 2022, Varley et al., 23 Sep 2025, He et al., 2018, Ollivier, 2019, Im, 2024).

1. Continuous–Discrete EKF: Theoretical Foundations and Algorithm

The canonical EKF for continuous–discrete nonlinear stochastic systems operates on models of the form

dx(t) = f\bigl(t,x(t),u(t),d(t),\theta\bigr)\,dt + \sigma\bigl(t,x(t),u(t),d(t),\theta\bigr)\,d\omega(t),

with discrete-time measurements

y(t_k) = h\bigl(t_k,x(t_k),\theta\bigr) + v(t_k),

where the process noise increment d\omega(t) is a standard Wiener process increment, the measurement noise v(t_k) is Gaussian, and the initial state x_0 is Gaussian-distributed (Nielsen et al., 2022). The EKF executes:

  1. Linearization: Along the most recent state estimate \hat x_k(t), the Jacobians A_k(t) = \partial f/\partial x and C_{k+1} = \partial h/\partial x are evaluated.
  2. Continuous-Time Propagation: The mean and covariance are propagated on [t_k, t_{k+1}] via explicit integration of

\frac{d}{dt}\hat x_k(t) = f\bigl(t, \hat x_k(t)\bigr), \quad \frac{d}{dt}P_k(t) = A_k(t)P_k(t) + P_k(t)A_k(t)^\top + \Sigma_k(t)\Sigma_k(t)^\top,

where \Sigma_k(t) collects the diffusion coefficients.

  3. Discrete-Time Update: Upon measurement arrival, the innovation e_{k+1} = y_{k+1} - h(t_{k+1}, \hat x_{k+1|k}) is formed. The Kalman gain and Joseph-stabilized covariance update are

K_{k+1} = P_{k+1|k}C_{k+1}^\top R_{e,k+1}^{-1}, \quad P_{k+1|k+1} = (I-K_{k+1}C_{k+1})P_{k+1|k}(I-K_{k+1}C_{k+1})^\top + K_{k+1}RK_{k+1}^\top.

This approach, amenable to explicit numerical solvers when the moment-propagation ODEs are non-stiff, underpins the practical success of EKF-like estimators for nonlinear SDEs with discretely sampled measurements (Nielsen et al., 2022).
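The predict–update cycle above can be sketched in a few lines of NumPy. The scalar SDE, step sizes, and noise levels below are illustrative choices for a minimal sketch, not values from the cited benchmark:

```python
import numpy as np

def predict(x, P, f, A_fn, Sigma, t0, t1, dt=1e-3):
    """Propagate mean and covariance of dx = f(t,x) dt + Sigma dω between
    sampling instants by explicit Euler integration of the moment ODEs."""
    t = t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        A = A_fn(t, x)                        # Jacobian ∂f/∂x at the estimate
        P = P + h * (A @ P + P @ A.T + Sigma @ Sigma.T)
        x = x + h * f(t, x)
        t += h
    return x, P

def update_joseph(x, P, y, h_fn, C_fn, R):
    """Discrete measurement update with the Joseph-stabilized covariance."""
    C = C_fn(x)
    S = C @ P @ C.T + R                       # innovation covariance R_e
    K = P @ C.T @ np.linalg.inv(S)
    x = x + K @ (y - h_fn(x))
    I_KC = np.eye(len(x)) - K @ C
    P = I_KC @ P @ I_KC.T + K @ R @ K.T       # preserves symmetry and PSD
    return x, P

# Toy scalar SDE dx = -x^3 dt + 0.1 dω with a direct noisy measurement.
f = lambda t, x: -x**3
A_fn = lambda t, x: np.array([[-3.0 * x[0]**2]])
Sigma = np.array([[0.1]])
h_fn = lambda x: x
C_fn = lambda x: np.array([[1.0]])
R = np.array([[0.05]])

x, P = np.array([1.0]), np.array([[1.0]])
x, P = predict(x, P, f, A_fn, Sigma, 0.0, 0.5)
x, P = update_joseph(x, P, np.array([0.6]), h_fn, C_fn, R)
```

The Joseph form costs a few extra matrix products per update but keeps the posterior covariance symmetric positive semidefinite in floating point, which the plain (I − KC)P form does not guarantee.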

2. Geometric and Information-Theoretic Interpretations

The EKF has a rigorous information-geometric interpretation. Natural gradient descent on trajectory space—with the Fisher information matrix as the metric—recovers the standard EKF equations when the process noise is set to the “pure fading memory” form Q_t \propto P_{t|t-1} (Ollivier, 2019). Parameterizing the entire system trajectory (not just an instantaneous state) allows the update to be recast as

s_t = s_{t|t-1} + \eta_t J_t^{-1} \nabla_s \ln p_t(y_t \mid s),

where J_t is the Fisher information matrix. This geometric connection confers invariance under changes of coordinates (charts) and clarifies the role of process noise as an online learning rate or forgetting factor, unifying the EKF with information-theoretic optimization frameworks (Ollivier, 2019).
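A scalar linear-Gaussian sanity check (with illustrative numbers, not taken from the paper) makes the correspondence concrete: using the posterior Fisher information as the metric and step size η = 1, the natural-gradient step coincides with the Kalman measurement update.

```python
# Scalar check: with posterior Fisher information J = 1/P + c^2/r as metric
# and step size eta = 1, the natural-gradient step on ln p(y|s) reproduces
# the Kalman measurement update. All numbers are illustrative.
c, r = 2.0, 0.5      # observation model y = c*s + v,  v ~ N(0, r)
P, s = 0.8, 1.0      # prior covariance and prior mean s_{t|t-1}
y = 3.0

# Kalman measurement update
K = P * c / (c**2 * P + r)
s_kf = s + K * (y - c * s)

# Natural-gradient step: grad_s ln p(y|s) = c * (y - c*s) / r
J = 1.0 / P + c**2 / r
s_ng = s + (1.0 / J) * c * (y - c * s) / r

assert abs(s_kf - s_ng) < 1e-12
```

Algebraically, J⁻¹ · c/r = Pc/(r + c²P), which is exactly the Kalman gain K, so the two updates agree term by term.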

3. Generalizations: Infinite-Dimensional and Distributed EKF

Infinite-Dimensional Measurements

For systems with finite-dimensional states and infinite-dimensional (field-valued) measurements, as arise in vision-based localization or PDE observer problems, the EKF's moment updates extend to operator equations in Hilbert space (Varley et al., 23 Sep 2025). The measurement Jacobian H_k(\xi) at spatial location \xi corresponds to the image gradient, yielding pointwise state–measurement gains and updates involving integrals over the measurement domain. Empirical results confirm that this formulation tolerates dense, high-dimensional measurement spaces and realizes substantial accuracy improvements over feature-based pipelines in vision state estimation (Varley et al., 23 Sep 2025).
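A discretized sketch illustrates why dense measurements are tractable (the grid, dimensions, and noise model below are assumptions for illustration, not the paper's operator formulation): with the information-form update, only an n×n matrix is inverted regardless of how many sample points M the measurement field has.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 3, 10_000                  # small state, dense discretized measurement

H = rng.standard_normal((M, n))   # stacked pointwise Jacobians H_k(xi_i)
r = 0.1                           # i.i.d. pointwise noise variance
P = np.diag([1.0, 0.5, 0.2])      # prior covariance
x = np.zeros(n)                   # prior mean
x_true = np.array([0.3, -0.2, 0.1])
y = H @ x_true + np.sqrt(r) * rng.standard_normal(M)

# Information-form update: only an n x n inverse, independent of M.
Pinv_post = np.linalg.inv(P) + H.T @ H / r
P_post = np.linalg.inv(Pinv_post)
x_post = x + P_post @ (H.T @ (y - H @ x) / r)
```

The standard-form gain would require inverting the M×M innovation covariance; the information form reduces this to the state dimension, which is what makes field-valued (e.g., whole-image) measurements feasible.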

Distributed State Estimation

In networked systems with nonlinear, unmodeled (or uncertain) plant terms F(x_k, k), an extended-state distributed Kalman filter (ESDKF) is formed by augmenting the state with F, linearizing the extended dynamics, and executing local Kalman updates and graph-based fusion steps (He et al., 2018). This construction provides credible real-time bounds on estimation error and guarantees global consistency and boundedness under collective observability and time-varying network connectivity (He et al., 2018).
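A minimal single-node sketch of the extended-state construction follows; the dynamics, noise levels, and plain-averaging fusion rule are illustrative placeholders standing in for the paper's graph-based fusion step, not values from the cited work.

```python
import numpy as np

# Sketch: augment the state with the unknown term F(x_k, k), model it as a
# random walk, and run a Kalman recursion on the extended state.
n = 2
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])

# Extended model: z = [x; F],  x_{k+1} = A x_k + F_k,  F_{k+1} = F_k + w_k
A_ext = np.block([[A, np.eye(n)],
                  [np.zeros((n, n)), np.eye(n)]])
C_ext = np.hstack([np.eye(n), np.zeros((n, n))])   # node measures x directly
Q_ext = np.diag([1e-4, 1e-4, 1e-2, 1e-2])          # larger noise lets F adapt
R = 0.01 * np.eye(n)

def local_step(z, P, y):
    """One local time-and-measurement update on the extended state."""
    z = A_ext @ z
    P = A_ext @ P @ A_ext.T + Q_ext
    S = C_ext @ P @ C_ext.T + R
    K = P @ C_ext.T @ np.linalg.inv(S)
    z = z + K @ (y - C_ext @ z)
    P = (np.eye(2 * n) - K @ C_ext) @ P
    return z, P

def fuse(estimates):
    """Placeholder graph fusion: plain averaging of neighbor estimates."""
    return np.mean(estimates, axis=0)
```

Because F is carried as part of the state, its estimate adapts online to slowly varying unmodeled dynamics, which is the essential mechanism behind the extended-state method.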

4. Practical Extensions: Robustness, Tuning, and Algorithmic Variants

EKF-inspired techniques have spawned a range of practically motivated augmentations:

  • Joseph Form Update: To preserve numerical stability and positive-definiteness of covariances, the Joseph stabilized covariance update is essential in finite precision implementations (Nielsen et al., 2022).
  • Adaptive Tuning: Adaptive schemes empirically tabulate process and measurement noise covariances as functions of operating condition and update them online, yielding dramatic improvements—for instance, up to 85% RMSE reduction in battery SOC estimation under realistic drive cycles (Knox et al., 2023).
  • Robust Filtering: Integration of robust M-estimators (e.g., Huber cost with IRLS minimization) into the EKF update step yields enhanced resilience to outliers and leverage points in measurement data, with empirically validated improvements in dynamic state estimation for power systems (Netto et al., 2021).
  • Algorithmic Variants: The error-state (ESKF), iterated (IEKF), and iterated error-state (IESKF) Kalman filters improve convergence, numerical stability, and handling of strong nonlinearities via repeated linearization or error propagation in tangent spaces, especially in SLAM and inertial navigation contexts (Im, 2024).

5. Numerical Experiments and Comparative Performance

Applied to the modified four-tank benchmark—comprising both ODE and SDE disturbance subsystems—the continuous–discrete EKF achieves sub-3% mean absolute percentage error (MAPE) for plant states and 15.7% for unmeasured disturbances, operating at modest computational cost (time update 0.309 s, measurement update 0.0122 s over 120 steps). In contrast, the unscented Kalman filter (UKF), ensemble Kalman filter (EnKF), and particle filter (PF) produce comparable or marginally better errors but with substantially higher computational time (e.g., PF with 1000 particles achieves state MAPE 2.40% at much greater cost). The Joseph update improves robustness by preventing covariance loss of positive semidefiniteness during floating-point computation (Nielsen et al., 2022).

6. Limitations and Theoretical Guarantees

EKF-inspired estimators inherit the limitations of local linearization: performance and stability degrade under strong nonlinearity or insufficient excitation. The asymptotic error performance and confidence in the estimated covariance depend on the validity of the linearized model, noise assumptions, and proper numerical implementation. However, under mild structural observability and boundedness, theoretical guarantees are available for stability, bounded covariance, and convergence of the extended/filtering state in distributed and adaptive contexts (Nielsen et al., 2022, He et al., 2018, Knox et al., 2023). Extensions to infinite-dimensional and geometric settings have justified and generalized the EKF update step, leveraging the underlying structure of the system state space (Varley et al., 23 Sep 2025, Ollivier, 2019).


Key References:

  • "State Estimation Methods for Continuous-Discrete Nonlinear Systems involving Stochastic Differential Equations" (Nielsen et al., 2022)
  • "The Extended Kalman Filter is a Natural Gradient Descent in Trajectory Space" (Ollivier, 2019)
  • "An Extended Kalman Filter for Systems with Infinite-Dimensional Measurements" (Varley et al., 23 Sep 2025)
  • "Distributed Kalman Filter for A Class of Nonlinear Uncertain Systems: An Extended State Method" (He et al., 2018)
  • "Notes on Kalman Filter (KF, EKF, ESKF, IEKF, IESKF)" (Im, 2024)
  • "Advancing state estimation for lithium-ion batteries with hysteresis: systematic extended Kalman filter tuning" (Knox et al., 2023)
  • "A robust extended Kalman filter for power system dynamic state estimation using PMU measurements" (Netto et al., 2021)
