Stealthy Coverage Control for UAV 3D Recon

Updated 7 February 2026
  • Stealthy coverage control is a semi-autonomous paradigm that blends human teleoperation with UAV optimization to achieve precise 3D reconstruction.
  • It utilizes a nested-loop quadratic programming approach to balance operator commands and information gain while enforcing motion stealth constraints.
  • Empirical results demonstrate improved reconstruction completeness, reduced RMSE, and lower operator workload in complex simulated environments.

Stealthy coverage control is a semi-autonomous image sampling paradigm designed to enhance real-time 3D reconstruction by fusing human operator navigation with autonomous coverage optimization, subject to motion “stealthiness” constraints. It specifically addresses the challenge of reconstructing complex structures where the spatially-varying image sampling density required for accurate modeling is not known a priori, leveraging the operator’s situational reasoning while retaining autonomous efficiency (Terunuma et al., 31 Jan 2026).

1. Mathematical Formulation and Problem Setting

The workspace $D \subset \mathbb{R}^3$ contains the objects to be reconstructed. A single UAV (quadrotor) equipped with an RGB–depth camera has state $x(t) \in \mathbb{R}^3$ (position) and $R(t) \in SO(3)$ (orientation). The operator provides commanded velocity $v_h(t) \in \mathbb{R}^3$ and yaw rate $\omega_h(t) \in \mathbb{R}$; the system discretizes $D$ into voxels $\{c_j\}_{j=1}^M$, each with a visibility probability $p_j(t)$.

The information gain for a new observation from $(x, R)$ is defined as

$$I(x, R) = \sum_{j=1}^M [1 - p_j(t)]\, \rho_j(x, R),$$

where $\rho_j(x, R) \in [0,1]$ indicates whether voxel $j$ is visible from $(x, R)$.
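The gain metric is a straightforward weighted sum over the voxel grid. A minimal numpy sketch (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def information_gain(p, rho):
    """I(x,R) = sum_j (1 - p_j) * rho_j(x,R) over the M voxels.

    p   : current visibility probabilities p_j, shape (M,)
    rho : visibility indicators rho_j(x,R) in [0,1] for the candidate
          pose, shape (M,)
    """
    return float(np.sum((1.0 - p) * rho))

p = np.array([0.9, 0.2, 0.0, 1.0])    # mostly-seen, partly-seen, unseen, done
rho = np.array([1.0, 1.0, 0.0, 1.0])  # which voxels the candidate pose sees
gain = information_gain(p, rho)       # 0.1 + 0.8 + 0.0 + 0.0 = 0.9
```

Note that already-covered voxels ($p_j \approx 1$) contribute nothing, so the gain naturally steers sampling toward unseen surfaces.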

The control objective at each time step $\Delta t$ is to compute $u(t) = [v(t); \omega(t)]$ that

  • Remains close to operator intent,
  • Maximizes information gain,
  • Enforces UAV dynamics and the stealth constraint.

The core optimization is

$$
\begin{aligned}
u^\star = \arg\min_{u=[v;\omega]} \quad & \frac{1}{2} \|v - v_h\|^2_{W_v} + \frac{1}{2}\|\omega - \omega_h\|^2_{W_\omega} - \lambda\, I\big(x + v\Delta t,\; R e^{\hat\omega\Delta t}\big) \\
\text{s.t.} \quad & x(t+\Delta t) = x(t) + v\Delta t, \quad R(t+\Delta t) = R(t) e^{\hat\omega\Delta t}, \\
& \|v\| \leq v_{\max}, \quad |\omega| \leq \omega_{\max}, \\
& h_{\rm stealth}(x,R) \geq 0,
\end{aligned}
$$

with $W_v, W_\omega \succ 0$ weighting human-input tracking and $\lambda > 0$ trading off exploration against teleoperation. The stealth constraint $h_{\rm stealth}$ (see Section 4) shapes the allowed set of UAV maneuvers.
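When $I$ is linearized around the current pose, the QP above has a simple approximate solution: track the operator command, pull toward the gain gradient, and project onto the speed limits. The sketch below is an illustration under that linearization assumption, not the paper's solver; all parameter values are made up:

```python
import numpy as np

def stealthy_qp_step(v_h, omega_h, grad_I_x, grad_I_yaw,
                     W_v, W_omega, lam, v_max, omega_max):
    """One linearized step of the coverage QP.

    With I linearized, the unconstrained minimizer is
    v = v_h + lam * W_v^{-1} grad_x I (and similarly for yaw);
    the norm bounds are then enforced by projection/clipping.
    """
    v = v_h + lam * np.linalg.solve(W_v, grad_I_x)
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed                    # project onto ||v|| <= v_max
    omega = omega_h + lam * grad_I_yaw / W_omega
    omega = float(np.clip(omega, -omega_max, omega_max))
    return v, omega

# Example: operator flies forward while the gain gradient points sideways.
v, omega = stealthy_qp_step(
    v_h=np.array([1.0, 0.0, 0.0]), omega_h=0.0,
    grad_I_x=np.array([0.0, 2.0, 0.0]), grad_I_yaw=0.5,
    W_v=np.eye(3), W_omega=1.0, lam=0.2,
    v_max=1.2, omega_max=np.deg2rad(30.0))
```

A larger $\lambda$ biases the commanded motion further away from the operator's input and toward coverage; the stealth constraint (Section 4) would additionally damp this pull, which this sketch omits.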

2. Human Intention Modeling and Situational Recognition

Human intention is encoded in the commanded pair $(v_h, \omega_h)$. To ensure stable and intuitive interface dynamics between human and UAV, the system incorporates a passivity-based control interface as in [Atman et al. 2019], stabilizing the coupled arm–eye and drone system.

Situational recognition is formalized by a mapping

$$\Psi: \{\text{current 3D map}\} \rightarrow \{v_h, \omega_h\},$$

where $\Psi$ functions as a state machine that reacts to coverage progress (e.g., when $I(x, R)$ drops below a threshold) by suggesting new semantic waypoints or override velocities to the operator. This mechanism allows adaptive macro-navigation through complex environments using high-level human guidance.
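The state-machine behavior of $\Psi$ can be sketched in a few lines. This is a hypothetical illustration (the function name, waypoint representation, and threshold value are assumptions; the paper only specifies the trigger condition on information gain):

```python
import numpy as np

def next_suggestion(info_gain, threshold, waypoints, idx):
    """Sketch of Psi as a state machine: when predicted information
    gain at the current pose falls below the threshold, coverage is
    locally exhausted, so advance to the next semantic waypoint;
    otherwise keep suggesting the current target."""
    if info_gain < threshold and idx + 1 < len(waypoints):
        idx += 1
    return waypoints[idx], idx

# Two semantic waypoints in the workspace (illustrative coordinates).
waypoints = [np.array([0.0, 0.0, 2.0]), np.array([5.0, 5.0, 2.0])]
target, idx = next_suggestion(info_gain=0.1, threshold=0.5,
                              waypoints=waypoints, idx=0)
```

The suggested target would then be converted into an override velocity presented to the operator, keeping the human in the loop for macro-navigation decisions.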

3. Algorithm Structure and Control Decoupling

Stealthy coverage control employs a nested-loop architecture:

  • The outer loop filters and applies operator commands $(v_h, \omega_h)$ at low gain for safety.
  • The inner loop formulates and solves the QP for $u^\star$ as described above, combining operator intent, current map information gain, and the stealth constraint.

Pseudocode outline (per $\Delta t$ cycle):

1: measure current state (x, R) and map {p_j}
2: read human command (v_h, ω_h)
3: form predicted information-gain function I(x + vΔt, R·e^{ω̂Δt})
4: solve QP: minimize ½‖v − v_h‖²_{W_v} + ½‖ω − ω_h‖²_{W_ω} − λI subject to dynamics & stealth
5: apply control u* = [v*; ω*] to the UAV
6: update visibility probabilities p_j using the new image
7: if Σ_j (1 − p_j) is small ⇒ trigger next waypoint via Ψ
8: repeat

This structure decouples the human’s macro-level “where to go next” from the UAV’s micro-level “how to sample optimally and stealthily,” allowing simultaneous exploitation of human insight and autonomous local optimization.
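Step 6 of the loop updates the per-voxel visibility probabilities. The paper does not state the update rule; the sketch below uses an independent-observation assumption (a voxel remains unseen only if it was unseen before and is missed again), which is one common choice, not the authors' stated method:

```python
import numpy as np

def update_visibility(p, rho):
    """Independent-observation update (an assumption, not the paper's
    stated rule): p_j <- 1 - (1 - p_j)(1 - rho_j), so any positive
    visibility rho_j monotonically increases p_j toward 1."""
    return 1.0 - (1.0 - p) * (1.0 - rho)

p = np.array([0.0, 0.5, 0.9])                      # before the new image
p_new = update_visibility(p, np.array([1.0, 1.0, 0.0]))  # [1.0, 1.0, 0.9]
```

Under this rule the residual $\sum_j (1 - p_j)$ used in step 7 is non-increasing, so the waypoint trigger fires monotonically as coverage accumulates.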

4. Fundamental Equations and Stealth Constraint

The information-gain metric (voxelized) is

$$I(x, R) = \sum_{j=1}^M [1 - p_j]\, \rho_j(x, R).$$

An alternative view-entropy metric is

$$H(x, R) = -\sum_{\text{pixels } i} q_i(x, R) \log q_i(x, R),$$

where $q_i$ is each pixel's normalized expected entropy.
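The view-entropy alternative is a standard Shannon entropy over the normalized pixel distribution. A minimal sketch (illustrative values; the $0 \log 0 := 0$ convention is assumed):

```python
import numpy as np

def view_entropy(q):
    """H(x,R) = -sum_i q_i log q_i over pixels, with 0*log(0) := 0."""
    q = np.asarray(q, dtype=float)
    nz = q > 0.0
    return float(-np.sum(q[nz] * np.log(q[nz])))

# A uniform distribution over 4 pixels maximizes H at log(4);
# a view concentrated on one pixel gives H = 0.
H_uniform = view_entropy(np.full(4, 0.25))
H_peaked = view_entropy(np.array([1.0, 0.0, 0.0, 0.0]))
```

High-entropy views spread expected information across the image, so maximizing $H$ favors poses that observe many uncertain regions at once.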

The stealth constraint is

$$h_{\rm stealth}(x, R) = \alpha - \|\dot R\| - \gamma \|\nabla_x I(x, R)\| \geq 0,$$

with $\alpha$, $\gamma$ tunable parameters, limiting aggressive rotations and rapid movement in directions of high information gain. This constraint "smooths" UAV motion, suppressing conspicuously active maneuvers during coverage.
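Evaluating the constraint is a one-line feasibility check. The sketch below uses the paper's reported parameter values ($\alpha = 0.1$, $\gamma = 0.5$, Section 5); the sample rotation-rate and gradient norms are made up for illustration:

```python
def h_stealth(R_dot_norm, grad_I_norm, alpha=0.1, gamma=0.5):
    """h = alpha - ||R_dot|| - gamma * ||grad_x I||.
    The maneuver is stealth-feasible iff h >= 0."""
    return alpha - R_dot_norm - gamma * grad_I_norm

# Gentle rotation near moderate gain: feasible (h = 0.03 > 0).
ok = h_stealth(R_dot_norm=0.02, grad_I_norm=0.1)
# Fast rotation near the same gain: infeasible (h = -0.15 < 0).
bad = h_stealth(R_dot_norm=0.2, grad_I_norm=0.1)
```

Note the coupling: the larger $\|\nabla_x I\|$ is at the current pose, the smaller the rotation budget, which is exactly what suppresses conspicuous "darting" toward informative regions.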

A linearized closed-form feedback law for the combined control can be written as

$$
\begin{bmatrix} v \\ \omega \end{bmatrix}
= \begin{bmatrix} W_v^{-1} & 0 \\ 0 & W_\omega^{-1} \end{bmatrix}
\left( W u_h + \lambda \nabla_{(x,R)} I(x,R) \right)
- \Gamma(x,R)\, \eta_{\rm stealth},
$$

where $\Gamma \eta_{\rm stealth}$ is a barrier-function term ensuring feasibility under $h_{\rm stealth} \geq 0$.

5. Simulation Protocol and Performance Metrics

The evaluation environment is a $10\,\mathrm{m} \times 10\,\mathrm{m} \times 5\,\mathrm{m}$ simulated warehouse with four known-geometry objects. System parameters include $\Delta t = 0.1$ s, $v_{\max} = 1.2$ m/s, $\omega_{\max} = 30^\circ$/s, $\lambda = 2.0$, $\alpha = 0.1$, and $\gamma = 0.5$.

Three primary performance metrics are used:

  • Reconstruction completeness $C = \frac{\#\,\text{visible voxels}}{\#\,\text{total surface voxels}}$
  • Reconstruction accuracy $E_{\rm RMSE}$ (point-to-mesh RMSE in meters)
  • Human workload (integral of $\|u_h - u\|$)
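The three metrics above are straightforward to compute from logged data. A minimal sketch (function names and sample values are illustrative; the workload integral is approximated by a Riemann sum over the control log, an assumption about the discretization):

```python
import numpy as np

def completeness(visible_voxels, total_surface_voxels):
    """C = (# visible voxels) / (# total surface voxels)."""
    return visible_voxels / total_surface_voxels

def rmse(distances):
    """Point-to-mesh RMSE in meters over sampled reconstruction points."""
    d = np.asarray(distances, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

def workload(u_h_log, u_log, dt):
    """Riemann-sum approximation of the integral of ||u_h - u||."""
    return float(np.sum(np.linalg.norm(u_h_log - u_log, axis=1)) * dt)

C = completeness(911, 1000)                              # 0.911
E = rmse([0.03, 0.04])                                   # ~0.0354 m
W = workload(np.ones((3, 4)), np.zeros((3, 4)), dt=0.1)  # 3 steps * 2 * 0.1
```

Completeness and RMSE measure the output model, while workload measures how much the autonomy had to deviate from the operator; a lower workload means the applied control tracked the human command more closely.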

The protocol consists of 10 trials (random seeds), each with 300 s of simulated teleoperation, comparing:

  • Stealthy coverage control (“SCC”)
  • Standard human + coverage control baseline (no stealth QP constraint)

6. Quantitative Results

Empirical findings are summarized as follows (mean $\pm$ standard deviation, $n = 10$):

| Method | Completeness $C$ [%] | RMSE $E_{\rm RMSE}$ [m] |
|---|---|---|
| Baseline | $78.3 \pm 4.7$ | $0.054 \pm 0.009$ |
| SCC (ours) | $\mathbf{91.1 \pm 2.8}$ | $\mathbf{0.032 \pm 0.005}$ |

A paired $t$-test yields $p < 0.01$ on both metrics, indicating statistical significance. Additionally, the average deviation between operator and applied controls ($\|u_h - u\|$) decreases by 35%, reflecting a reduction in operator workload (Terunuma et al., 31 Jan 2026).

7. Contributions and Future Outlook

Stealthy coverage control introduces a unified quadratic programming framework that synergistically blends human teleoperation, real-time 3D reconstruction coverage, and a novel stealth constraint. The modular human-in-the-loop approach ensures stability and safety via passivity, while a situational recognition module triggers automatic waypoint generation, facilitating efficient and adaptive coverage. The central mechanism is the decoupling of human-directed macro-navigation from stealthy and information-driven micro-sampling, implemented via null-space/QP projection.

Simulations demonstrate that stealthy coverage control achieves over 15% improvement in reconstruction completeness, halves RMSE, and significantly lowers operator workload, with results robust to random seed variations. The methodology is extensible to multi-UAV systems or advanced human-assistant paradigms, maintaining the foundational stealth property. These findings substantiate stealthy coverage control as an effective integration of human expertise and autonomous active sensing in real-time 3D reconstruction for constrained or complex settings (Terunuma et al., 31 Jan 2026).
