Stealthy Coverage Control for UAV 3D Recon
- Stealthy coverage control is a semi-autonomous paradigm that blends human teleoperation with UAV optimization to achieve precise 3D reconstruction.
- It utilizes a nested-loop quadratic programming approach to balance operator commands and information gain while enforcing motion stealth constraints.
- Empirical results demonstrate improved reconstruction completeness, reduced RMSE, and lower operator workload in complex simulated environments.
Stealthy coverage control is a semi-autonomous image sampling paradigm designed to enhance real-time 3D reconstruction by fusing human operator navigation with autonomous coverage optimization, subject to motion “stealthiness” constraints. It specifically addresses the challenge of reconstructing complex structures where the spatially-varying image sampling density required for accurate modeling is not known a priori, leveraging the operator’s situational reasoning while retaining autonomous efficiency (Terunuma et al., 31 Jan 2026).
1. Mathematical Formulation and Problem Setting
The workspace contains the objects to be reconstructed. A single UAV (quadrotor) equipped with an RGB–depth camera has state x (position) and R (orientation). The operator provides commanded velocity v_h and yaw rate ω_h; the system discretizes the workspace into voxels indexed by j, each with a visibility probability p_j ∈ [0, 1].
The information gain for a new observation from (x, R) is defined as I(x, R) = Σ_j v_j(x, R)(1 − p_j), where v_j(x, R) ∈ {0, 1} indicates whether voxel j is visible from (x, R).
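As a minimal numeric sketch of this metric (assuming the boolean visibility mask v_j is computed elsewhere, e.g. by ray casting against the current map), the voxelized information gain reduces to a masked sum:

```python
import numpy as np

def information_gain(p, visible):
    """I(x, R) = sum_j v_j(x, R) * (1 - p_j).

    p       : (M,) array of per-voxel visibility probabilities in [0, 1]
    visible : (M,) boolean mask v_j(x, R), True if voxel j is seen from (x, R)
    """
    return float(np.sum(visible * (1.0 - p)))

# Voxels that are already well observed (p_j near 1) contribute little gain.
p = np.array([0.9, 0.1, 0.5, 0.0])
mask = np.array([True, True, False, True])
print(information_gain(p, mask))  # 0.1 + 0.9 + 1.0 = 2.0
```

Viewpoints that see many still-uncertain voxels therefore score highest, which is what the QP's information term rewards.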
The control objective at each time step is to compute a control input u = [v; ω] that
- Remains close to operator intent,
- Maximizes information gain,
- Enforces UAV dynamics and the stealth constraint.
The core optimization is

min_{v, ω}  ½‖v − v_h‖² + ½‖ω − ω_h‖² − λ I(x + vΔt, R·e^{ωΔt})  subject to UAV dynamics and the stealth constraint,

with the quadratic terms weighting human input tracking; λ > 0 trades off exploration vs. teleoperation. The "stealth" constraint (see Section 4) shapes the allowed set of UAV maneuvers.
2. Human Intention Modeling and Situational Recognition
Human intention is encoded in the commanded pair (v_h, ω_h). To ensure stable and intuitive interface dynamics between human and UAV, the system incorporates a passivity-based control interface as in [Atman et al. 2019], stabilizing the coupled operator–drone system.
Situational recognition is formalized by a mapping Ψ from the current map and coverage state to navigation suggestions, where Ψ functions as a state machine that reacts to coverage progress (e.g., when the residual uncertainty Σ_j (1 − p_j) drops below a threshold) by suggesting new semantic waypoints or override velocities to the operator. This mechanism allows adaptive macro-navigation through complex environments using high-level human guidance.
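A minimal sketch of such a state machine follows; the fixed waypoint list and the threshold eps are illustrative assumptions, not values from the paper:

```python
class SituationRecognizer:
    """Toy state machine for the mapping Ψ: advances through a list of
    semantic waypoints whenever the residual uncertainty sum_j(1 - p_j)
    over the voxels of the current region drops below a threshold eps."""

    def __init__(self, waypoints, eps=0.5):
        self.waypoints = list(waypoints)
        self.eps = eps
        self.idx = 0

    def suggest(self, p):
        """p: per-voxel visibility probabilities of the active region."""
        residual = sum(1.0 - pj for pj in p)
        if residual < self.eps and self.idx < len(self.waypoints) - 1:
            self.idx += 1  # region sufficiently covered: move on
        return self.waypoints[self.idx]

psi = SituationRecognizer([(0, 0, 2), (5, 0, 2), (5, 5, 2)])
print(psi.suggest([0.2, 0.3]))   # residual 1.5 >= eps, stay at (0, 0, 2)
print(psi.suggest([0.9, 0.95]))  # residual 0.15 < eps, advance to (5, 0, 2)
```

The operator remains free to ignore the suggestion; Ψ only proposes macro-level goals.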
3. Algorithm Structure and Control Decoupling
Stealthy coverage control employs a nested-loop architecture:
- The outer loop filters and applies operator commands at low gain for safety.
- The inner loop formulates and solves the QP for as described above, combining operator intent, current map information gain, and the stealth constraint.
Pseudocode outline (per cycle):
1: measure current state (x, R) and map {p_j}
2: read human command (v_h, ω_h)
3: form predicted information-gain function I(x + vΔt, R·e^{ωΔt})
4: solve QP: minimize ½‖v − v_h‖² + ½‖ω − ω_h‖² − λI subject to dynamics & stealth
5: apply control u* = [v*; ω*] to UAV
6: update occupancy probabilities p_j using new image
7: if Σ_j (1 − p_j) small ⇒ trigger next waypoint via Ψ
8: repeat
This structure decouples the human’s macro-level “where to go next” from the UAV’s micro-level “how to sample optimally and stealthily,” allowing simultaneous exploitation of human insight and autonomous local optimization.
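The inner-loop solve of step 4 admits a simple closed form if the information gain is linearized around the current state and the dynamics constraint is reduced to a speed cap; the sketch below makes both simplifying assumptions (the gradient grad_I is assumed supplied by the mapping module):

```python
import numpy as np

def blended_velocity(v_h, grad_I, lam, v_max):
    """One linearized inner-loop step (sketch, not the paper's full QP).

    Minimizing 0.5*||v - v_h||^2 - lam * grad_I . v  gives the closed form
    v = v_h + lam * grad_I; the dynamics constraint is modeled only as a
    speed cap ||v|| <= v_max.
    """
    v = v_h + lam * grad_I
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)  # rescale onto the feasible speed ball
    return v

v = blended_velocity(np.array([1.0, 0.0, 0.0]),  # operator command v_h
                     np.array([0.0, 1.0, 0.0]),  # info-gain gradient (assumed)
                     lam=0.5, v_max=2.0)
print(v)  # [1.0, 0.5, 0.0]: operator intent nudged toward information gain
```

With λ = 0 the UAV tracks the operator exactly; larger λ biases motion toward unobserved regions, which is precisely the exploration/teleoperation trade-off of the QP.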
4. Fundamental Equations and Stealth Constraint
The information-gain metric (voxelized) is

I(x, R) = Σ_j v_j(x, R)(1 − p_j).

An alternative view-entropy metric is

H(x, R) = Σ_i h_i(x, R),

where h_i is each pixel’s normalized expected entropy.
The stealth constraint bounds a weighted combination of the UAV's rotational speed and its translational speed along directions of high information gain, with tunable parameters limiting aggressive rotations and rapid movement toward high-gain regions. This constraint "smooths" UAV motion, suppressing conspicuously active maneuvers during coverage.
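One way to realize such a constraint (an illustrative construction, not necessarily the paper's exact formulation) is to cap the velocity component along the normalized information-gain gradient, with s_max a hypothetical tunable bound:

```python
import numpy as np

def stealth_project(v, grad_I, s_max):
    """Project v so its component along the normalized information-gain
    gradient does not exceed s_max, suppressing conspicuous dashes toward
    high-gain directions while leaving orthogonal motion untouched."""
    g_norm = np.linalg.norm(grad_I)
    if g_norm < 1e-12:
        return v  # no informative direction: nothing to constrain
    g_hat = grad_I / g_norm
    along = float(np.dot(v, g_hat))
    if along > s_max:
        v = v - (along - s_max) * g_hat  # remove only the excess component
    return v

v = stealth_project(np.array([2.0, 0.0]), np.array([1.0, 0.0]), s_max=0.5)
print(v)  # [0.5, 0.0]
```

Because only the gradient-aligned component is clipped, the UAV still reaches high-gain regions, just without the abrupt, "eager" motions the stealth property is meant to suppress.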
A linearized closed-form feedback law for the combined control augments the nominal human-tracking term with an information-gain gradient term and a barrier-function term ensuring feasibility under the stealth constraint.
5. Simulation Protocol and Performance Metrics
The evaluation environment is a simulated warehouse with four known-geometry objects. System parameters include the control period Δt [s], the maximum commanded speed [m/s], the maximum yaw rate [deg/s], the QP weight λ, and the stealth-constraint parameters.
Three primary performance metrics are used:
- Reconstruction completeness
- Reconstruction accuracy (point-to-mesh RMSE in meters)
- Human workload (time integral of the deviation ‖u* − u_h‖ between applied and commanded controls)
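These metrics can be computed along the following lines (a sketch: the completeness threshold and the Riemann-sum workload approximation are assumptions, and point-to-mesh RMSE is omitted since it needs the mesh geometry):

```python
import numpy as np

def completeness(p, thresh=0.95):
    """Fraction of voxels considered reconstructed (p_j above a threshold)."""
    p = np.asarray(p)
    return float(np.mean(p >= thresh))

def workload(u_applied, u_human, dt):
    """Time integral of the deviation ||u* - u_h|| between applied and
    commanded controls, approximated by a Riemann sum over the trial."""
    dev = np.linalg.norm(np.asarray(u_applied) - np.asarray(u_human), axis=1)
    return float(np.sum(dev) * dt)

print(completeness([0.99, 0.5, 0.97, 1.0]))               # 0.75
print(workload([[1, 0], [1, 1]], [[1, 0], [0, 1]], 0.1))  # 0.1
```

A lower workload value means the applied controls stayed close to what the operator commanded, i.e. the autonomy intervened less.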
The protocol consists of 10 trials (random seeds), each with 300 s of simulated teleoperation, comparing:
- Stealthy coverage control (“SCC”)
- Standard human + coverage control baseline (no stealth QP constraint)
6. Quantitative Results
Empirical findings are summarized as follows (mean ± standard deviation, n = 10 trials):
| Method | Completeness [\%] | RMSE [m] |
|---|---|---|
| Baseline | | |
| SCC (ours) | | |
A paired t-test indicates statistical significance on both metrics. Additionally, the average deviation between operator and applied controls (‖u* − u_h‖) is decreased by 35%, reflecting a reduction in operator workload (Terunuma et al., 31 Jan 2026).
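The paired t-statistic used for such per-trial comparisons can be computed directly with the standard library (the sample values below are made up for illustration, not the paper's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t-statistic t = mean(d) / (sd(d) / sqrt(n)) with d_i = a_i - b_i,
    tested against a t-distribution with n - 1 degrees of freedom."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

# Hypothetical per-trial completeness scores, paired by random seed.
scc      = [0.91, 0.93, 0.90, 0.94, 0.92]
baseline = [0.78, 0.80, 0.79, 0.81, 0.77]
t = paired_t(scc, baseline)
print(round(t, 2))  # compare against the critical value for n - 1 = 4 dof
```

Pairing by seed removes between-trial variance, which is why the test is appropriate for this protocol.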
7. Contributions and Future Outlook
Stealthy coverage control introduces a unified quadratic programming framework that synergistically blends human teleoperation, real-time 3D reconstruction coverage, and a novel stealth constraint. The modular human-in-the-loop approach ensures stability and safety via passivity, while a situational recognition module triggers automatic waypoint generation, facilitating efficient and adaptive coverage. The central mechanism is the decoupling of human-directed macro-navigation from stealthy and information-driven micro-sampling, implemented via null-space/QP projection.
Simulations demonstrate that stealthy coverage control achieves over 15% improvement in reconstruction completeness, halves RMSE, and significantly lowers operator workload, with results robust to random seed variations. The methodology is extensible to multi-UAV systems or advanced human-assistant paradigms, maintaining the foundational stealth property. These findings substantiate stealthy coverage control as an effective integration of human expertise and autonomous active sensing in real-time 3D reconstruction for constrained or complex settings (Terunuma et al., 31 Jan 2026).