
Video See-Through Display: Hybrid AR/VR Analytics

Updated 9 February 2026
  • Video See-Through Display is a technology that fuses virtual images and physical spaces using AR/VR interfaces and situated brushing and linking.
  • It employs hybrid systems that register digital data with real-world referents, enabling interactive selection and real-time visual feedback.
  • Empirical studies show that tailored highlight techniques enhance accuracy and reduce cognitive workload in spatial analytics tasks.

Situated brushing and linking is an extension of the classic brushing-and-linking paradigm in visual analytics, where coordinated selections propagate between linked 2D views, into hybrid environments that incorporate both virtual visualizations and real-world physical referents. This approach enables analysts to interactively filter or select data in a situated digital view (such as a handheld scatterplot rendered on a tablet) and have these selections immediately reflected as attention-guiding highlights or annotations on corresponding objects within the surrounding environment—whether virtual as in VR, or physical as seen through AR. Recent work systematically investigates the technical implementation, attention-guidance strategies, and empirical evaluation of highlighting techniques that make situated brushing and linking effective in complex 3D or real-world settings (Doerr et al., 2024, Quijano-Chavez et al., 2 Feb 2026).

1. Conceptual Foundations and Formalization

The conceptual core of brushing and linking involves a direct mapping between data-driven selections in one view (the brush) and visual indication of related elements in another (the link). In classic 2D analytics, this is realized by manipulating selection state lattices across coordinated multiple views, often using explicit selection propagation based on data keys.

"Linked visualisations via Galois dependencies" formalizes this process via bidirectional dynamic program slicing techniques grounded in Galois connection theory (Perera et al., 2021). In this construction:

  • The selection state for each view is modeled as an element in a lattice Sel(v), supporting meet/join.
  • Brushing invokes a backward slice from view 1 to shared data (evalBwd), which then propagates forward into view 2 using a De Morgan dual (¬f₂(¬d)), ensuring that links highlight exactly those elements whose rendering depends on the brushed data subset.
  • This formalism ensures round-trip soundness via the Galois connection: backward-over-forward overapproximates, forward-over-backward underapproximates.
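This propagation model can be sketched in a few lines. The following is a minimal illustration in Python (chosen only for readability; the formalization itself is language-agnostic), modeling selections as plain sets, i.e., the powerset lattice where meet is intersection and join is union. All mark names and dependency tables are illustrative, not taken from the paper.

```python
# Minimal sketch of brushing-and-linking propagation in the Galois-connection
# style: backward slice from the brushed view to data, then a De Morgan-dual
# forward slice into the linked view. All names/tables are illustrative.

DATA = {"d1", "d2", "d3", "d4"}

# Data keys each mark's rendering depends on, per view (illustrative).
deps_view1 = {"pt1": {"d1"}, "pt2": {"d2"}}
deps_view2 = {"bar_a": {"d1", "d2"}, "bar_b": {"d2", "d3"}, "bar_c": {"d4"}}

def eval_bwd(brushed_marks, deps):
    """Backward slice: view selection -> the data keys it depends on."""
    out = set()
    for m in brushed_marks:
        out |= deps[m]
    return out

def fwd(d, deps):
    """Forward slice: marks whose dependencies all lie inside d."""
    return {m for m, ks in deps.items() if ks <= d}

def link(d, deps, data=DATA):
    """De Morgan dual ¬f(¬d): marks depending on ANY brushed datum."""
    return set(deps) - fwd(data - d, deps)

# Brushing pt2 in view 1 highlights every view-2 mark touching d2.
brushed = eval_bwd({"pt2"}, deps_view1)   # {"d2"}
print(sorted(link(brushed, deps_view2)))  # ['bar_a', 'bar_b']
```

Note how the dual `¬f(¬d)` selects marks that depend on *any* brushed datum, whereas the plain forward slice `f(d)` would demand that *all* of a mark's dependencies be brushed—exactly the distinction the Galois construction makes precise.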

Situated brushing and linking adopts this formal propagation model but adapts its endpoints: one view remains an abstract data visualization, while the other is anchored to physical or spatial referents.

2. System Architectures and Interaction Models

Situated brushing and linking systems typically feature:

  • A situated overview visualization (e.g., scatterplot, bar chart) displayed in a movable or handheld digital medium (such as a tablet in VR/AR).
  • An interaction modality for brushing, typically implemented via rectangle dragging or sequential tapping to select subsets of marks.
  • Linkage from selected marks to physical referents—e.g., products on supermarket shelves—using spatially registered highlighting overlays.

Recent implementations, built in Unity (using DXR or equivalent AR/VR backends), demonstrate this in both synthetic (VR) and real (AR) supermarket layouts (Doerr et al., 2024, Quijano-Chavez et al., 2 Feb 2026). The system captures the selected object IDs upon brushing, then applies a highlighting overlay positioned using the world-space registration between the visualization and the environment.

The interaction protocol supports single-selection (“find the one product...”), multi-selection (range filter yielding multiple items), and spatial judgment ("are the brushed products located in region X?"), allowing for rigorous evaluation of attention-guidance mechanisms.
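The core pipeline—capture brushed mark IDs, resolve them against the spatial registry, toggle a highlight overlay—can be sketched as follows. This is a Python sketch for illustration only (the cited systems are Unity-based); the class and field names are assumptions, not the papers' actual API.

```python
from dataclasses import dataclass

@dataclass
class Referent:
    """A spatially registered physical/virtual object (illustrative)."""
    obj_id: str
    world_pos: tuple   # (x, y, z) from the registration step
    highlighted: bool = False

class SituatedLinker:
    """Propagates brushes from the situated view to world referents."""
    def __init__(self, registry, technique="outline"):
        self.registry = registry      # obj_id -> Referent
        self.technique = technique    # "color", "outline", "link", "arrow", ...

    def on_brush(self, selected_ids):
        """Capture brushed IDs and toggle the highlight overlay."""
        for ref in self.registry.values():
            ref.highlighted = ref.obj_id in selected_ids
        return [r for r in self.registry.values() if r.highlighted]

shelf = {"soup": Referent("soup", (0.5, 1.2, 3.0)),
         "rice": Referent("rice", (1.4, 0.8, 3.0))}
linker = SituatedLinker(shelf, technique="link")
hits = linker.on_brush({"rice"})      # multi-selection would pass several IDs
print([r.obj_id for r in hits])       # ['rice']
```

Single-selection, multi-selection, and spatial-judgment tasks differ only in how many IDs the brush passes in and what the user does with the returned set.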

3. Visual Highlighting Techniques for Situated Linkage

Effective situated brushing and linking requires perceptually robust visual highlighting, particularly for guiding user attention over distances, through clutter, and across view frusta. Techniques evaluated in recent empirical work include:

| Technique | Visual Mechanism | Key Implementation Details |
|---|---|---|
| Color Cheating | Solid color (e.g., yellow) fill of object | Alpha blend: C_out = α·C_highlight + (1 − α)·C_orig, with α = 1 |
| Outline | Coloured contour around object silhouette | Screen-space polygon dilation (e.g., 2 px); preserves label/texture |
| Link (3D beam/Bézier curve) | Curved spline from brush to object | Catmull–Rom or cubic Bézier; variable thickness; fades outside FOV |
| Animated Outline | Outline with hue oscillating over time | H(t) = H₁ + (H₂ − H₁)·½·[1 + sin(2πt/T)] |
| Animated Arrow | Repeated “flying” 3D arrows from user to target | Pulsing tetrahedra, one per target, disappear/re-spawn cycle |
| Outline + Linking | Combination of animated outline and spatial link | Outline plus Bézier link; adapts to in/out-of-FOV targets |

Color Cheating maximizes saliency at the expense of identity; Outline preserves visual identity but may suffer from low contrast; 3D Links generalize spatial guidance, especially out of view; Animated Outlines and Arrows introduce temporal contrast or motion.

Techniques must be tuned for context: in AR, where physical scene cues are robust, simple outlines or color overlays suffice; in VR, links/animated outlines mitigate deficiencies in synthetic rendering.
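The alpha blend and hue oscillation behind the Color and Animated Outline techniques reduce to a couple of lines each; the sketch below (Python for illustration; parameter values are assumptions) makes the two formulas from the table concrete.

```python
import math

def blend(c_highlight, c_orig, alpha=1.0):
    """C_out = alpha * C_highlight + (1 - alpha) * C_orig, per channel."""
    return tuple(alpha * h + (1 - alpha) * o
                 for h, o in zip(c_highlight, c_orig))

def animated_hue(t, h1, h2, period):
    """H(t) = H1 + (H2 - H1) * 0.5 * [1 + sin(2*pi*t/T)]: hue cycles H1<->H2."""
    return h1 + (h2 - h1) * 0.5 * (1 + math.sin(2 * math.pi * t / period))

yellow, texture = (1.0, 1.0, 0.0), (0.3, 0.2, 0.1)
print(blend(yellow, texture, alpha=1.0))  # alpha = 1: texture fully replaced
print(animated_hue(t=0.25, h1=30, h2=90, period=1.0))  # sine peak: returns H2
```

With α = 1, Color Cheating discards the original texture entirely—which is exactly why it maximizes saliency but sacrifices object identity.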

4. Empirical Evaluation and Comparative Findings

Systematic user studies in both virtual and physical supermarket environments (VR: N=20–40; AR: N=40) analyzed highlighting techniques under single/multi-selection and spatial-judgment tasks using performance metrics including:

  • Linking time (T_link): time from brush completion to successful object localization.
  • Error count (E): number of incorrect physical referents selected.
  • Euclidean error distance (D): spatial distance between the selected and the correct item.
  • Subjective measures: NASA-TLX (cognitive workload), UMUX-Lite (usability), IPQ (spatial presence).
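
The objective metrics are straightforward to compute from trial logs; the log schema below is an illustrative assumption, not the studies' actual format.

```python
import math

# One illustrative log entry per trial: timestamps in seconds,
# positions in metres (schema assumed for illustration).
trials = [
    {"t_brush_done": 10.0, "t_localized": 12.4,
     "picked": (1.0, 0.8, 3.0), "target": (1.0, 0.8, 3.0)},
    {"t_brush_done": 31.0, "t_localized": 36.1,
     "picked": (0.4, 1.2, 3.0), "target": (0.5, 1.2, 3.0)},
]

def linking_time(tr):      # T_link
    return tr["t_localized"] - tr["t_brush_done"]

def is_error(tr):          # contributes to E when the wrong referent is picked
    return tr["picked"] != tr["target"]

def error_distance(tr):    # D: Euclidean distance picked -> correct
    return math.dist(tr["picked"], tr["target"])

mean_t_link = sum(map(linking_time, trials)) / len(trials)
error_rate = sum(map(is_error, trials)) / len(trials)
```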

Key findings (Doerr et al., 2024, Quijano-Chavez et al., 2 Feb 2026):

  • In VR, Color and Link yielded the fastest T_link; Arrow was slowest (3–4 s longer), and Outline degraded when objects lay outside the field of view.
  • Color minimized errors; Link/Arrow incurred triple the error rate, with error distances for Color/Outline >1 m (often due to mis-brushing), while Link/Arrow errors were mostly <0.5 m (confusion among similar items).
  • In AR, linking time was significantly lower than in VR (−2.16 s), and the error rate was halved (10.4% AR vs. 21.3% VR).
  • Outlines and Links were most preferred in AR, but in VR links became essential for out-of-view guidance.
  • Animated outlines achieved the lowest error rates, but introduced latency due to color transitions.
  • Subjective workload and enjoyment scores reflected these findings: Color and Link were rated lower in cognitive and physical demand; Arrow incurred most frustration.

Qualitative analysis highlights that high-contrast, static cues expedite search, while animated or motion-based cues may introduce clutter or require “waiting” strategies among users.

5. Design Guidelines and Technical Prescriptions

Synthesizing these empirical studies, several design principles emerge for situated brushing and linking in analytical systems (Doerr et al., 2024, Quijano-Chavez et al., 2 Feb 2026):

  1. Use static, high-contrast coloring when rapid, accurate target acquisition is paramount; de-emphasize label legibility if necessary.
  2. Employ outline contours—adjusting width/brightness—as a non-obstructive highlight for safety-critical or identity-sensitive contexts.
  3. Utilize 3D linking beams for out-of-view/occluded targets, dynamically adjusting link thickness with distance (t(d) ∝ √d) to preserve peripheral saliency.
  4. Avoid persistent animated cues (arrows) for multi-target scenarios; if used, throttle respawn to control clutter.
  5. Consider hybrid overlays (static outline plus on-demand link) for complex or high-target-count tasks.
  6. Dynamically switch highlighting modalities based on referent count, distance, and user workload.
  7. Implement recenter functions for links to avoid self-occlusion or overlap with visualization anchors.

In AR, prioritize hue-contrast between overlays and real-world textures; in VR, compensate for reduced visual realism with animated or composite cues.
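Guidelines 3 and 6 amount to a thickness schedule plus a small decision rule; the sketch below (Python for illustration) makes that concrete. The thresholds and base values are illustrative assumptions, not values prescribed by the studies.

```python
import math

def link_thickness(d, base=0.01):
    """Guideline 3: t(d) proportional to sqrt(d) keeps distant links
    visible in peripheral vision without overwhelming near ones."""
    return base * math.sqrt(d)

def choose_highlight(in_fov, distance_m, n_targets, platform="AR"):
    """Guideline 6 (illustrative thresholds): switch modality based on
    referent visibility, count, and platform."""
    if not in_fov:
        return "link"              # out-of-view targets need spatial guidance
    if n_targets > 5:
        return "outline"           # many targets: avoid animated clutter
    if platform == "VR" and distance_m > 4:
        return "outline+link"      # compensate for reduced visual realism
    return "color" if n_targets == 1 else "outline"

print(choose_highlight(in_fov=False, distance_m=2.0, n_targets=1))  # link
```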

6. Open Challenges and Theoretical Developments

The formal bidirectional slicing model of Perera et al. (2021) establishes that brushing and linking can be precisely captured with selection lattices and Galois connections, including support for De Morgan-dual linking and negation. However, several limitations persist:

  • Only first-order, effect-free data are supported; higher-order constructs (closures), mutable state, and path-condition reasoning are not addressed.
  • In situated settings, challenges remain in spatial registration fidelity, highlight occlusion management, and perceptual saliency under varying lighting/backgrounds.
  • Experimentally, results emphasize the importance of physical context (AR vs. VR) and the nuanced performance of techniques under diverse cognitive and spatial loads.

A plausible implication is that next-generation situated analytics will require adaptive, perception-driven attention-guidance strategies—potentially combining formal view-coordination semantics with perceptually optimized rendering pipelines and user modeling (Quijano-Chavez et al., 2 Feb 2026, Doerr et al., 2024).

7. Implications and Future Directions

Situated brushing and linking operationalizes a "physical-world second" approach to coordinated multiple views, translating abstract data interaction into tangible spatial feedback. Emerging design prescriptions advocate for pragmatically blending visualization techniques from desktop analytics with spatially-aware overlays and dynamically adaptive cues, guided by empirical evaluation under real and simulated conditions.

Research directions include: refining formal selection propagation for mutable/interactive environments, further leveraging real-world cues in AR to minimize cognitive load, developing scalable hybrid overlays, and generalizing perceptual adaptation algorithms to support diverse domains beyond retail analytics. These efforts will underpin situated analytics systems that enable seamless, accurate, and efficient navigation between digital abstractions and their real-world instantiations (Doerr et al., 2024, Quijano-Chavez et al., 2 Feb 2026, Perera et al., 2021).
