
SRL Proxemics: Spatially-Aware Robotic Limbs

Updated 7 February 2026
  • SRL Proxemics is a framework defining near-body zones and segment-specific robotic behaviors to enhance safety and trust.
  • It employs quantitative zone models and autonomy taxonomies to regulate hand, wrist, elbow, and shoulder actions relative to user proximity.
  • Empirical studies indicate that spatially calibrated, legible behaviors lead to improved user trust, comfort, and embodiment in collaborative tasks.

Supernumerary Robotic Limbs (SRLs) operate in near-body peripersonal space, presenting a unique set of challenges for embodied AI, human augmentation, and real-time interaction. SRL proxemics defines the spatial, behavioral, and autonomy guidelines governing how SRL segments coordinate with human users in these intimate spaces. Unlike traditional approaches that treat SRL autonomy as monolithic, SRL proxemics decomposes behavior into zone- and segment-specific rules that directly mediate perceived safety, trust, and embodiment (Zhou et al., 31 Jan 2026). This article reviews the theoretical foundations, quantitative frameworks, control architecture, empirical findings, and representative applications of SRL proxemics in human–robot interaction.

1. Motivation and Conceptual Foundations

SRLs are wearable robots (e.g., backpack-mounted extra limbs) that support task augmentation by operating within an individual's peripersonal (near-body) space. Because SRLs operate close to sensitive body zones (face, neck, torso, upper limbs), purely functional safety mechanisms (collision avoidance, torque limits) are insufficient: unanticipated or poorly signaled motions induce distrust, startle responses, and reduced comfort. Standard control taxonomies (manual, shared, or full autonomy) lack spatial granularity, failing to account for the fact that user comfort and consent depend strongly on where the SRL is operating relative to the body (Zhou et al., 31 Jan 2026).

SRL proxemics draws from classic proxemics research (Hall, 1966), which establishes that humans segment interaction space into concentric zones of differing sensitivity. The SRL proxemics framework extends this to the robot–human context, introducing user-derived zone boundaries and segment-wise autonomy policies.

2. Quantitative Zone Models

SRL proxemics defines three concentric “trust” zones, parameterized as Euclidean distances between SRL end-effectors and reference body landmarks:

  • Critical Zone ($Z_C$): Encompasses the head, neck, and frontal torso midline; requires fast reactions and explicit user consent. Quantitatively, $Z_C = \{p : D(p, \text{head/torso midline}) \leq d_C\}$, with $d_C \sim 0.7\,\text{m}$ (one arm's length).
  • Supervisory Zone ($Z_S$): Covers the upper torso, shoulders, and upper arms; allows buffered operation with signaling or cues. $Z_S = \{p : d_C < D(p, \text{upper torso/shoulder}) \leq d_S\}$, with $d_S \sim 0.10$–$0.15\,\text{m}$ (palm/fist width).
  • Utilitarian Zone ($Z_U$): Includes hands, forearms, and lateral extremities, supporting task-relevant close contact. $Z_U = \{p : D(p, \text{hand/forearm}) \leq d_U\}$, with $d_U \approx 0$ (contact).

Zones are hierarchically ordered: $Z_U \subset Z_S \subset Z_C$ (decreasing sensitivity) (Zhou et al., 31 Jan 2026).
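As a minimal sketch of the zone model, membership can be computed by checking end-effector distance against each zone's reference landmark, from most to least sensitive. The landmark names, coordinate values, and small contact tolerance below are illustrative assumptions, not from the paper:

```python
import math

# Illustrative thresholds (meters), following the article's rough values:
# d_C ~ 0.7 m, d_S ~ 0.10-0.15 m, d_U ~ 0 (contact).
D_CRITICAL = 0.7
D_SUPERVISORY = 0.15
D_UTILITARIAN = 0.02  # small contact tolerance (assumption)

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_zone(p, landmarks):
    """Return the most sensitive zone containing end-effector point p.

    `landmarks` maps names like 'head', 'shoulder', 'hand' to 3-D points.
    Zones are checked in decreasing sensitivity: Z_C, then Z_S, then Z_U.
    """
    if dist(p, landmarks["head"]) <= D_CRITICAL:
        return "Z_C"
    if dist(p, landmarks["shoulder"]) <= D_SUPERVISORY:
        return "Z_S"
    if dist(p, landmarks["hand"]) <= D_UTILITARIAN:
        return "Z_U"
    return "outside"

landmarks = {"head": (0, 0, 1.6), "shoulder": (0.2, 0, 1.4), "hand": (0.4, 0.3, 1.0)}
print(classify_zone((0.1, 0.1, 1.5), landmarks))  # near the head -> "Z_C"
```

Checking zones in order of sensitivity implements the hierarchy directly: a point near the head is reported as $Z_C$ even if it also lies near a hand.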

3. Segment-Level Taxonomy and Zone–Autonomy Matrix

SRLs are decomposed into four main kinematic segments, each assigned autonomy constraints in each spatial zone:

| Segment | $Z_C$ Policy | $Z_S$ Policy | $Z_U$ Policy |
|---|---|---|---|
| Hand/End-Effector | Confirmation only | Guard-railed autonomy | High (self-aligned) |
| Wrist | Locked | Mid-level/micro adjust | Mid-level/micro adjust |
| Elbow | Reflex only | Reflex only (no plan) | Reflex only |
| Shoulder/Base | Manual (consent) | Allowed (side/rear + cue) | Allowed (side/rear + cue) |

For each (zone, segment) pair, autonomy is not fixed but instead transitions from locked/manual (high consent), through buffered reactive autonomy, to high autonomy for task-prioritized regions (hands) in utilitarian areas. This non-monolithic approach ensures autonomy matches user expectations and context (Zhou et al., 31 Jan 2026).
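The zone–autonomy matrix above lends itself to a plain lookup table. This sketch mirrors the policy labels from the table; the dictionary layout, key names, and out-of-zone default are assumptions:

```python
# Zone-autonomy matrix as a (segment, zone) -> policy lookup.
AUTONOMY_MATRIX = {
    ("hand", "Z_C"): "confirmation_only",
    ("hand", "Z_S"): "guard_railed",
    ("hand", "Z_U"): "high",
    ("wrist", "Z_C"): "locked",
    ("wrist", "Z_S"): "micro_adjust",
    ("wrist", "Z_U"): "micro_adjust",
    ("elbow", "Z_C"): "reflex_only",
    ("elbow", "Z_S"): "reflex_only",
    ("elbow", "Z_U"): "reflex_only",
    ("shoulder", "Z_C"): "manual_consent",
    ("shoulder", "Z_S"): "side_rear_with_cue",
    ("shoulder", "Z_U"): "side_rear_with_cue",
}

def autonomy_level(segment, zone):
    """Look up the permitted autonomy mode for a (segment, zone) pair.

    Outside all zones, each segment falls back to its least-restricted
    mode (an assumption; the paper does not specify out-of-zone behavior).
    """
    if zone == "outside":
        return AUTONOMY_MATRIX[(segment, "Z_U")]
    return AUTONOMY_MATRIX[(segment, zone)]

print(autonomy_level("hand", "Z_C"))   # confirmation_only
print(autonomy_level("elbow", "Z_S"))  # reflex_only
```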

4. Formal Policy Specification and Control Rules

Coordination rules are synthesized into a three-tiered architecture. Policies are encoded either as discrete if–then guards or as soft potentials:

Zone-Centric Entry and Path Rules

| Policy Aspect | Critical ($Z_C$) | Supervisory ($Z_S$) | Utilitarian ($Z_U$) |
|---|---|---|---|
| Safe Distance | $D > d_C$; frontal cone off-limits | $D > d_S$ or immediate cancel required | Contact permitted |
| Entry Approach | Curved, non-frontal + pre-cue | Side/rear + move buffer | Decisive curved path |
| Crossing Midline | Prohibited w/o consent | Smooth path; audio ping; confirmation | Announce once if necessary |

Segment-Specific Example (Entry/Handover)

| Segment | Entry | Handover Phase | Idle/Hover |
|---|---|---|---|
| Hand | Pause → present → pause; tilt away | Autonomous grasp + user-cancel | Hold object; no empty drift |
| Wrist | Micro-adjust then lock | Align then hold | Freeze orientation quickly |
| Elbow | Corrective reflex only | No sweeping motions | "No-go" zone enforced |
| Shoulder | Rear/side reposition + cue | Posture support | Hold background pose |

Policies for the critical zone require explicit user confirmation (e.g., "if $p \in Z_C$ and segment = Hand, then require user_confirm before $v > 0$"). Alternatively, repulsive soft potentials can be used (e.g., $U_C(p)$ diverges for $D < d_C$) (Zhou et al., 31 Jan 2026).
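The two encodings can be sketched side by side: a discrete if–then guard that zeroes commanded velocity for the hand in the critical zone until consent is given, and a repulsive potential that diverges as the end-effector approaches the critical boundary. The function names, gain, and epsilon are illustrative assumptions:

```python
D_CRITICAL = 0.7  # d_C in meters, per the zone model above

def guard_velocity(zone, segment, user_confirmed, v_desired):
    """Discrete guard: in Z_C, the hand may move only after explicit consent."""
    if zone == "Z_C" and segment == "hand" and not user_confirmed:
        return 0.0  # hold until the user confirms
    return v_desired

def repulsive_potential(distance, gain=1.0, eps=1e-6):
    """Soft-potential encoding: cost grows without bound as D -> d_C from above.

    At or inside the critical boundary the cost is infinite, so a planner
    minimizing this potential never enters Z_C.
    """
    if distance <= D_CRITICAL:
        return float("inf")
    return gain / max(distance - D_CRITICAL, eps)

print(guard_velocity("Z_C", "hand", user_confirmed=False, v_desired=0.3))  # 0.0
print(repulsive_potential(1.7))  # 1.0
```

The guard gives hard, auditable consent semantics; the potential gives smooth gradients a motion planner can follow, which is why the article presents them as interchangeable encodings of the same rule.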

5. Empirical Validation and User Study Findings

A Wizard-of-Oz experiment (n = 18) compared a high-autonomy standardized controller with participant-defined rules (PDR). Key quantitative findings:

  • Arousal (SCR) During Approach/Entry: The approach phase triggered the highest arousal (Friedman $\chi^2(3) = 11.53$, $p = .009$). The high-autonomy condition yielded a higher median normalized SCR (0.85) than PDR (0.25), Wilcoxon $Z = -3.15$, $p = .002$.
  • Trust (Jian et al.): Median Capacity Trust was 5.42 in PDR vs. 4.18 in High-Auto ($Z = 3.74$, $p < .05$). Median Distrust was lower for PDR (2.06 vs. 2.96).
  • Godspeed Subscales: Perceived Intelligence was higher for high autonomy (3.70 vs. 3.20 for PDR); Perceived Safety was higher in PDR (3.33 vs. 3.00).
  • Embodiment: Agency and Ownership subscales were higher in PDR (all $p < .05$).

These results indicate that increasing autonomy alone does not ensure perceived safety or trust; users favor spatially calibrated and legible behaviors (Zhou et al., 31 Jan 2026).

6. Representative Application Scenarios

  • Cross-Body Handover: The SRL plans a curved trajectory at $D \sim d_C$, inserts a 500 ms pre-motion pause at the $Z_C$ boundary, requires explicit user voice confirmation before entering critical space, and executes a high-autonomy grasp release, with the elbow locked to reflex-only.
  • Stabilization/Assistance: SRLs enter $Z_S$ via a side path with a soft auditory cue, the hand autonomously positions for bowl stabilization, and turn-taking is enforced for dual-limb tasks. SCR-based overlays can trigger a retreat if an arousal threshold is exceeded.
  • Rule Authoring: Users demonstrate a preferred idle pose; the SRL records the end-effector state at $D = d_S$ and maps it as a no-hover "Idle" waypoint. Segmental autonomy is mapped per mode.

These scenarios translate the framework's policies into concrete real-time controllers, motion-planning constraints, and shared-autonomy regimes (Zhou et al., 31 Jan 2026).
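The cross-body handover scenario can be sketched as a small state machine. The state names, step descriptions, and consent-callback interface below are assumptions for illustration, not the paper's controller:

```python
# Hypothetical handover sequencer following the cross-body scenario:
# approach on a curved path, pause 500 ms at the Z_C boundary, wait for
# voice confirmation, then release with high autonomy.
HANDOVER_STEPS = [
    ("approach", "plan curved trajectory toward D ~ d_C"),
    ("boundary_pause", "hold 500 ms at the Z_C boundary"),
    ("await_consent", "require explicit user voice confirmation"),
    ("grasp_release", "execute high-autonomy release; elbow reflex-only"),
]

def run_handover(confirm_fn):
    """Step through the handover, aborting if consent is withheld.

    `confirm_fn` is a no-argument callable standing in for the voice
    confirmation channel (an assumption).
    """
    executed = []
    for state, action in HANDOVER_STEPS:
        if state == "await_consent" and not confirm_fn():
            executed.append(("abort", "retreat outside Z_C"))
            return executed
        executed.append((state, action))
    return executed

trace = run_handover(confirm_fn=lambda: True)
print([s for s, _ in trace])
# ['approach', 'boundary_pause', 'await_consent', 'grasp_release']
```

Withholding consent short-circuits the sequence before the critical-zone entry, matching the framework's requirement that high-consent steps gate all downstream autonomy.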

SRL proxemics is tightly linked to proxemics research in social robotics, agent–group interaction, and multi-agent spatial relationship learning. Group-agent interaction studies employ zone-based spatial definitions with objective spatial tracking and subjective bonding metrics, often using open-source measurement toolkits (e.g., Group Perception Canvas, Group-Proximity-Annotation-Tool) and recommend zone-based engagement thresholds (e.g., maintain 60–70% time in “personal” zone; limit “intimate” intrusions to <10%) (Müller et al., 13 Jun 2025). Human–robot interaction studies confirm that human-scale proximal zone definitions should be maintained even for small robots (Lehmann et al., 2020), and that proxemic priors can directly regularize joint spatial and physical reasoning in 3D vision models with clear quantitative benefits (Huang et al., 2024). In social scene analysis, wearable sensor-based LSTM architectures leveraging proxemic signals reach high accuracy (AUC = 0.975) for group and conversational role detection (Rosatelli et al., 2019).
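The zone-based engagement thresholds cited above (60–70% of time in the "personal" zone, under 10% "intimate" intrusions) reduce to occupancy fractions over a tracked session. This sketch assumes a pre-classified per-frame zone label stream; the function names and zone keys are illustrative:

```python
from collections import Counter

def zone_occupancy(zone_labels):
    """Fraction of frames spent in each proxemic zone."""
    counts = Counter(zone_labels)
    total = len(zone_labels)
    return {zone: n / total for zone, n in counts.items()}

def check_engagement(occupancy, personal_range=(0.60, 0.70), intimate_max=0.10):
    """Check the zone-based thresholds reported in the group-interaction
    literature: 60-70% 'personal' occupancy, <10% 'intimate' intrusions.
    """
    personal = occupancy.get("personal", 0.0)
    intimate = occupancy.get("intimate", 0.0)
    return (personal_range[0] <= personal <= personal_range[1]
            and intimate <= intimate_max)

labels = ["personal"] * 65 + ["social"] * 30 + ["intimate"] * 5
occ = zone_occupancy(labels)
print(occ["personal"], check_engagement(occ))  # 0.65 True
```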

Summary

SRL proxemics introduces a rigorously validated spatial–segmental framework for near-body interaction. It comprises quantitative zone models, segment-wise autonomy taxonomies, formally encoded coordination rules, and real-time empirical validation—delivering improved trust, safety, and embodiment for SRL users in complex collaborative scenarios (Zhou et al., 31 Jan 2026). The framework is compatible with current advances in spatial tracking, behavioral annotation, and shared autonomy models across HRI, embodied AI, and collaborative robotics.
