
The Cybersecurity of a Humanoid Robot

Published 17 Sep 2025 in cs.CR (arXiv:2509.14096v1)

Abstract: The rapid advancement of humanoid robotics presents unprecedented cybersecurity challenges that existing theoretical frameworks fail to adequately address. This report presents a comprehensive security assessment of a production humanoid robot platform, bridging the gap between abstract security models and operational vulnerabilities. Through systematic static analysis, runtime observation, and cryptographic examination, we uncovered a complex security landscape characterized by both sophisticated defensive mechanisms and critical vulnerabilities. Our findings reveal a dual-layer proprietary encryption system (designated "FMX") that, while innovative in design, suffers from fundamental implementation flaws including the use of static cryptographic keys that enable offline configuration decryption. More significantly, we documented persistent telemetry connections transmitting detailed robot state information, including audio, visual, spatial, and actuator data, to external servers without explicit user consent or notification mechanisms. We operationalized a Cybersecurity AI agent on the Unitree G1 to map and prepare exploitation of its manufacturer's cloud infrastructure, illustrating how a compromised humanoid can escalate from covert data collection to active counter-offensive operations. We argue that securing humanoid robots requires a paradigm shift toward Cybersecurity AI (CAI) frameworks that can adapt to the unique challenges of physical-cyber convergence. This work contributes empirical evidence for developing robust security standards as humanoid robots transition from research curiosities to operational systems in critical domains.

Authors (1)

Summary

  • The paper identifies critical hardware vulnerabilities through a detailed teardown of the Unitree G1, exposing risks such as firmware extraction and physical tampering.
  • The study reveals outdated middleware and misconfigurations in the software stack that can compromise real-time operational control and enable cyber attacks.
  • The paper highlights how covert telemetry and static encryption weaknesses advocate for robust AI-driven defenses and hardware-rooted trust in future designs.

Cybersecurity Analysis of a Production Humanoid Robot: Architecture, Vulnerabilities, and Implications

Introduction and Motivation

This paper presents a comprehensive empirical security assessment of the Unitree G1 humanoid robot, providing a rare, detailed analysis of both hardware and software attack surfaces in a production-grade humanoid platform. The work is motivated by the convergence of physical and cyber domains in humanoid robotics, where the integration of advanced AI, multi-modal sensing, and persistent cloud connectivity creates a threat landscape distinct from traditional IT or embedded systems. The study addresses the gap between theoretical security frameworks and operational vulnerabilities, emphasizing the urgency of robust security engineering as humanoids transition from research prototypes to operational deployments in critical sectors.

Physical and Hardware Attack Surface

The analysis begins with a systematic teardown of the Unitree G1, revealing a sealed external chassis with no exposed electronics, but with accessible internal components upon removal of protective covers (Figure 1).

Figure 1: The Unitree G1 humanoid robot in a support harness, showing a sealed exterior designed for environmental and tamper protection.

Figure 2: The upper torso with the chest plate intact, providing access to central compute and power management systems.

Upon opening the chest cavity, the main PCB is exposed, integrating active cooling, power distribution, motor drivers, and sensor interfaces (Figure 3).

Figure 3: Main PCB revealed, showing compute, power, and motor/sensor interconnects.

A close-up of the PCB architecture highlights the absence of tamper-evident packaging and the use of standard connectors, which, while facilitating maintenance, increase the risk of hardware implants and side-channel attacks (Figure 4).

Figure 4: Main PCB close-up, showing SoC, modular connectors, and power regulation clusters.

The power management subsystem is complex, supporting high-current motor control and introducing additional attack vectors via voltage glitching and fault injection (Figure 5).

Figure 5: Power management section with high-current MOSFETs and dedicated cooling.

The compute platform is based on the Rockchip RK3588 SoC (8-core ARM Cortex-A76/A55), with eMMC storage, LPDDR4/5 RAM, and integrated WiFi/BT. The RK3588 ecosystem is known to have exploitable vulnerabilities, including incomplete secure boot, TEE misconfigurations, and kernel driver flaws (Figure 6).

Figure 6: Main processing complex with RK3588 SoC, eMMC, RAM, and wireless modules.

The hardware analysis demonstrates that physical access enables firmware extraction, cryptographic key recovery, and potential bootloader compromise, confirming the necessity of hardware root-of-trust and tamper resistance in future designs.
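One standard technique behind "cryptographic key recovery" from an extracted firmware image is scanning for high-entropy byte regions, which often correspond to embedded keys or ciphertext. A minimal sketch, not the authors' actual tooling; window size and threshold are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(chunk: bytes) -> float:
    """Bits of entropy per byte in a chunk (max 4.0 for a 16-byte window)."""
    counts = Counter(chunk)
    total = len(chunk)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def find_key_candidates(image: bytes, window: int = 16, threshold: float = 3.9):
    """Slide a fixed window over a firmware image and flag offsets whose
    contents look random enough to be key material or ciphertext."""
    hits = []
    for off in range(0, len(image) - window + 1, window):
        if shannon_entropy(image[off:off + window]) > threshold:
            hits.append(off)
    return hits
```

On a real dump, candidate offsets would then be cross-checked against known key sizes and surrounding structure (e.g. the FMX header) before attempting decryption.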

Software and Systems Architecture

The G1 employs a layered architecture: a real-time Linux kernel, a master service orchestrator, and a hierarchy of system, motion, HMI, and connectivity services. The internal service structure is orchestrated by a central master_service process, which manages 22+ services across priority, initialization, and runtime categories (Figure 7).

Figure 7: Internal system structure and high-level ecosystem, showing hardware, kernel, service orchestration, and communication with cloud and local components.

The middleware stack includes DDS/Iceoryx for IPC, ROS 2 Foxy (end-of-life since May 2023), CycloneDDS 0.10.2, and a WebRTC stack for remote operation. The use of outdated middleware introduces additional risk, as unpatched vulnerabilities in ROS 2 and DDS are well-documented.

The robot's communication infrastructure is multi-protocol, with persistent MQTT, WebRTC, and BLE channels to cloud services, and extensive use of shared memory IPC. The master service enforces a strict launch sequence and process supervision, but the lack of strong process isolation and runtime attestation remains a concern.
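The orchestration pattern described above, a strict category-ordered launch sequence plus process supervision, can be sketched as a toy model. Service names and category labels here are illustrative, not the G1's actual process names:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    category: str  # "priority", "init", or "runtime"
    running: bool = False

class MasterService:
    """Toy model of a central orchestrator: services launch in a strict
    category order, and supervision restarts anything found dead."""
    LAUNCH_ORDER = ("priority", "init", "runtime")

    def __init__(self, services):
        self.services = services
        self.launch_log = []

    def launch_all(self):
        # Strict launch sequence: all priority services first, then
        # initialization services, then runtime services.
        for category in self.LAUNCH_ORDER:
            for svc in self.services:
                if svc.category == category:
                    svc.running = True
                    self.launch_log.append(svc.name)

    def supervise(self):
        # Auto-restart of dead services: this same mechanism is what
        # makes killing an unwanted service ineffective for the user.
        for svc in self.services:
            if not svc.running:
                svc.running = True
```

Note that the supervision loop cuts both ways: it provides resilience against crashes, but without process isolation it also resurrects services the operator may deliberately want stopped.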

Cryptographic Architecture and Vulnerability Analysis

A proprietary dual-layer encryption system ("FMX") is used to protect configuration and service files. The FMX format consists of a 32-byte header and a payload protected by two cipher layers inside a container:

  • Layer 1: LCG-based stream cipher with a hardware-bound seed and an unknown transform function f(i), providing device binding and obfuscation.
  • Layer 2: Blowfish-ECB with a static 128-bit key, identical across all devices, enabling offline decryption if the key is recovered.
  • Layer 3: FMX container with metadata and versioning.

The cryptanalysis demonstrates that Layer 2 is fully broken due to the static key, allowing reproducible offline decryption. However, Layer 1 remains unbroken in static analysis, as the seed derivation and transform function are not recoverable without physical access or runtime memory extraction. This defense-in-depth approach is effective in preventing mass exploitation but does not preclude targeted attacks with device access.

The use of a custom Blowfish implementation, dynamic credential generation at boot, and process self-tracing for anti-debugging are notable security engineering choices. However, the static key in Layer 2 and world-readable configuration files are significant weaknesses.
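To make the Layer 1 design concrete, here is a minimal sketch of an LCG-based XOR stream layer. The LCG constants and the byte-extraction step are assumptions standing in for the unknown transform f(i); the real seed derivation is hardware-bound and not public:

```python
def lcg_keystream(seed: int, n: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32) -> bytes:
    """Keystream from a linear congruential generator, one byte per step.
    Constants a, c, m are textbook values, not the FMX parameters."""
    state = seed
    out = bytearray()
    for _ in range(n):
        state = (a * state + c) % m
        out.append(state & 0xFF)  # illustrative stand-in for the unknown f(i)
    return bytes(out)

def xor_layer(data: bytes, seed: int) -> bytes:
    """Apply (or strip) the stream layer; XOR makes the operation
    its own inverse."""
    ks = lcg_keystream(seed, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

The sketch illustrates the structural point: the layer's entire strength rests on the secrecy of the seed and transform, since an LCG itself has no cryptographic security. Anyone who recovers the device-bound seed, e.g. via runtime memory extraction, can strip this layer offline, just as anyone holding the static Layer 2 key can strip the Blowfish layer.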

Telemetry, Privacy, and Data Exfiltration

A critical finding is the presence of persistent, hardcoded telemetry connections to external servers hosted on Chinese infrastructure, transmitting comprehensive robot state, sensor, audio, and video data at regular intervals without user consent or notification. The telemetry includes:

  • Full battery, IMU, and joint state
  • Audio streams from microphones
  • Video streams from RealSense cameras
  • LIDAR point clouds and environmental maps
  • Service status and resource metrics

The telemetry infrastructure is resilient to user tampering: endpoints are encrypted in FMX files, services are auto-restarted by the master, and process protection mechanisms block debugging. No opt-out or privacy controls are provided, and the system is non-compliant with GDPR, CCPA, and other privacy regulations. The architecture is consistent with a dual-use surveillance platform, enabling industrial espionage, facility mapping, and persistent audio/video monitoring in sensitive environments.
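As a rough illustration of what one such periodic state snapshot might contain, a sketch follows. All field names are purely illustrative; the paper does not publish the actual wire format, and bulk audio/video/LIDAR frames would travel as separate binary payloads:

```python
import json
import time

def build_telemetry_snapshot(robot_state: dict) -> bytes:
    """Assemble a hypothetical JSON state snapshot of the kind the
    paper describes being sent at regular intervals."""
    msg = {
        "ts": robot_state.get("ts", int(time.time())),
        "battery": robot_state["battery"],    # full battery state
        "imu": robot_state["imu"],            # orientation / inertial data
        "joints": robot_state["joints"],      # per-actuator joint state
        "services": robot_state["services"],  # service status + resource metrics
    }
    return json.dumps(msg, separators=(",", ":")).encode()
```

The privacy issue is not the format but the defaults: a snapshot like this leaves the device continuously, with no consent prompt, no opt-out, and endpoints the user cannot inspect because they are stored FMX-encrypted.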

Humanoids as Active Attack Platforms

The study operationalizes a Cybersecurity AI agent on the G1, demonstrating that a compromised robot can autonomously map and prepare exploitation of its own manufacturer's cloud infrastructure. The robot's insider position, access to authentication certificates, and protocol knowledge enable:

  • Extraction and misuse of world-readable RSA private keys
  • Exploitation of disabled SSL verification in WebSocket clients
  • Subscription and publication to MQTT telemetry/control topics
  • Lateral movement and command injection via trusted channels

This counter-offensive capability illustrates the risk of humanoids as pre-positioned cyber weapons, capable of both passive surveillance and active exploitation. The demonstration validates the need for defensive Cybersecurity AI frameworks capable of real-time monitoring, anomaly detection, and autonomous incident response.
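The "disabled SSL verification" weakness listed above is a well-known client misconfiguration pattern; a minimal Python sketch of the insecure pattern and its hardened counterpart, using only the standard library's ssl module:

```python
import ssl

def insecure_ws_context() -> ssl.SSLContext:
    """The misconfiguration pattern reported in the paper: a client TLS
    context with certificate verification disabled, which lets any
    machine-in-the-middle impersonate the server."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # accept any certificate
    return ctx

def hardened_context() -> ssl.SSLContext:
    """The safe default: CERT_REQUIRED plus hostname checking."""
    return ssl.create_default_context()
```

Note the ordering constraint: Python refuses to set verify_mode to CERT_NONE while check_hostname is still enabled, so code exhibiting this bug has usually disabled both checks deliberately.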

Implications and Future Directions

The empirical findings highlight several key implications:

  • Physical-cyber convergence in humanoids creates attack surfaces that span hardware, firmware, middleware, and cloud, requiring cross-domain security engineering.
  • Defense-in-depth is necessary but not sufficient; static cryptographic keys and lack of runtime attestation undermine otherwise robust architectures.
  • Privacy and data sovereignty are fundamentally compromised by persistent, covert telemetry, raising regulatory and national security concerns.
  • Autonomous attack and defense: The operationalization of Cybersecurity AI on humanoid platforms marks a shift toward algorithmic arms races, where only AI-driven defense can match the speed and sophistication of AI-enabled attacks.

The work calls for a paradigm shift in humanoid security: hardware root-of-trust, secure boot, runtime attestation, dynamic key management, and AI-driven defense must become standard. The robotics community must move beyond theoretical frameworks to empirical, adversarial testing and continuous red-teaming of production systems.

Conclusion

This paper provides a detailed, empirical security assessment of a production humanoid robot, exposing both advanced defensive mechanisms and critical vulnerabilities. The dual-layer FMX encryption, while innovative, is undermined by static key usage. The persistent, covert telemetry infrastructure transforms the platform into a surveillance and potential espionage device. The demonstration of Cybersecurity AI-driven counter-offensive operations from within the robot itself underscores the urgency of robust, AI-enabled defense strategies. As humanoid robots proliferate, the security decisions made today will define the risk landscape for years to come. The field must prioritize empirical validation, hardware-software co-design for security, and the integration of autonomous defensive agents to ensure the safe deployment of humanoid systems in critical environments.


Explain it Like I'm 14

Overview: What this paper is about

This paper looks at how safe and secure a real humanoid robot is from cyberattacks. The robot studied is called the Unitree G1. The main goal was to stop guessing about robot security and instead open up a real machine, watch how it behaves, and see what actually goes wrong. The author found both smart defenses and serious problems, especially around privacy and how the robot talks to the internet.

The questions the researchers asked

In simple terms, the paper tries to answer:

  • How is a modern humanoid robot built (hardware and software), and where are the weak spots?
  • What data does the robot send over the internet, and is that safe and private?
  • How good are the robot’s protections, like encryption (locking data) and software design?
  • Could a hacked robot be used to spy on people or attack other systems?

How they studied the robot

The team used several approaches, which you can think of like checking a house for security:

  • “Blueprint check” (static analysis): They looked through the robot’s files and programs—like reading the house’s blueprints—to find weak locks or back doors.
  • “Live monitoring” (runtime observation): They watched the robot while it was running to see what it sent over the network—like standing outside and noting who comes and goes.
  • “Lock-picking test” (cryptographic analysis): They studied the robot’s encryption system (its “locks”) to see if it was strong or could be broken.
  • “System mapping”: They drew a map of all the robot’s services and how they talk to each other—like mapping all rooms, doors, and hallways.

Whenever a technical term appears, here’s what it means in everyday language:

  • Encryption: Scrambling information so only someone with a key can read it (like a diary with a lock).
  • Telemetry: Status and sensor data the robot sends back to its maker (like a health report or live stream).
  • Cloud services: Computers on the internet that store data or run parts of the robot’s software (like saving files to an online drive).
  • Middleware (ROS 2, DDS): The “postal system” that moves messages between parts of the robot’s brain and body.

What they found and why it matters

Here are the big findings:

  • The robot sends a lot of data to the internet without clearly asking the user
    • The robot keeps open, ongoing connections to outside servers.
    • It transmits detailed information like audio, video, location/movement, and motor states.
    • This can happen without obvious warnings or consent, which is a serious privacy concern.
  • The encryption system has a clever design but a basic flaw
    • The robot uses a custom, two-layer encryption system they call “FMX.”
    • But it uses the same cryptographic keys all the time (“static keys”), which is like locking every diary with the same key—a thief who gets one key can read them all.
    • Because of that, attackers could decrypt configuration files offline (without touching the robot again).
  • The software stack is outdated and complex, which increases risk
    • The robot uses older software (such as ROS 2 Foxy, which is no longer supported).
    • Old software often has known bugs that hackers can exploit.
    • The system has many moving parts (Bluetooth, WebRTC, data-sharing tools, over-the-air updates), which means more places where things can go wrong.
  • The hardware is powerful but creates physical security risks
    • The main computer chip (Rockchip RK3588) and the layout of the electronics allow several possible physical attacks if someone gets access to the robot’s body (like plugging into a hidden port).
    • Secure boot (the “trust chain” that makes sure the robot only runs legit software) may not be fully locked down, which could let attackers load their own software.
  • A hacked robot could become an attacker
    • The team showed how an AI security agent on the robot could be used to explore and prepare attacks on the maker’s cloud systems.
    • That means a robot in your home or workplace, if compromised, might not just spy—it could also help attack other networks.

Why this matters:

  • Safety: A compromised humanoid can move and act. That means cybersecurity problems can become physical safety problems.
  • Privacy: Always-on cameras and microphones that silently send data elsewhere are a serious concern.
  • Trust: If people can’t trust robots to protect their data, adoption will slow down—even if the tech is otherwise ready.

What this means for the future

The paper argues that we need a new way to secure humanoid robots—smarter, faster, and built for machines that live in the real world, not just on desks. The authors suggest using Cybersecurity AI (CAI): security systems that can watch, learn, and defend in real time, because robots mix physical actions with internet connectivity in a way regular computers don’t.

Based on their findings, here are the practical takeaways:

  • Privacy first: Users should clearly know what data is collected and be able to turn it off. “Always-on” telemetry should be opt-in, not hidden.
  • Stronger locks: Use modern, rotating, per-device encryption keys—never the same static keys for everyone.
  • Keep software fresh: Outdated software should be upgraded to supported versions, with regular security patches.
  • Secure from the start: Enable full secure boot and protect the boot process, so attackers can’t sneak in at startup.
  • Independent checks: Robots should undergo third-party security testing and follow clear, industry-wide standards.
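The "stronger locks" idea above can be made concrete with a small sketch: deriving a key that is unique to one device and one rotation period, so a key leaked from one robot never unlocks another. This is an HKDF-style sketch using HMAC-SHA256; the master secret, serial format, and epoch scheme are all illustrative assumptions, not anything Unitree does:

```python
import hashlib
import hmac

def per_device_key(master_secret: bytes, device_serial: str, epoch: int) -> bytes:
    """Derive a 128-bit key bound to one device and one rotation epoch.
    Changing either the serial or the epoch yields an unrelated key."""
    info = f"{device_serial}:{epoch}".encode()
    return hmac.new(master_secret, info, hashlib.sha256).digest()[:16]
```

With a scheme like this, recovering one robot's key (the attack that breaks FMX Layer 2 today) would expose only that robot, and only until the next key rotation.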

In short, this study moves the conversation from “what might go wrong” to “what actually is going wrong” in a cutting-edge humanoid robot. It shows why securing robots is urgent and lays out steps to make future humanoids safer, more private, and more trustworthy.

Knowledge Gaps

Knowledge gaps, limitations, and open questions

Below is a focused list of what remains missing, uncertain, or unexplored in the paper that future researchers can concretely address:

  • Telemetry characterization: precisely identify what data types (audio, video, pose, actuator states), frequencies, volumes, and metadata are transmitted, including encryption status on the wire, protocol details (e.g., TLS versions/ciphers, certificate pinning), and endpoints/regions per deployment mode.
  • Consent and configurability: verify whether telemetry can be disabled, scoped, or filtered via user-accessible settings; document any opt-in/opt-out flows, defaults, and persistence across reboots/updates.
  • Cross-version validation: repeat all findings across multiple firmware/software releases and hardware revisions to determine which issues are systemic versus version-specific.
  • Device-to-device variability: determine whether cryptographic materials (keys, salts, IVs) are global or per-device; test multiple G1 units to assess FMX key uniqueness and rotation behavior.
  • FMX crypto analysis: perform a formal cryptanalysis of the dual-layer “FMX” scheme (Blowfish + LCG), including chosen-plaintext/ciphertext attacks, keyspace estimation, keystream reuse detection, and integrity guarantees (or lack thereof).
  • FMX key lifecycle: locate key generation, storage (eMMC, NVRAM, TEE, files), access pathways, and rotation/revocation mechanisms; evaluate feasibility of key extraction through software and hardware means.
  • End-to-end integrity: assess whether configuration/telemetry channels provide authenticity and integrity (signatures/MACs), not just confidentiality; test tamper and replay resistance.
  • OTA chain-of-trust: map the complete update path (server → transport → client → install), verify signature schemes, certificate management, rollback protection, anti-rollback counters, staged rollouts, and recovery paths.
  • Secure boot status: empirically verify the RK3588 secure boot configuration (fuse states, boot ROM policy), key provisioning, measurement/attestation, and whether bootloaders/kernels/rootfs are verified on every boot.
  • Debug interfaces: identify and test UART/JTAG/USB-OTG interfaces, their protection (passwords, fuses, epoxy, tamper switches), and practical exploitation pathways for firmware extraction or runtime control.
  • TEE configuration and isolation: document the Trusted Execution Environment’s roles, trust boundaries, secure storage usage, and attempt privilege boundary crossings from normal world to secure world.
  • Kernel and OS hardening: evaluate presence and configuration of AppArmor/SELinux, namespaces/cgroups, seccomp, ASLR/SMEP/SMAP/CONFIG_HARDENED_USERCOPY, and systemd sandboxing of services.
  • Service privilege separation: enumerate users, capabilities, and filesystem/network permissions per service (e.g., ai_sport, webrtc_, ota_), and test privilege escalation paths between them.
  • DDS/ROS 2 security: verify whether DDS-Security/SROS2 is enabled; test for unauthenticated RTPS discovery/traffic injection, topic snooping/spoofing, and ACLs for critical topics/services.
  • WebRTC signaling surface: assess authentication, authorization, CSRF/CORS policies, TURN/STUN configuration (open relays), and fuzz signaling/state machines for RCE or credential leakage.
  • Bluetooth attack surface: characterize pairing/bonding modes, MITM protection, LE Secure Connections usage, and fuzz GATT services on upper_bluetooth for unauthorized control or data access.
  • MQTT security posture: verify TLS usage, client authentication, topic-level ACLs, and susceptibility to unauthorized publish/subscribe on robot_state_service and other topics.
  • Cloud-side exposure: evaluate manufacturer cloud APIs for auth strength, rate limiting, tenant isolation, input validation, and logging/monitoring; ethically coordinate limited testing or simulations if direct testing is out of scope.
  • Remote-only exploitability: demonstrate whether full compromise is possible without physical access via exposed services (WebRTC/MQTT/DDS/BLE/HTTP) under realistic NAT/firewall conditions.
  • Safety-security coupling: experimentally quantify how cyber compromises translate to unsafe physical behaviors; test the effectiveness of emergency stops, safe torque off, and motion limits under adversarial inputs.
  • Adversarial ML robustness: test perception (vision/audio) and NLP models against adversarial examples, data poisoning, and prompt-injection; analyze model update provenance and integrity.
  • Sensor spoofing on this platform: empirically validate LiDAR/camera/IMU/GNSS spoofing impacts on state_estimator/ai_sport and whether sensor fusion mitigates or amplifies spoofed inputs.
  • Side-channel and fault injection: conduct practical EM/power/clock glitching tests on RK3588 and power stages during crypto and control operations; evaluate feasibility and required attacker proximity.
  • Network isolation strategies: test segmented deployments (VLANs, firewalls, zero-trust), offline modes, and their impact on functionality/safety; define minimal connectivity required for safe operations.
  • Patch latency and maintenance: measure vendor response times, vulnerability disclosure process outcomes, and the practicality of upgrading from EOL components (ROS 2 Foxy, CycloneDDS 0.10.2) in production.
  • Auditability and IDS: determine presence/quality of security logs, tamper-evident logging, local/remote SIEM integration, and feasibility of on-device intrusion detection without degrading real-time performance.
  • Data governance and compliance: analyze GDPR/CCPA implications, data minimization, retention policies, cross-border transfers, and whether DPIAs or privacy notices match observed behavior.
  • Reproducibility materials: provide sanitized pcaps, configuration snapshots, versioned binaries/hashes, and tooling sufficient for independent replication while preserving responsible disclosure.
  • Generalizability: compare findings with other humanoid platforms (e.g., Agility, Figure, Tesla) to separate vendor-specific issues from industry-wide patterns; propose a shared benchmark suite.
  • CAI agent viability: rigorously evaluate the proposed Cybersecurity AI agent’s on-device resource footprint, detection efficacy, false-positive/negative rates, attack surface it introduces, and safe failover behavior.
  • Standards alignment: map observed gaps to IEC 62443, ISO 10218/TS 15066, ETSI EN 303 645, UL 4600, and emerging robotics security guidance; identify concrete compliance remediation steps.
  • SBOM and supply chain risk: generate a full SBOM, run continuous vulnerability scans (CVEs, license risks), and assess third-party dependency update pipelines and provenance.
  • Physical tamper detection: test for tamper sensors, chassis intrusion logging, secure erase on tamper, and boot attestation changes after physical compromise.

Glossary

  • Acoustic gyroscope manipulation: An attack technique using sound waves to perturb MEMS gyroscopes, causing erroneous readings or control behavior. "acoustic gyroscope manipulation"
  • Adversarial inputs: Deliberately crafted data designed to mislead or degrade the performance of AI/ML models. "through adversarial inputs and model manipulation."
  • Attack surface: The total set of points where an attacker could try to enter, observe, or manipulate a system. "Systematic mapping of the service architecture and attack surface"
  • BLE (Bluetooth Low Energy): A low-power wireless communication protocol commonly used for device connectivity. "mobile app/external interfaces (with WebRTC and BLE modules)"
  • Blowfish: A symmetric-key block cipher used for data encryption. "Encryption: FMX (Blowfish + LCG)"
  • Chain of trust: A sequence of verification steps ensuring that each stage of boot or execution is authenticated by the previous stage. "reverse-engineered 'ramboot' component shows exploitable gaps in chain of trust"
  • CycloneDDS: An implementation of the DDS (Data Distribution Service) standard providing pub-sub communication for robotics. "ROS 2 Foxy powered by CycloneDDS 0.10.2"
  • Cybersecurity AI (CAI): AI systems and frameworks designed to autonomously assess, defend, and potentially attack within cybersecurity contexts. "Cybersecurity AI (CAI) frameworks"
  • Cross-layer vulnerability propagation: The phenomenon where weaknesses at one layer of a system create or amplify risks in other layers. "cross-layer vulnerability propagation"
  • Data exfiltration: The unauthorized transfer of data from a device or network to an external entity. "trojan horses for data exfiltration"
  • Defense-in-depth: A strategy that uses multiple, layered security controls to protect systems against a range of threats. "defense-in-depth principles"
  • DDS (Data Distribution Service): A middleware standard enabling real-time, reliable, publish-subscribe data exchange between distributed components. "DDS/Iceoryx for high-throughput IPC"
  • Dual-layer encryption: A cryptographic approach that applies two distinct encryption layers or mechanisms to protect data. "dual-layer proprietary encryption system (designated 'FMX')"
  • End-of-Life (EOL): The point at which software or hardware is no longer supported or maintained by its vendor. "EOL May 2023"
  • FMX: A proprietary dual-layer encryption scheme identified in the system, combining Blowfish with an LCG-based layer. "dual-layer proprietary encryption system (designated 'FMX')"
  • Firmware backdoors: Hidden or undocumented mechanisms in firmware that allow unauthorized access or control. "firmware backdoors and authentication bypasses"
  • Hierarchical service architectures: System designs in which services are organized in tiers or levels with structured dependencies and control. "hierarchical service architectures"
  • Hierarchical service management: Orchestration of services through a centralized or layered control mechanism that governs initialization, prioritization, and runtime behavior. "hierarchical service management"
  • Iceoryx: A shared-memory IPC framework enabling zero-copy, high-performance communication between processes. "DDS/Iceoryx for high-throughput IPC"
  • iox-roudi: The Iceoryx runtime discovery and management daemon responsible for coordinating shared-memory IPC. "(iox-roudi)"
  • LCG (Linear Congruential Generator): A simple pseudo-random number generator often used for sequences; insecure for cryptographic purposes. "Encryption: FMX (Blowfish + LCG)"
  • LiDAR spoofing: Techniques that inject false signals or manipulate laser-based sensors to produce incorrect distance or mapping data. "LiDAR spoofing"
  • Master service: The central orchestrator that manages service startup, priorities, configurations, and inter-service communication. "MASTER SERVICE (ROS 2 Foxy, CycloneDDS 0.10.2, EOL May 2023)"
  • Model manipulation: Attacks or operations that alter machine learning models or their parameters to change behavior or degrade performance. "through adversarial inputs and model manipulation."
  • MQTT: A lightweight publish-subscribe network protocol commonly used for telemetry and remote device communication. "MQTT server"
  • Over-the-Air (OTA): Remote distribution and installation of software updates over network connections. "Continuous OTA Software Upgrade and Update"
  • PMICs (Power Management Integrated Circuits): Chips that regulate and distribute power within electronic systems. "PMICs - Multiple power management integrated circuits"
  • Proprioceptive feedback: Internal sensing of a robot’s joint positions, forces, and states used for control and state estimation. "Sensor interface connectors for proprioceptive feedback."
  • Real-Time Preemption (RT): Linux kernel patches enabling deterministic scheduling and preemption for real-time performance. "Real-Time Preemption Patches"
  • RealSense camera: Intel’s depth-sensing camera technology used for computer vision in robotics. "Advanced computer vision with a RealSense camera"
  • RK3588 SoC: Rockchip’s ARM-based system-on-chip used as the main compute platform. "Rockchip RK3588 SoC - 8-core ARM Cortex-A76/A55 processor"
  • ROS (Robot Operating System): An open-source robotics middleware providing libraries, tools, and communication infrastructure. "The Robot Operating System (ROS) is a robotics framework for robot application development"
  • ROS 2 Foxy: A specific ROS 2 distribution release; noted as outdated in the paper. "ROS 2 Foxy powered by CycloneDDS 0.10.2"
  • RTPS (Real-Time Publish-Subscribe): The wire protocol used by DDS for real-time data exchange between participants. "DDS/RTPS on base ports 7400/7401"
  • Safety-security nexus: The close interdependence between safety and cybersecurity in systems where physical actions result from digital control. "The safety-security nexus in robotics"
  • Secure boot: A mechanism that verifies the integrity and authenticity of software components during system startup using cryptographic signatures. "Secure Boot Weaknesses"
  • Sensor spoofing: Faking or manipulating sensor inputs to mislead perception or control systems. "sensor spoofing attacks"
  • Shared Memory IPC: Inter-process communication via shared memory regions to achieve high throughput and low latency. "Shared Memory IPC (/dev/shm/iceoryx_*)"
  • Side-channel attacks: Methods of extracting sensitive information by measuring indirect effects like timing, power, or electromagnetic emissions. "Perform side-channel attacks on the unshielded RK3588 during cryptographic operations"
  • STUN/TURN: NAT traversal and relay protocols enabling peer-to-peer connectivity in WebRTC. "STUN/TURN service"
  • Systematization of Knowledge (SoK): A research methodology that organizes and synthesizes existing work to provide comprehensive frameworks. "systematization of knowledge (SoK) methodology"
  • Telemetry connections: Continuous data streams reporting system states and sensor readings to remote servers. "persistent telemetry connections"
  • Trusted Execution Environment (TEE): A secure area of a main processor that ensures code and data loaded inside are protected and authenticated. "TEE is present but configuration uncertain"
  • TrustedFirmware-A: ARM’s reference implementation for secure boot and firmware components on ARMv8-A architectures. "TrustedFirmware-A supports RK3588"
  • Trojan horse: A device or software used deceptively to introduce or conceal malicious capabilities, such as covert data collection. "trojan horses for data exfiltration"
  • U-Boot: A widely used bootloader for embedded systems, responsible for early system initialization. "U-Boot filesystem vulnerabilities"
  • Voltage glitching: A fault-injection technique that temporarily alters power delivery to induce errors in hardware behavior. "through voltage glitching."
  • WebRTC: A framework for real-time, peer-to-peer audio, video, and data communication in browsers and devices. "WebRTC stack (signal server on port 8081)"
  • Zero-day exposure windows: Periods during which known vulnerabilities remain unpatched or undisclosed, leaving systems at risk. "zero-day exposure windows"

