Exploiting Local Observations for Robust Robot Learning
Abstract: While many robotic tasks can be addressed through either centralized single-agent control with full state observation or decentralized multi-agent control, clear criteria for selecting the optimal approach are lacking. This paper presents a comprehensive investigation into how multi-agent reinforcement learning (MARL) with local observations can enhance robustness in complex robotic systems compared to traditional centralized control methods. We provide both theoretical analysis and empirical validation demonstrating that, for certain tasks, decentralized MARL controllers can achieve performance comparable to centralized approaches while offering superior robustness against perturbations and agent failures. Our theoretical contributions include an analytical proof of equivalence between single-agent RL (SARL) and MARL under full observability, identifying observability as the key distinguishing factor, and a derivation of performance degradation bounds for locally observable policies under external perturbations. Empirical validation on standard MARL benchmarks confirms that locally observable MARL maintains competitive performance despite limited observations. Real-world experiments with a mobile manipulation robot demonstrate that our decentralized MARL controllers exhibit significantly improved robustness to both agent malfunctions and environmental disturbances compared to centralized baselines. This systematic investigation provides crucial insights for designing robust and generalizable control strategies in complex robotic systems, establishing MARL with local observations as a viable alternative to traditional centralized control paradigms.
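To make the distinction the abstract draws more concrete, the following is a minimal, hedged sketch (not the paper's implementation; all class and function names are hypothetical) contrasting a centralized controller that maps the full state to a joint action with decentralized controllers that each act only on a local observation. It also illustrates the setting of the stated SARL/MARL equivalence: when every local observation equals the full state, the decentralized policies have access to the same information as the centralized one.

```python
# Hypothetical illustration only; not the paper's method or code.
import numpy as np


class CentralizedPolicy:
    """Single-agent controller: maps the full state to a joint action."""

    def __init__(self, state_dim: int, n_agents: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_agents, state_dim))

    def act(self, full_state: np.ndarray) -> np.ndarray:
        # One linear map produces the actions of every agent at once.
        return np.tanh(self.weights @ full_state)


class DecentralizedPolicies:
    """MARL-style controllers: each agent acts on its own local observation."""

    def __init__(self, obs_dim: int, n_agents: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(size=obs_dim) for _ in range(n_agents)]

    def act(self, local_obs: list) -> np.ndarray:
        # Each agent sees only its own observation; a perturbed or failed
        # agent therefore affects only its own action channel.
        return np.tanh(np.array([w @ o for w, o in zip(self.weights, local_obs)]))


if __name__ == "__main__":
    state_dim, n_agents = 6, 3
    full_state = np.random.default_rng(1).normal(size=state_dim)

    central = CentralizedPolicy(state_dim, n_agents)
    print("centralized joint action:", central.act(full_state))

    # Full observability: every local observation equals the full state,
    # the regime in which the abstract's SARL/MARL equivalence is stated.
    decentral = DecentralizedPolicies(obs_dim=state_dim, n_agents=n_agents)
    print("decentralized actions   :", decentral.act([full_state] * n_agents))
```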