
Inference of Human's Observation Strategy for Monitoring Robot's Behavior based on a Game-Theoretic Model of Trust

Published 1 Mar 2019 in cs.AI and cs.GT (arXiv:1903.00111v4)

Abstract: We consider scenarios where a worker robot, which may be unaware of the human's exact expectations, may have an incentive to deviate from a preferred plan (e.g., one that is safe but costly) when a human supervisor is not monitoring it. On the other hand, continuously monitoring the robot's behavior is often difficult for humans because it consumes valuable resources (e.g., time, cognitive effort). Thus, to optimize the cost of monitoring while ensuring that robots follow the safe behavior, and to assist the human in dealing with possibly unsafe robots, we model this problem in a game-theoretic framework of trust. In settings where the human does not initially trust the robot, a pure-strategy Nash equilibrium provides a useful policy for the human. Unfortunately, we show that the formulated game often lacks a pure-strategy Nash equilibrium. Thus, we define the concept of a trust boundary over the mixed-strategy space of the human and show that it helps to discover optimal monitoring strategies. We conduct human-subject studies that demonstrate (1) the need for optimal monitoring strategies, and (2) the benefits of using strategies suggested by our approach.
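The non-existence of a pure-strategy Nash equilibrium noted in the abstract can be illustrated with a small inspection-style game. The sketch below is a minimal illustration with hypothetical payoff numbers (not taken from the paper): the human chooses whether to monitor, the robot chooses whether to follow the safe plan, and an exhaustive check shows that no pure-strategy profile is stable.

```python
# Hedged sketch of a 2x2 human-robot monitoring (inspection) game.
# All payoff values below are hypothetical illustrations, not from the paper.

# Human strategies:  0 = Monitor,     1 = Do not monitor
# Robot strategies:  0 = Safe plan,   1 = Deviate

C, P, G, L = 2, 5, 3, 10  # monitoring cost, deviation penalty, deviation gain, human's loss

# Payoff tables indexed as payoff[human_strategy][robot_strategy].
human_payoff = [[-C, -C],   # Monitoring: pay cost C whether or not the robot deviates
                [0, -L]]    # Not monitoring: fine if robot is safe, loss L if it deviates
robot_payoff = [[0, -P],    # Monitored: deviation is caught and penalized
                [0, G]]     # Unmonitored: deviation yields gain G

def pure_nash_equilibria():
    """Return all strategy profiles (h, r) where neither player benefits from a
    unilateral deviation, i.e. all pure-strategy Nash equilibria."""
    eqs = []
    for h in (0, 1):
        for r in (0, 1):
            human_best = all(human_payoff[h][r] >= human_payoff[h2][r] for h2 in (0, 1))
            robot_best = all(robot_payoff[h][r] >= robot_payoff[h][r2] for r2 in (0, 1))
            if human_best and robot_best:
                eqs.append((h, r))
    return eqs

print(pure_nash_equilibria())  # -> [] : no pure-strategy equilibrium exists
```

With these payoffs each profile is unstable in a cycle (the human stops monitoring a safe robot, the robot then deviates, the human resumes monitoring, the robot returns to the safe plan), which is why the analysis must move to the human's mixed-strategy space, where the paper's trust boundary is defined.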
