
Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to the iCub's answers

Published 13 Oct 2015 in cs.RO, cs.CY, and cs.HC | (1510.03678v1)

Abstract: To investigate the functional and social acceptance of a humanoid robot, we carried out an experimental study with 56 adult participants and the iCub robot. Trust in the robot has been considered as a main indicator of acceptance in decision-making tasks characterized by perceptual uncertainty (e.g., evaluating the weight of two objects) and socio-cognitive uncertainty (e.g., evaluating which is the most suitable item in a specific context), and measured by the participants' conformation to the iCub's answers to specific questions. In particular, we were interested in understanding whether specific (i) user-related features (i.e. desire for control), (ii) robot-related features (i.e., attitude towards social influence of robots), and (iii) context-related features (i.e., collaborative vs. competitive scenario), may influence their trust towards the iCub robot. We found that participants conformed more to the iCub's answers when their decisions were about functional issues than when they were about social issues. Moreover, the few participants conforming to the iCub's answers for social issues also conformed less for functional issues. Trust in the robot's functional savvy does not thus seem to be a pre-requisite for trust in its social savvy. Finally, desire for control, attitude towards social influence of robots and type of interaction scenario did not influence the trust in iCub. Results are discussed with relation to methodology of HRI research.

Citations (176)

Summary

  • The paper investigates how human trust and conformation to the iCub robot's decisions vary between functional and social tasks under uncertainty.
  • Experimental results indicate users conform more readily in functional tasks than social ones, suggesting technical trust is higher than social trust.
  • The study reveals a potential dichotomy in human perception of robot capabilities, distinguishing functional competence from social integration.
  • This research suggests the complexity of fostering holistic robot acceptance, highlighting that social trust doesn't necessarily follow from functional trust.

Trust as an Indicator of Robot Functional and Social Acceptance: An Evaluation of Human Conformation to the iCub's Decisions

This study examines the acceptance of a humanoid robot, the iCub, by human users in both functional and social decision-making tasks under uncertainty. The experiment explores the role of trust as an indicator of robot acceptance, with a focus on conformation—users modifying their decisions to match the robot's. The authors consider three classes of features that might influence trust: user-related (desire for control), robot-related (attitude towards the social influence of robots), and context-related (collaborative vs. competitive scenario).

Study Design and Methodology

The experimental design involved fifty-six adult participants interacting with the iCub robot. The participants confronted two types of tasks:

  1. Functional tasks: Participants assessed perceptual properties of stimuli with objectively correct answers, such as comparing weights, pitches, and predominant colors.
  2. Social tasks: Participants made subjective evaluations, such as judging the appropriateness of items within specific social contexts.

The study utilized the Wizard-of-Oz paradigm: participants believed they were interacting with an autonomous robot, though it was controlled remotely. Each participant was assigned to one of three interaction scenarios: collaborative, competitive, or neutral.

Prior to robot interaction, participants completed two questionnaires aimed at assessing their desire for control and attitudes towards social influence of robots. This preparatory phase was designed to probe individual propensities influencing their conformation to the robot's decisions.
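The paper's trust measure is the rate at which participants conform to the robot's answers. As a rough illustration, the sketch below computes per-participant conformation rates split by task type from a list of trial records. The data format (`participant`, `task_type`, `final_answer`, `robot_answer`) is hypothetical, invented for this example, and not taken from the paper.

```python
# Hypothetical sketch: conformation rate = fraction of trials in which the
# participant's final answer matches the robot's answer, computed separately
# for functional and social tasks. Data format is assumed, not the paper's.
from collections import defaultdict

def conformation_rates(trials):
    """Return {(participant, task_type): fraction of conforming trials}."""
    counts = defaultdict(lambda: [0, 0])  # key -> [conforming trials, total trials]
    for t in trials:
        key = (t["participant"], t["task_type"])
        counts[key][1] += 1
        if t["final_answer"] == t["robot_answer"]:
            counts[key][0] += 1
    return {key: conforming / total for key, (conforming, total) in counts.items()}

# Invented example trials for one participant
trials = [
    {"participant": 1, "task_type": "functional", "final_answer": "A", "robot_answer": "A"},
    {"participant": 1, "task_type": "functional", "final_answer": "B", "robot_answer": "A"},
    {"participant": 1, "task_type": "social",     "final_answer": "B", "robot_answer": "A"},
]
rates = conformation_rates(trials)
# rates[(1, "functional")] == 0.5 and rates[(1, "social")] == 0.0
```

Rates like these, aggregated across participants, are what the study compares between functional and social tasks.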

Findings

Results indicated that participants were more likely to conform in functional tasks than in social tasks; only a small subset of participants conformed to the robot's answers on social issues. This suggests that while participants acknowledged the robot's functional competence, social acceptance was far less evident, perhaps reflecting a belief that robots lack the subjective experience needed for social savvy.

Interestingly, the few participants who conformed on social issues also conformed less on functional ones, challenging the assumption that social trust builds on functional trust. This dichotomy suggests that participants view the iCub either as a tool with technical expertise or as a social entity capable of participating in social tasks, rather than as both at once.

Implications

These observations underscore the complexities of human-robot trust dynamics, particularly the distinction between functional savvy and social savvy. The results reflect a nuanced understanding of robot acceptance across varying contexts of uncertainty: despite the push towards robots that are both functionally and socially capable, the study reveals a dichotomy in user perception and expectations.

Future Directions

Future investigations should explore whether the observed behavior depends on robots being perceived as "socially ignorant" or whether the intrinsic nature of tasks contributes to this trust divide. Additional research should encompass comparative interactions between humans and humanoid robots to illuminate distinctions in trust mechanisms across different agents.

Improving robot design to bridge the acceptance gap between functional and social savvy requires understanding how robots are perceived in terms of capability and reliability. Researchers could benefit from further exploring the intersection of task nature, robot appearance, and cultural influences. Additionally, expanding participant demographics and examining the longitudinal impact of repeated robot interactions could provide deeper insights into evolving trust dynamics.

In summary, this study contributes significantly to understanding human-robot interaction dynamics and raises compelling inquiries for further exploration into establishing holistic trust in robots as integral partners in everyday tasks.
