Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning

Published 13 Sep 2024 in cs.HC, cs.AI, and cs.LG (arXiv:2409.08980v2)

Abstract: Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.

Summary

  • The paper demonstrates that risk-benefit perceptions drive trust in autonomous vehicles with an 85.8% prediction accuracy.
  • It employs random forest and SHAP to analyze 130 features spanning psychosocial, driving, and cognitive domains.
  • The research suggests that transparent communication of AV benefits and system explainability can enhance user trust and adoption.

Trust in Autonomous Vehicles: A Machine Learning Approach to Understanding Young Adult Perspectives

The research presented in "Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning" offers a robust analysis of the factors contributing to young adults' trust in autonomous vehicles (AVs). The study underscores the significant barrier posed by low trust to AV adoption and seeks to enhance comprehension of the individual traits, attitudes, and experiences that influence trust judgments. Leveraging a comprehensive set of 130 distinct input features drawn from surveys, the researchers explore a range of characteristics from psychosocial and driving to cognitive and experiential domains using advanced machine learning techniques.

Key Findings and Analysis

The machine learning model, a random forest classifier, achieved 85.8% accuracy in classifying individuals into high- and low-trust categories. The model was then interpreted using SHapley Additive exPlanations (SHAP) to quantify how much each factor contributed to trust assessments. A striking result is the predominance of risk-benefit perceptions in predicting AV trust: a comprehensive evaluation of the risks and benefits associated with AVs emerged as the paramount determinant, with 12 of the 20 most influential features relating to this aspect. This finding reinforces the point that understanding and effectively communicating the risk-benefit trade-off can significantly enhance user trust in AVs.
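The modeling pipeline described above can be sketched as follows. This is an illustrative approximation, not the authors' code: the data here is synthetic (generated to mirror the paper's stated dimensions of 1,457 respondents and 130 features), and scikit-learn's built-in impurity importances stand in for the SHAP attribution step the paper actually used (via the `shap` library's `TreeExplainer`), since both produce a global feature ranking.

```python
# Hypothetical sketch of the paper's pipeline: a random forest trained on
# survey-derived features, followed by a feature-attribution step.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey data: 1457 respondents, 130 features,
# binary high/low trust label. Only the shape mirrors the paper's setup.
X, y = make_classification(n_samples=1457, n_features=130,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.3f}")

# Rank features by importance -- analogous to inspecting the top-20 SHAP
# features, though impurity importance is a coarser global measure than
# per-prediction SHAP values.
top20 = np.argsort(model.feature_importances_)[::-1][:20]
print("top-20 feature indices:", top20)
```

On real survey data the attribution step matters: SHAP values show the direction and magnitude of each feature's effect per respondent, which is how the paper could conclude that risk-benefit items dominated the top-20 ranking.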

Contrary to prevailing assumptions, psychosocial factors such as personality, self-esteem, and risk preferences, as well as many driving-specific factors like driving style and experience, did not substantially predict trust levels. Instead, perceptions of AV feasibility, institutional trust, and prior experience played more consequential roles, although still secondary to risk-benefit evaluations.

Implications for Design and Research

The study's implications illuminate pathways toward designing more trustworthy AV systems. Prioritizing communication of unique AV benefits, such as safety enhancements and emission reductions, can refine user trust perceptions. Furthermore, advancing explainable AI (XAI) models that elucidate AV decision-making in relatable, human-like terms may also foster acceptance. Aligning AV design with user transparency and inclusivity principles holds promise for mitigating prevalent concerns, especially around system reliability and control relinquishment.

The insights extend to policy discussions, advocating for enhanced regulatory frameworks that underscore safety and privacy standards, thereby reinforcing institutional trust. Given the less significant role of demographic factors such as age and education within this young adult cohort, future research should consider subgroup-specific characteristics to fine-tune trust predictions across diverse populations.

Conclusion

This investigation sets a benchmark in understanding trust dynamics in AVs through an integrative machine learning methodology. The emphasis on risk-benefit perceptions over psychosocial attributes redefines the landscape of trust prediction, offering nuanced insights for designing adaptive, transparent AV systems that resonate with user values and expectations. As we look to a future where AV adoption becomes the norm, such insights will be indispensable in crafting AV technologies that harness trust as a key enabler of societal integration and acceptance.
