
Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content

Published 5 Sep 2023 in cs.HC and cs.AI | (2309.02524v1)

Abstract: This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by LLMs, like the GPT LLM family that powers ChatGPT, in different user interface versions. Surprisingly, our results demonstrate that regardless of the user interface presentation, participants tend to attribute similar levels of credibility. While participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content, they rate AI-generated content as being clearer and more engaging. The findings from this study serve as a call for a more discerning approach to evaluating information sources, encouraging users to exercise caution and critical thinking when engaging with content generated by AI systems.

Citations (6)

Summary

  • The paper finds that user interface variations did not significantly affect perceived credibility, as users rated competence and trust similarly across formats.
  • The paper shows that AI-generated content was perceived as clearer and more engaging than human-generated content.
  • The paper highlights the importance of critical evaluation to mitigate misinformation risks arising from the appealing nature of AI-generated texts.

Perceived Credibility of Human and AI-Generated Content: An Analysis of User Trust in ChatGPT

This paper offers a structured analysis of the comparative perceived credibility of human-authored content versus AI-generated output, with particular emphasis on LLMs such as those powering ChatGPT. The research is timely given the steady rise of AI applications for generating and disseminating information, and it raises significant considerations for both user awareness and the inherent biases of these systems.

Study Methodology and Setup

The authors adopted a comprehensive approach involving 606 participants, engaging them with texts presented in three distinct user interface (UI) versions: ChatGPT UI, Raw Text UI, and Wikipedia UI. The content was either human-written or AI-generated (specifically, text produced by ChatGPT). Participants rated the credibility of the content they encountered along key dimensions: competence, trustworthiness, clarity, and engagement.
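The paper's exact assignment procedure is not reproduced here; as a rough illustration only, the 3 × 2 design described above (three UI conditions crossed with two content origins, 606 participants) can be sketched as a balanced between-subjects allocation. The round-robin scheme below is an assumption for clarity, not the study's documented method:

```python
from itertools import product

UI_CONDITIONS = ["ChatGPT UI", "Raw Text UI", "Wikipedia UI"]
CONTENT_ORIGINS = ["human-written", "AI-generated"]

# All 3 x 2 = 6 experimental cells of the design.
cells = list(product(UI_CONDITIONS, CONTENT_ORIGINS))

def assign(n_participants: int) -> dict:
    """Evenly assign participants to cells in round-robin order.

    Illustrative only: the actual study may have used a different
    allocation strategy or a within-subjects design.
    """
    allocation = {cell: 0 for cell in cells}
    for i in range(n_participants):
        allocation[cells[i % len(cells)]] += 1
    return allocation

allocation = assign(606)
# 606 participants split over 6 cells gives 101 per cell.
```

With 606 participants and six cells, a balanced split yields exactly 101 participants per UI-by-origin condition, which is the kind of cell size that supports the competence/trustworthiness comparisons reported below.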

Key Observations

UI Conditions and Credibility Perception: One of the central findings was that user interface variations did not have a significant impact on the perception of content credibility. Participants attributed similar levels of competence and trustworthiness to the content, irrespective of the UI rendering.

Comparison of Content Origin: Notably, while content origin did not substantially shift perceptions of competence and trustworthiness, AI-generated content was consistently perceived as clearer and more engaging than its human-written counterpart. This difference gives AI-generated content an advantage in engaging users, but it also poses misinformation risks given the known inaccuracies of AI outputs.

Implications of Findings

These findings necessitate a cautious approach to interpreting AI-generated content. Although the perceived polish (clarity and engagement) of AI-generated texts may captivate users, the perception of equivalent expertise and reliability between AI and human content raises concerns. Such perceptions overlook the fallibility and potential hallucinations of AI systems, which stem in part from their reliance on extensive, yet not always reliable, training data.

In practical terms, this study underscores the need for rigorous discernment and critical evaluation by consumers of AI-generated content. The finding that AI-generated content is perceived as more engaging places additional responsibility on developers to mitigate misinformation risks. Furthermore, as these technologies become more ubiquitous, there is a pressing need for educational strategies that improve public understanding of AI's inherent limitations, promoting informed consumption and safeguarding against misinformation.

Future Prospects and Research Directions

The continuous evolution of LLMs demands ongoing scrutiny of their broader societal impacts and of user perception, particularly how perceived AI credibility shifts over time, a question that warrants longitudinal investigation. Exploring diverse content types and broadening demographic diversity in future research could offer more comprehensive insights into these dynamics.

Ultimately, this study makes an essential contribution to the foundational understanding required for navigating the rapidly transforming landscape of AI in content generation. It underscores the need for mindful interaction between humans and machines, fostering an environment where AI serves as an augmentative force rather than a source of ambiguity and misinformation.
