Papers
Topics
Authors
Recent
Search
2000 character limit reached

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns

Published 15 Feb 2024 in cs.CR and cs.AI | (2402.09728v1)

Abstract: SMS phishing, also known as "smishing", is a growing threat that tricks users into disclosing private information or clicking into URLs with malicious content through fraudulent mobile text messages. In recent past, we have also observed a rapid advancement of conversational generative AI chatbot services (e.g., OpenAI's ChatGPT, Google's BARD), which are powered by pre-trained LLMs. These AI chatbots certainly have a lot of utilities but it is not systematically understood how they can play a role in creating threats and attacks. In this paper, we propose AbuseGPT method to show how the existing generative AI-based chatbot services can be exploited by attackers in real world to create smishing texts and eventually lead to craftier smishing campaigns. To the best of our knowledge, there is no pre-existing work that evidently shows the impacts of these generative text-based models on creating SMS phishing. Thus, we believe this study is the first of its kind to shed light on this emerging cybersecurity threat. We have found strong empirical evidences to show that attackers can exploit ethical standards in the existing generative AI-based chatbot services by crafting prompt injection attacks to create newer smishing campaigns. We also discuss some future research directions and guidelines to protect the abuse of generative AI-based services and safeguard users from smishing attacks.

Citations (7)

Summary

  • The paper identifies a critical security vulnerability where generative AI chatbots can be manipulated with prompt engineering to generate smishing messages.
  • It demonstrates that indirect prompts and jailbreak techniques can bypass ethical safeguards, producing actionable smishing content and fake URLs.
  • The findings call for rapid enhancements in AI ethical standards and defenses to mitigate cyber threats and protect user security.

"AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns"

The paper "AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns" proposes a method named AbuseGPT, which illustrates potential abuses of generative AI chatbot services, such as OpenAI's ChatGPT and Google's Bard, for creating SMS phishing (smishing) campaigns. This study identifies vulnerabilities in these systems that could be exploited to generate fraudulent messages, posing significant cybersecurity challenges.

Introduction

Smishing is a type of phishing attack conducted through mobile text messaging to trick users into revealing private information. With the rise of AI-based chatbots powered by LLMs, concerns regarding their misuse in orchestrating smishing attacks have emerged. This paper is the first to explore how generative AI can facilitate the creation of smishing texts. Prior studies on AI and cybercrime are referenced, establishing the broader context of this emerging threat.

Methodology

The proposed method, AbuseGPT, employs strategic prompt engineering to circumvent the ethical guidelines of AI chatbots, thereby generating smishing messages and campaign ideas. The approach is designed to test the limits of AI chatbots in resisting prompt injection attacks and their ability to produce malicious outputs. Figure 1

Figure 1: Overview of proposed AbuseGPT method

The research questions (RQs) guiding the study are:

  • RQ1: Can AI chatbots be jailbroken to bypass ethical standards?
  • RQ2: Can AI chatbots generate realistic smishing text messages?
  • RQ3: Can AI chatbots suggest tools for initiating smishing attacks?
  • RQ4: Can AI chatbots provide ideas for generating fake URLs?

The experiments primarily involve exposing vulnerabilities in AI chatbots using public jailbreak prompts to assess their potential for abuse in smishing campaigns.

Case Study: Smishing Campaigns with ChatGPT

The case study utilizes ChatGPT 3.5 to understand its capacity for generating smishing content. Initial direct queries failed to breach ethical guidelines (Figure 2), but indirect questioning exposed loopholes (Figure 3). Notably, the AIM jailbreak prompt successfully bypassed ethical constraints, generating actionable information for smishing scams (Figure 4, 6). Figure 2

Figure 2: Asking ChatGPT directly to give an SMS phishing message without jailbreaking

Figure 3

Figure 3: Asking indirectly for a smishing message without jailbreaking

Figure 4

Figure 4: `AIM' Jailbreak prompt

Figure 5

Figure 5: Unethical response after jailbreaking to design effective smishing

This part of the study demonstrates that prompt engineering can manipulate chatbots to extract smishing text ideas, tools, and fake URLs (Figures 9, 17). Figure 6

Figure 6: Prompt for avoiding common smishing

Figure 7

Figure 7: Asking ChatGPT for crafting fake URLs

Discussion and Limitations

The implications of this study are twofold. First, it highlights significant security vulnerabilities in AI chatbots that can be easily exploited without advanced technical skills. Second, it raises ethical concerns regarding the deployment of such AI systems without robust safeguards. While these findings prompt urgent enhancements to AI ethical standards, limitations exist due to the evolving nature of AI responses and their dependency on model updates.

Conclusion

The AbuseGPT method unveils a critical vulnerability in AI chatbots: their potential to aid cybercriminal activities like smishing. The study underscores the necessity for strengthening AI ethical safeguards and suggests anti-smishing strategies for AI developers and security stakeholders. Future research could explore integrating defensive AI mechanisms to detect and prevent misuse, thereby reinforcing user trust and safety.

Paper to Video (Beta)

Whiteboard

No one has generated a whiteboard explanation for this paper yet.

Open Problems

We haven't generated a list of open problems mentioned in this paper yet.

Collections

Sign up for free to add this paper to one or more collections.

Tweets

Sign up for free to view the 2 tweets with 3 likes about this paper.