Chatbots to ChatGPT in a Cybersecurity Space: Evolution, Vulnerabilities, Attacks, Challenges, and Future Recommendations
Abstract: Chatbots have shifted from rule-based systems to artificial intelligence techniques and have gained traction in medicine, shopping, customer service, food delivery, education, and research. OpenAI's ChatGPT took the Internet by storm, crossing one million users within five days of its launch. With this growing popularity, however, chatbots have become exposed to cybersecurity threats and vulnerabilities. This paper discusses the relevant literature, reports, and illustrative incidents of attacks against chatbots. We first trace the timeline of chatbots from ELIZA (an early natural language processing program) to GPT-4 and describe the working mechanism of ChatGPT. We then examine the cybersecurity attacks and vulnerabilities affecting chatbots. In addition, we investigate ChatGPT specifically in the context of creating malware code, phishing emails, undetectable zero-day attacks, and the generation of macros and LOLBINs. Furthermore, the history of cyberattacks and of vulnerabilities exploited by cybercriminals is discussed, with particular attention to the risks and vulnerabilities in ChatGPT. Addressing these threats and vulnerabilities requires specific strategies and measures to reduce their harmful consequences; accordingly, future directions for addressing these challenges are presented.