WIBA: What Is Being Argued? A Comprehensive Approach to Argument Mining
Abstract: We propose WIBA, a novel framework and suite of methods for comprehensively understanding "What Is Being Argued" across contexts. Our framework detects (a) the existence, (b) the topic, and (c) the stance of an argument, correctly accounting for the logical dependence among the three tasks. Our approach leverages fine-tuning and prompt engineering of large language models (LLMs). We evaluate our approach and show that it performs well on all three tasks. First, we develop and release an Argument Detection model that classifies a piece of text as an argument with an F1 score between 79% and 86% on three different benchmark datasets. Second, we release an LLM that identifies the topic being argued in a sentence, whether implicit or explicit, with an average similarity score of 71%, outperforming current naive methods by nearly 40%. Finally, we develop a method for Argument Stance Classification and show that it achieves an F1 score between 71% and 78% across three diverse benchmark datasets. Our evaluation demonstrates that WIBA enables a comprehensive understanding of What Is Being Argued in large corpora across diverse contexts, which is of core interest to many applications in linguistics, communication, and social and computer science. To make these advancements accessible, we release WIBA as a free, open-access platform (wiba.dev).
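The logical dependence among the three tasks can be sketched as a short pipeline: topic and stance are only computed once a text is judged to be an argument, and stance is defined relative to the extracted topic. The sketch below illustrates this composition only; the three model functions are hypothetical keyword stand-ins, not the paper's fine-tuned LLMs.

```python
# Minimal sketch of the three-stage WIBA pipeline: detection -> topic -> stance.
# The three model functions are crude keyword heuristics standing in for the
# paper's fine-tuned LLMs; only the task composition reflects the abstract.

def detect_argument(text: str) -> bool:
    # Stand-in for the Argument Detection model (task a).
    return any(cue in text.lower() for cue in ("should", "must", "because"))

def extract_topic(text: str) -> str:
    # Stand-in for the topic-extraction LLM (task b); real topics may be
    # implicit and need not appear verbatim in the sentence.
    lowered = text.lower()
    for cue in ("should", "must"):
        if cue in lowered:
            return lowered.split(cue, 1)[1].strip(" .")
    return lowered.strip(" .")

def classify_stance(text: str, topic: str) -> str:
    # Stand-in for Argument Stance Classification (task c), relative to topic.
    return "against" if any(w in text.lower() for w in ("ban", "not", "never")) else "favor"

def wiba(text: str) -> dict:
    # Tasks are logically dependent: topic and stance are meaningful
    # only if the text is an argument in the first place.
    if not detect_argument(text):
        return {"is_argument": False, "topic": None, "stance": None}
    topic = extract_topic(text)
    return {"is_argument": True, "topic": topic,
            "stance": classify_stance(text, topic)}
```

Under this composition, a non-argumentative sentence short-circuits the pipeline with no topic or stance, mirroring the dependence the abstract describes.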