Event Temporal Relation Extraction Based on Retrieval-Augmented LLMs
Abstract: Event temporal relation (TempRel) extraction is a core subtask of event relation extraction, and the inherent ambiguity of TempRel makes it difficult. With the rise of prompt engineering, designing effective prompt templates and verbalizers to elicit relevant knowledge has become important, yet manually designed templates struggle to capture precise temporal knowledge. This paper introduces a novel retrieval-augmented TempRel extraction approach that leverages knowledge retrieved from large language models (LLMs) to enhance prompt templates and verbalizers. The method draws on the complementary capabilities of multiple LLMs to generate a diverse set of candidate template and verbalizer designs, fully exploiting LLMs' generative potential and contributing additional knowledge to the design process. Empirical evaluations on three widely used datasets demonstrate that the method improves event temporal relation extraction performance.
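To make the template/verbalizer setup described in the abstract concrete, the sketch below shows a minimal cloze-style prompt template and a verbalizer that maps label words to TempRel classes. The template wording, label words, and the toy scoring input are illustrative assumptions, not the paper's actual artifacts; a real system would obtain the per-word scores from a masked language model's distribution at the `[MASK]` position.

```python
# Illustrative sketch (assumed names and label words, not the paper's exact design).

# Verbalizer: each TempRel class is mapped to label words that could fill [MASK].
VERBALIZER = {
    "BEFORE": ["before", "earlier"],
    "AFTER": ["after", "later"],
    "EQUAL": ["simultaneously", "meanwhile"],
    "VAGUE": ["possibly", "unclear"],
}

def build_prompt(sentence: str, e1: str, e2: str) -> str:
    """Cloze-style template: a masked LM fills [MASK] with a label word."""
    return f"{sentence} The event '{e1}' happened [MASK] the event '{e2}'."

def classify(mask_word_scores: dict) -> str:
    """Aggregate per-word scores into a class score by summing over each
    class's label words, then return the highest-scoring class."""
    best_label, best_score = None, float("-inf")
    for label, words in VERBALIZER.items():
        score = sum(mask_word_scores.get(w, 0.0) for w in words)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Usage with stand-in scores in place of a masked LM's output distribution.
prompt = build_prompt("He resigned; the board met the next day.",
                      "resigned", "met")
scores = {"before": 0.6, "after": 0.1, "earlier": 0.2}
print(classify(scores))  # -> BEFORE under these toy scores
```

The retrieval-augmented idea in the abstract amounts to generating and selecting among many such templates and verbalizer word sets using knowledge retrieved from LLMs, rather than fixing one manual design.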