ORCA: A Challenging Benchmark for Arabic Language Understanding

Published 21 Dec 2022 in cs.CL and cs.AI | arXiv:2212.10758v2

Abstract: Due to the crucial role they play across NLP, several benchmarks have been proposed to evaluate pretrained LLMs. In spite of these efforts, no public benchmark of diverse nature currently exists for the evaluation of Arabic, making it challenging to measure progress for both Arabic and multilingual LLMs. The challenge is compounded by the fact that any benchmark targeting Arabic must account for Arabic being not a single language but a collection of languages and varieties. In this work, we introduce ORCA, a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks, exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic LLMs. We also provide a public leaderboard with a unified single-number evaluation metric (ORCA score) to facilitate future research.
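The abstract describes a unified single-number metric (the ORCA score) aggregated over seven task clusters. As a minimal sketch of how such a cluster-balanced aggregate could work (an assumption; the paper's exact formula, cluster names, and scores below are not taken from the source), one could macro-average per-task scores within each cluster and then average across clusters, so that a cluster with many datasets does not dominate:

```python
# Hypothetical sketch of a cluster-balanced, single-number benchmark score.
# Assumption: the real ORCA score may be computed differently; the cluster
# names and per-dataset scores below are invented for illustration.
from statistics import mean

def orca_style_score(cluster_scores: dict[str, list[float]]) -> float:
    """Macro-average: mean over clusters of each cluster's mean task score."""
    return mean(mean(scores) for scores in cluster_scores.values())

# Invented example: seven NLU task clusters with per-dataset scores.
results = {
    "sentence_classification": [0.81, 0.77, 0.90],
    "structured_prediction": [0.88, 0.84],
    "semantic_similarity": [0.72],
    "natural_language_inference": [0.69, 0.74],
    "question_answering": [0.63],
    "word_sense_disambiguation": [0.58],
    "text_classification": [0.79, 0.83, 0.76],
}
print(round(orca_style_score(results), 4))
```

With this scheme a single-dataset cluster carries the same weight as a three-dataset cluster, which is one common way leaderboards keep heterogeneous task groups comparable.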

Citations (35)
