Text Classification in the LLM Era -- Where do we stand?

Published 17 Feb 2025 in cs.CL (arXiv:2502.11830v1)

Abstract: Large language models (LLMs) have revolutionized NLP and shown dramatic performance improvements across several tasks. In this paper, we investigate the role of such LLMs in text classification and how they compare with approaches relying on smaller pre-trained language models. Considering 32 datasets spanning 8 languages, we compared zero-shot classification, few-shot fine-tuning, and synthetic-data-based classifiers with classifiers built using the complete human-labeled dataset. Our results show that zero-shot approaches do well for sentiment classification but are outperformed by other approaches on the remaining tasks, and that synthetic data sourced from multiple LLMs can build better classifiers than zero-shot open LLMs. We also see wide performance disparities across languages in all classification scenarios. We expect these findings to guide practitioners developing text classification systems across languages.
