
Revisiting Automated Topic Model Evaluation with Large Language Models

Published 20 May 2023 in cs.CL (arXiv:2305.12152v2)

Abstract: Topic models are used to make sense of large text collections. However, automatically evaluating topic model output and determining the optimal number of topics have both been longstanding challenges, with no effective automated solutions to date. This paper proposes using LLMs to evaluate such output. We find that LLMs appropriately assess the resulting topics, correlating more strongly with human judgments than existing automated metrics. We then investigate whether LLMs can be used to automatically determine the optimal number of topics. We find that automatically assigning labels to documents and selecting the configuration with the purest labels yields reasonable estimates of the optimal number of topics.
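The purity-based selection idea from the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: it assumes LLM-assigned document labels are already available (the LLM call itself is out of scope here), and it scores each candidate topic count by purity, i.e. the fraction of documents whose label matches the majority label of their assigned topic.

```python
from collections import Counter

def purity(topic_assignments, llm_labels):
    """Fraction of documents whose LLM label matches their topic's majority label."""
    by_topic = {}
    for topic, label in zip(topic_assignments, llm_labels):
        by_topic.setdefault(topic, []).append(label)
    # For each topic, count documents carrying that topic's most common label.
    matched = sum(Counter(labels).most_common(1)[0][1] for labels in by_topic.values())
    return matched / len(topic_assignments)

def best_num_topics(candidate_runs, llm_labels):
    """candidate_runs maps a candidate k to the per-document topic assignments
    of a topic model trained with k topics; returns the k with purest labels."""
    return max(candidate_runs, key=lambda k: purity(candidate_runs[k], llm_labels))

# Toy example: four documents with LLM-assigned labels, two candidate models.
labels = ["sports", "sports", "politics", "politics"]
runs = {1: [0, 0, 0, 0], 2: [0, 0, 1, 1]}
print(best_num_topics(runs, labels))  # prints 2
```

Note that raw purity tends to increase with the number of topics (more, smaller topics are easier to keep pure), so in practice the selection would likely need a tie-breaking or regularization choice that this sketch does not model.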

Citations (4)
