Revisiting Automated Topic Model Evaluation with Large Language Models
Abstract: Topic models are used to make sense of large text collections. However, automatically evaluating topic model output and determining the optimal number of topics have both been longstanding challenges, with no effective automated solutions to date. This paper proposes using LLMs to evaluate such output. We find that LLMs appropriately assess the resulting topics, correlating more strongly with human judgments than existing automated metrics do. We then investigate whether LLMs can automatically determine the optimal number of topics: assigning labels to documents with an LLM and choosing the configuration with the purest labels returns reasonable values for the optimal number of topics.
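The abstract's selection criterion, choosing the number of topics whose clusters have the "purest" LLM-assigned labels, can be sketched with a standard cluster-purity score. The following is a minimal illustration, not the paper's implementation: the label list and the per-K topic assignments are hypothetical stand-ins for LLM output and trained topic models.

```python
from collections import Counter

def purity(topic_assignments, labels):
    """Fraction of documents whose label matches the majority label
    of their assigned topic (higher = purer topics)."""
    clusters = {}
    for topic, label in zip(topic_assignments, labels):
        clusters.setdefault(topic, []).append(label)
    majority_total = sum(Counter(ls).most_common(1)[0][1]
                         for ls in clusters.values())
    return majority_total / len(labels)

# Hypothetical LLM-assigned labels for 8 documents.
llm_labels = ["sports", "sports", "politics", "politics",
              "tech", "tech", "sports", "politics"]

# Hypothetical topic assignments from models with different topic counts K.
candidate_models = {
    2: [0, 0, 1, 1, 1, 1, 0, 1],
    3: [0, 0, 1, 1, 2, 2, 0, 1],
    4: [0, 0, 1, 1, 2, 3, 0, 1],
}

# Pick the K whose topics are purest with respect to the LLM labels.
best_k = max(candidate_models,
             key=lambda k: purity(candidate_models[k], llm_labels))
```

Note that raw purity trivially improves as K grows (singleton topics are perfectly pure), so in practice a tie-break or penalty favoring smaller K is needed; here `max` keeps the first K that reaches the top score.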