Evaluating Large Language Models for the Generation of Unit Tests with Equivalence Partitions and Boundary Values

Published 14 May 2025 in cs.SE and cs.AI (arXiv:2505.09830v1)

Abstract: The design and implementation of unit tests is a complex task that many programmers neglect. This research evaluates the potential of LLMs in automatically generating test cases, comparing them with manual tests. An optimized prompt was developed that integrates code and requirements, covering critical cases such as equivalence partitions and boundary values. The strengths and weaknesses of LLMs versus trained programmers were compared through quantitative metrics and manual qualitative analysis. The results show that the effectiveness of LLMs depends on well-designed prompts, robust implementation, and precise requirements. Although flexible and promising, LLMs still require human supervision. This work highlights the importance of manual qualitative analysis as an essential complement to automation in unit test evaluation.
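To make the two testing techniques named in the abstract concrete, the following is a minimal sketch (not taken from the paper) of equivalence-partition and boundary-value unit tests in Python's unittest framework. The function under test, classify_age, and its 0-120 valid range are hypothetical examples chosen for illustration.

```python
import unittest


def classify_age(age: int) -> str:
    """Classify an age; valid inputs are 0-120 inclusive."""
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"


class TestClassifyAge(unittest.TestCase):
    # Equivalence partitions: one representative value per input class.
    def test_minor_partition(self):
        self.assertEqual(classify_age(10), "minor")

    def test_adult_partition(self):
        self.assertEqual(classify_age(40), "adult")

    # Boundary values: the edges of each partition, where bugs cluster.
    def test_boundaries(self):
        self.assertEqual(classify_age(0), "minor")    # lower bound of valid range
        self.assertEqual(classify_age(17), "minor")   # just below partition boundary
        self.assertEqual(classify_age(18), "adult")   # partition boundary itself
        self.assertEqual(classify_age(120), "adult")  # upper bound of valid range

    # Invalid partitions: values just outside the valid range.
    def test_invalid_inputs(self):
        for bad in (-1, 121):
            with self.assertRaises(ValueError):
                classify_age(bad)


if __name__ == "__main__":
    unittest.main()
```

Cases like 0, 17, 18, and 120 (boundaries) and -1 and 121 (invalid partitions) are the kind of critical cases the paper's optimized prompt asks an LLM to cover from the code and its requirements.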
