Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
Published 20 Dec 2024 in cs.SE (arXiv:2412.15702v1)
Abstract: Despite the growing use of LLMs for providing feedback, limited research has explored how to achieve high-quality feedback. This case study introduces an evaluation framework for assessing different zero-shot prompt engineering methods. We varied the prompts systematically and analyzed the feedback provided on programming errors in R. The results suggest that prompts suggesting a stepwise procedure increase precision, while omitting explicit specifications of which provided data to analyze improves error identification.
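To make the kind of prompt variation the abstract describes concrete, here is a hypothetical sketch contrasting a baseline zero-shot prompt with a stepwise variant for feedback on an R error. The prompt wordings, the sample R snippet, and the function names are illustrative assumptions, not the paper's actual materials:

```python
# Hypothetical sketch of two zero-shot prompt variants for feedback on an
# R error, in the spirit of the study's systematic prompt variation.
# Wording and code sample are illustrative, not taken from the paper.

BUGGY_R_CODE = """\
x <- c(1, 2, 3)
mean(x[4])   # out-of-bounds index yields NA
"""

def plain_prompt(code: str) -> str:
    """Baseline zero-shot prompt: no procedural guidance."""
    return f"Give feedback on the errors in this R code:\n\n{code}"

def stepwise_prompt(code: str) -> str:
    """Variant suggesting a stepwise procedure, the kind of prompt the
    study found to increase the precision of the feedback."""
    steps = (
        "1. Identify each error in the code.\n"
        "2. Explain why it is an error.\n"
        "3. Suggest a minimal fix."
    )
    return (
        "Give feedback on the errors in this R code. "
        f"Proceed step by step:\n{steps}\n\n{code}"
    )

if __name__ == "__main__":
    # Either string would be sent to an LLM as a single zero-shot prompt.
    print(stepwise_prompt(BUGGY_R_CODE))
```

In an evaluation framework like the one described, each variant would be sent to the model under identical conditions and the resulting feedback scored for precision and error identification.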