Comparing large language models for supervised analysis of students' lab notes
Abstract: Recent advancements in LLMs hold significant promise for improving physics education research that uses machine learning. In this study, we compare the application of various models to perform large-scale analysis of written text, grounded in a physics education research classification problem: identifying skills in students' typed lab notes through sentence-level labeling. Specifically, we use training data to fine-tune two different LLMs, BERT and LLaMA, and compare the performance of these models to both a traditional bag-of-words approach and a few-shot LLM (without fine-tuning). We evaluate the models based on their resource use, performance metrics, and research outcomes when identifying skills in lab notes. We find that higher-resource models often, but not always, perform better than lower-resource models. We also find that all models estimate similar trends in research outcomes, although the absolute values of the estimated measurements are not always within uncertainties of each other. We use the results to discuss relevant considerations for education researchers seeking to select a model type to use as a classifier.
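As a rough illustration of the fine-tuning workflow the abstract describes, the sketch below shows sentence-level classification with a fine-tuned BERT model via the Hugging Face `transformers` library. This is not the authors' code: the label set, example sentences, and hyperparameters are hypothetical placeholders, and the actual study's data and configuration may differ.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of
# fine-tuning BERT for sentence-level labeling of students' lab notes.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical binary label set: does a sentence exhibit the target skill?
train_sentences = [
    "We measured the pendulum period five times and averaged the results.",
    "The lab session ended early.",
]
train_labels = [1, 0]  # 1 = skill present, 0 = skill absent (illustrative)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Tokenize each sentence independently; labeling is per sentence.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": train_sentences, "label": train_labels})
dataset = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="bert-lab-notes",       # placeholder output path
    num_train_epochs=3,                # illustrative hyperparameters
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```

The bag-of-words baseline mentioned in the abstract would follow the same sentence-level setup but replace the transformer with, for example, a count-vectorizer feeding a logistic regression classifier, trading accuracy for far lower compute cost.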