Some models are useful, but for how long?: A decision theoretic approach to choosing when to refit large-scale prediction models
Abstract: Large-scale prediction models using tools from AI or ML are increasingly common across a variety of industries and scientific domains. Despite their effectiveness, training AI and ML tools at scale can cost tens or hundreds of thousands of dollars (or more); and even after a model is trained, substantial resources must be invested to keep it up-to-date. This paper presents a decision-theoretic framework for deciding when to refit an AI/ML model when the goal is to perform unbiased statistical inference using partially AI/ML-generated data. Drawing on portfolio optimization theory, we treat the decision of recalibrating the model or statistical inference versus refitting the model as a choice between "investing" in one of two "assets." One asset, recalibrating the model based on another model, is quick and relatively inexpensive but bears uncertainty from sampling and may not be robust to model drift. The other asset, refitting the model, is costly but removes the drift concern (though not statistical uncertainty from sampling). We present a framework for balancing these two potential investments while preserving statistical validity. We evaluate the framework using simulation and data on electricity usage and predicting flu trends.
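To make the portfolio-style tradeoff concrete, here is a minimal sketch of a recalibrate-versus-refit decision rule. This is an illustrative assumption, not the paper's actual criterion: the function name, parameters, and the risk-adjusted loss form are all hypothetical stand-ins for the two "assets" described in the abstract.

```python
# Hypothetical sketch of the recalibrate-vs-refit decision as a choice
# between two "assets." All names and the loss form are illustrative
# assumptions, not the paper's method.

def choose_action(recal_error_var: float,
                  drift_penalty: float,
                  refit_error_var: float,
                  refit_cost: float,
                  risk_aversion: float = 1.0) -> str:
    """Return which 'asset' to invest in: 'recalibrate' or 'refit'.

    recal_error_var : sampling variance of the recalibrated inference
    drift_penalty   : expected loss from possible model drift if we only
                      recalibrate (drift is this asset's main risk)
    refit_error_var : sampling variance after refitting (drift removed,
                      but sampling uncertainty remains)
    refit_cost      : cost of refitting, expressed on the loss scale
    """
    # Lower risk-adjusted loss is better; risk_aversion weights how much
    # we penalize statistical uncertainty relative to fixed costs.
    loss_recal = drift_penalty + risk_aversion * recal_error_var
    loss_refit = refit_cost + risk_aversion * refit_error_var
    return "refit" if loss_refit < loss_recal else "recalibrate"


# Example: refitting is expensive, but heavy suspected drift makes it
# the better investment here.
print(choose_action(recal_error_var=0.04, drift_penalty=0.50,
                    refit_error_var=0.02, refit_cost=0.30))  # -> refit
```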