
Some models are useful, but for how long?: A decision theoretic approach to choosing when to refit large-scale prediction models

Published 22 May 2024 in stat.ME and econ.EM | (2405.13926v2)

Abstract: Large-scale prediction models using tools from AI or ML are increasingly common across a variety of industries and scientific domains. Despite their effectiveness, training AI and ML tools at scale can cost tens or hundreds of thousands of dollars (or more); and even after a model is trained, substantial resources must be invested to keep models up-to-date. This paper presents a decision-theoretic framework for deciding when to refit an AI/ML model when the goal is to perform unbiased statistical inference using partially AI/ML-generated data. Drawing on portfolio optimization theory, we treat the decision of recalibrating a model for statistical inference versus refitting the model as a choice between "investing" in one of two "assets." One asset, recalibrating the model based on another model, is quick and relatively inexpensive but bears uncertainty from sampling and may not be robust to model drift. The other asset, refitting the model, is costly but removes the drift concern (though not the statistical uncertainty from sampling). We present a framework for balancing these two potential investments while preserving statistical validity. We evaluate the framework using simulations and data on electricity usage and flu trend prediction.
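The two-asset framing in the abstract can be illustrated with a minimal sketch: compare the expected cost of each option, where recalibration is cheap but retains a drift penalty, while refitting is expensive but eliminates it. All costs, variances, and the drift penalty below are hypothetical placeholders for illustration, not the paper's actual decision criterion.

```python
# Hypothetical sketch of the recalibrate-vs-refit decision framed as a
# choice between two "assets," in the spirit of the portfolio analogy.
# All numeric quantities below are illustrative assumptions.

def expected_cost(action_cost, sampling_var, drift_penalty):
    """Total expected cost: direct monetary cost plus statistical risk terms."""
    return action_cost + sampling_var + drift_penalty

def choose_action(recal_cost, refit_cost, sampling_var, drift_penalty):
    """Pick the cheaper asset: recalibrating keeps the drift risk;
    refitting removes drift but not sampling uncertainty."""
    recal = expected_cost(recal_cost, sampling_var, drift_penalty)
    refit = expected_cost(refit_cost, sampling_var, 0.0)  # refit removes drift
    return "refit" if refit < recal else "recalibrate"

# Cheap recalibration wins while the drift penalty is small...
print(choose_action(recal_cost=1.0, refit_cost=50.0,
                    sampling_var=2.0, drift_penalty=5.0))    # -> recalibrate
# ...but a large drift penalty tips the balance toward refitting.
print(choose_action(recal_cost=1.0, refit_cost=50.0,
                    sampling_var=2.0, drift_penalty=120.0))  # -> refit
```

In the paper's setting these quantities would come from estimated sampling variances and a model of drift over time; the sketch only shows the trade-off's basic shape.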
