
ComplexityNet: Increasing LLM Inference Efficiency by Learning Task Complexity

Published 12 Dec 2023 in cs.CL, cs.AI, and cs.LG | arXiv:2312.11511v3

Abstract: We present ComplexityNet, a streamlined LLM designed for assessing task complexity. This model predicts the likelihood of accurate output by various LLMs, each with different capabilities. Our initial application of ComplexityNet involves the Mostly Basic Python Problems (MBPP) dataset. We pioneered the creation of the first set of labels to define task complexity. ComplexityNet achieved a notable 79% accuracy in determining task complexity, a significant improvement over the 34% accuracy of the original, non-fine-tuned model. Furthermore, ComplexityNet effectively reduces computational resource usage by 90% compared to using the highest-complexity model, while maintaining a high code generation accuracy of 86.7%. This study demonstrates that fine-tuning smaller models to categorize tasks by complexity can lead to a more balanced trade-off between accuracy and efficiency in the use of LLMs. Our findings suggest a promising direction for optimizing LLM applications, especially in resource-constrained environments.
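The core idea in the abstract — a small model labels each task's complexity, and the query is then routed to the cheapest LLM expected to solve it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the complexity classifier here is a trivial stand-in heuristic, and the model-tier names are invented for the example.

```python
def classify_complexity(task: str) -> int:
    """Stand-in for ComplexityNet: return a complexity label in {0, 1, 2}.
    A trivial word-count heuristic replaces the fine-tuned model here;
    the paper instead fine-tunes a small LLM on labeled MBPP tasks."""
    n = len(task.split())
    if n < 10:
        return 0
    if n < 30:
        return 1
    return 2

# Hypothetical model tiers, ordered from cheapest to most capable.
MODEL_TIERS = {
    0: "small-model",   # cheapest; easy tasks
    1: "medium-model",
    2: "large-model",   # most capable; most expensive
}

def route(task: str) -> str:
    """Dispatch a task to the least expensive tier predicted to succeed,
    which is how the abstract's 90% resource reduction arises: most
    tasks never reach the highest-complexity model."""
    return MODEL_TIERS[classify_complexity(task)]
```

Cost savings come from the routing step alone; accuracy depends entirely on how well the classifier's labels predict each tier's success rate.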

Citations (1)
