
Scaling Capability in Token Space: An Analysis of Large Vision Language Model

Published 24 Dec 2024 in cs.AI and cs.LG | (2412.18387v2)

Abstract: Scaling behavior has been widely validated in neural language models with respect to the number of parameters and the size of training data. One important question is whether a similar scaling capability also exists with respect to the number of vision tokens in large vision-language models. This study fills that gap by investigating the relationship between the number of vision tokens and the performance of vision-language models. Our theoretical analysis and empirical evaluations demonstrate that the model exhibits scalable performance \(S(N_l)\) with respect to the number of vision tokens \(N_l\), characterized by the relationship \(S(N_l) \approx (c/N_l)^{\alpha}\). Furthermore, we investigate the impact of a fusion mechanism that integrates the user's question with the vision tokens. The results reveal two key findings. First, the scaling capability remains intact when the fusion mechanism is incorporated. Second, the fusion mechanism enhances model performance, particularly when the user's question is task-specific and relevant. The analysis, conducted on fifteen diverse benchmarks spanning a broad range of tasks and domains, validates the effectiveness of the proposed approach.
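The scaling law \(S(N_l) \approx (c/N_l)^{\alpha}\) is linear in log-log space, so its constants can be recovered by a straight-line fit. The sketch below illustrates this on synthetic data; the values of \(c\) and \(\alpha\), the token counts, and the assumption that \(S\) is a quantity that decreases as \(N_l\) grows (as the formula's form implies) are all illustrative stand-ins, not estimates from the paper.

```python
import numpy as np

# Hypothetical constants for demonstration only; the paper fits such
# constants per benchmark, and these numbers are not taken from it.
true_c, true_alpha = 64.0, 0.3

# Synthetic vision-token counts and the scores the law would predict.
n_tokens = np.array([16, 32, 64, 128, 256, 576], dtype=float)
scores = (true_c / n_tokens) ** true_alpha

# Taking logs linearizes the law:
#   log S = alpha * log c - alpha * log N_l,
# so a least-squares line in log-log space has slope -alpha and
# intercept alpha * log c.
slope, intercept = np.polyfit(np.log(n_tokens), np.log(scores), 1)
alpha_hat = -slope
c_hat = np.exp(intercept / alpha_hat)

print(alpha_hat, c_hat)  # recovers the generating constants on clean data
```

On noiseless synthetic data the fit recovers the generating constants exactly; with real benchmark scores one would fit the same line to measured \((N_l, S)\) pairs.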
