
Empowering Lightweight MLLMs with Reasoning via Long CoT SFT

Published 3 Sep 2025 in cs.CV (arXiv:2509.03321v1)

Abstract: While Reinforcement Learning with Verifiable Rewards (RLVR) has enhanced the reasoning of large-scale LLMs, its efficacy for lightweight multimodal LLMs (MLLMs) with fewer than seven billion parameters remains underexplored. This paper investigates the role of long Chain-of-Thought (long CoT) data in enhancing the reasoning abilities of such MLLMs. Our findings demonstrate that Supervised Fine-Tuning (SFT) with long CoT data significantly improves MLLM reasoning. Furthermore, we observe that after this initial SFT phase, MLLMs can achieve additional performance gains through a subsequent RL stage. We conclude that an SFT stage with long CoT data is a critical prerequisite for developing the reasoning capabilities of lightweight MLLMs.
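The RLVR stage mentioned in the abstract relies on a reward that can be checked programmatically rather than learned. A minimal sketch of such a verifier is shown below; it assumes final answers are wrapped in a LaTeX `\boxed{}` marker, which is a common convention in math-reasoning pipelines but is not specified by this paper.

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary verifiable reward: 1.0 if the model's final boxed answer
    exactly matches the ground truth after stripping whitespace, else 0.0.

    Assumes the completion marks its answer as \\boxed{...} (hypothetical
    convention for this sketch, not confirmed by the paper).
    """
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    if match is None:
        # No parseable answer: treat as incorrect.
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0
```

In an SFT-then-RL pipeline of the kind the abstract describes, a reward like this would score sampled rollouts during the second stage, with the long-CoT SFT stage first teaching the model to emit the structured reasoning trace the verifier can parse.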

