SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning
Abstract: LLMs can transfer their reasoning skills to smaller models by teaching them to generate the intermediate reasoning steps required to solve multi-step reasoning tasks. While LLMs can accurately solve reasoning tasks through a variety of strategies, even without fine-tuning, smaller models are not expressive enough to fit the LLMs' distribution across all strategies when distilled and tend to prioritize one strategy over the others. This reliance on a single strategy poses a challenge when smaller models attempt tasks that are difficult to solve with their preferred strategy. To address this, we propose SIKeD (Self-guided Iterative Knowledge Distillation for mathematical reasoning), a distillation method in which the LLM teaches the smaller model to approach a task using different strategies, and the smaller model uses its self-generated on-policy outputs to choose the strategy most suitable for the given task. Training continues in a self-guided iterative manner: at each iteration, a decision is made on how to combine the LLM data with the self-generated outputs. Unlike traditional distillation methods, SIKeD allows the smaller model to learn which strategy suits a given task while continuously learning to solve tasks using different strategies. Our experiments on several mathematical reasoning datasets show that SIKeD significantly outperforms traditional distillation techniques across smaller models of different sizes. Our code is available at: https://github.com/kumar-shridhar/SIKeD
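The iterative loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `teacher_solve`, `student_generate`, and `is_correct` are hypothetical stubs standing in for the LLM teacher, the small student model, and an answer checker, and `mix_ratio` is an assumed knob for how teacher data and verified on-policy data are combined each round.

```python
import random

def teacher_solve(question, strategy):
    # Stub: the LLM teacher produces a rationale using a given strategy
    # (e.g. chain-of-thought, program-of-thought, least-to-most).
    return f"{strategy} rationale for {question}"

def student_generate(question, strategies):
    # Stub: the small model samples an on-policy solution,
    # choosing a strategy on its own.
    strategy = random.choice(strategies)
    return strategy, f"{strategy} attempt at {question}"

def is_correct(solution, answer):
    # Stub correctness check; in practice, compare extracted final answers.
    return True

def siked_iteration(questions, answers, strategies, mix_ratio=0.5):
    """One SIKeD-style iteration: combine fixed teacher (LLM) data with the
    student's own correct on-policy generations into a training set.
    `mix_ratio` sets the fraction of teacher data retained this round."""
    teacher_data = [(q, teacher_solve(q, s))
                    for q in questions for s in strategies]
    self_data = []
    for q, a in zip(questions, answers):
        _, attempt = student_generate(q, strategies)
        if is_correct(attempt, a):      # keep only verified self-outputs
            self_data.append((q, attempt))
    # Decide how to mix: subsample teacher data, keep all verified self-data.
    k = int(mix_ratio * len(teacher_data))
    return random.sample(teacher_data, k) + self_data
```

In the actual method, the resulting mixed set would be used to fine-tune the student before the next iteration, letting the student's own strategy preferences shape later training rounds.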