Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment
Abstract: As the capabilities of LLMs continue to expand, aligning these models with human values remains a significant challenge. Recent studies show that reasoning abilities contribute significantly to model safety, and that integrating Mixture-of-Experts (MoE) architectures can further enhance alignment. In this work, we address a fundamental question: how can reasoning abilities and MoE architectures be effectively incorporated into the self-alignment process of LLMs? We propose Mixture of insighTful Experts (MoTE), a novel framework that synergistically combines reasoning chains and expert mixtures to improve self-alignment. From a data perspective, MoTE employs a structured reasoning chain comprising four key stages: Question Analysis, Answer Guidance, Safe Answer, and Safety Checking. This approach enhances safety through multi-step reasoning and proves effective even for smaller and less powerful LLMs (e.g., 7B models). From an architectural perspective, MoTE adopts a multi-LoRA framework with step-level routing, where each expert is dedicated to a specific reasoning step. This design eliminates the need for balance losses, ensures stable training, and supports adaptive inference lengths. Experimental results demonstrate that MoTE significantly improves model safety, jailbreak resistance, and handling of over-refusal, achieving performance comparable to OpenAI's state-of-the-art o1 model.
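The step-level routing described in the abstract can be illustrated with a minimal sketch: a frozen base linear layer augmented with one LoRA adapter per reasoning stage, where the current stage index deterministically selects the expert. Because routing is tied to the reasoning step rather than learned per token, no router network or load-balancing loss is required, which is the stability property the abstract claims. All class and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank update: x -> (alpha/r) * x @ A^T @ B^T, with B zero-initialized."""

    def __init__(self, dim, rank=8, alpha=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(dim, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.scale * (x @ self.A.T @ self.B.T)


class StepRoutedMoTELayer(nn.Module):
    """One frozen base projection plus one LoRA expert per reasoning stage.

    Routing is deterministic: the caller passes the current reasoning-step
    name, which selects the matching expert. No learned router and hence
    no balance loss is needed (hypothetical sketch of the idea in the paper).
    """

    STEPS = ("question_analysis", "answer_guidance", "safe_answer", "safety_check")

    def __init__(self, dim):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        # Freeze the shared base weights; only the adapters are trainable.
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleDict({s: LoRAAdapter(dim) for s in self.STEPS})

    def forward(self, x, step):
        # Frozen base output plus the step-specific low-rank correction.
        return self.base(x) + self.experts[step](x)
```

A usage sketch: during the "Safe Answer" stage, every forward pass would call `layer(hidden_states, step="safe_answer")`, so each adapter only ever sees tokens from its own stage.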