Construct task-specific chain-of-thought prompts

Develop systematic, principled methods for constructing task-specific chain-of-thought (CoT) prompts for generative AI models, including guidelines for selecting exemplars and formatting reasoning instructions that reliably improve logical reasoning performance across different tasks.

Background

The survey highlights chain-of-thought prompting as a promising approach to enhance reasoning in LLMs and notes successful extensions to vision–language QA and code generation. Despite empirical gains, the authors point out the lack of clear guidance for designing CoT prompts tailored to specific tasks.

This gap suggests the need for methods or frameworks that formalize how to compose effective task-specific CoT exemplars and instructions, ensuring reproducible and generalizable improvements in reasoning.
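To make the gap concrete, here is a minimal sketch of what such a framework might formalize: assembling a few-shot CoT prompt from a task instruction, worked exemplars with explicit reasoning steps, and the target query. The names `Exemplar` and `build_cot_prompt` are illustrative assumptions, not an API from the surveyed work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Exemplar:
    """One worked example: a question, its step-by-step rationale, and the answer."""
    question: str
    reasoning: str
    answer: str

def build_cot_prompt(instruction: str, exemplars: List[Exemplar], query: str) -> str:
    """Assemble a few-shot chain-of-thought prompt: task instruction first,
    then each exemplar with its explicit reasoning, then the new query."""
    parts = [instruction.strip(), ""]
    for ex in exemplars:
        parts.append(f"Q: {ex.question}")
        # The exemplar demonstrates the reasoning format the model should imitate.
        parts.append(f"A: Let's think step by step. {ex.reasoning} The answer is {ex.answer}.")
        parts.append("")
    parts.append(f"Q: {query}")
    # End with the reasoning trigger so the model continues in the same style.
    parts.append("A: Let's think step by step.")
    return "\n".join(parts)

# Example usage with a single arithmetic exemplar:
prompt = build_cot_prompt(
    "Answer the arithmetic question.",
    [Exemplar("What is 2 + 2?", "2 plus 2 equals 4.", "4")],
    "What is 3 + 5?",
)
```

A principled method would then specify, per task, how the exemplars are selected (e.g. by diversity or difficulty) and how the reasoning template is worded, rather than leaving both to ad hoc choice.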

References

However, how to construct these CoT prompts for specific tasks remains an open problem.