Shijie Xia1,2,3,
Yiwei Qin3,
Xuefeng Li1,2,3,
Yan Ma3,
Run-Ze Fan3,
Steffi Chern3,
Haoyang Zou1,2,3,
Fan Zhou1,2,3,
Xiangkun Hu2,3,
Jiahe Jin1,2,3,
Yanheng He1,2,3,
Yixin Ye1,2,3,
Yixiu Liu1,2,3,
Pengfei Liu1,2,3+
1Shanghai Jiao Tong University,
2SII,
3Generative AI Research Lab (GAIR)
+Corresponding author
Abstract
The first generation of Large Language Models—what might be called "Act I" of generative AI (2020–2023)—achieved remarkable success through massive parameter and data scaling, yet exhibited fundamental limitations in knowledge latency, shallow reasoning, and constrained cognitive processes. During this era, prompt engineering emerged as our primary interface with AI, enabling dialogue-level communication through natural language. We now witness the emergence of "Act II" (2024–present), in which models are transitioning from knowledge-retrieval systems (in latent space) to thought-construction engines through test-time scaling techniques. This new paradigm, which we term cognition engineering, establishes a mind-level connection with AI through language-based thoughts. In this paper, we clarify the conceptual foundations of cognition engineering and explain why this moment is critical for its development. We systematically break down these advanced approaches through comprehensive tutorials and optimized implementations, democratizing access to cognition engineering so that every practitioner can participate in AI's second act. We also provide a regularly updated collection of papers on test-time scaling in the GitHub Repository.
Three Scaling Phases
The three scaling phases illustrated as a progression of knowledge representation. Pre-training scaling (blue) forms isolated knowledge islands: fundamental concepts connected by only limited innate associations. Post-training scaling (green) densifies these islands with more sophisticated learned connections between related concepts. Test-time scaling (red) enables dynamic reasoning pathways to form between previously disconnected concepts through extended computation, facilitating multi-hop inference across the entire knowledge space: it builds bridges between knowledge islands, connecting distant nodes that remain isolated under pre-training and conventional post-training.
The Practitioner’s Roadmap: How to Apply Test-Time Scaling to Your Applications
Workflow for applying test-time scaling in a specific domain. For more details, please refer to the main paper.
Methods to improve the scaling efficiency of test-time scaling approaches
Works and methodologies for applying RL to elicit long chain-of-thought (CoT) abilities
Long CoT resources across different domains
Works applying test-time scaling across various domains
A hands-on tutorial on applying RL to unlock long CoT abilities
📬 Contact
If you have any questions regarding the paper, feel free to submit a GitHub issue.