
All Accepted Papers

Scaling Textual Gradients via Sampling-Based Momentum

Zixin Ding (University of Chicago), Junyuan Hong (University of Texas at Austin), Zhan Shi (Santa Clara University), Tianhao Wang (Princeton University), Zinan Lin (Microsoft Research), Li Yin (SylphAI), Meng Liu (SylphAI), Atlas Wang (University of Texas at Austin), Yuxin Chen (University of Chicago)

System Optimization & Efficiency

Abstract

LLM-based prompt optimization, which uses LLM-provided "textual gradients" (feedback) to refine prompts, has emerged as an effective method for automatic prompt engineering. However, its scalability and stability when training on more data remain unclear. We systematically investigate the potential and challenges of scaling training data in textual gradient descent. We show that naively scaling the number of training examples is infeasible due to both explicit context-length limits and an implicit context wall, where long-context degradation yields diminishing returns. Inspired by prior wisdom from stochastic gradient descent, we propose Textual Stochastic Gradient Descent with Momentum (TSGD-M), which reweights updates through momentum sampling, using bootstrapped minibatch validation accuracy as importance weights over historical prompts. To stabilize TSGD and enable effective scaling within a limited context window, TSGD-M carries forward prior prompt information by dynamically exploring past top-performing prompts without expanding the input context length. TSGD-M integrates seamlessly into existing prompt optimization frameworks, including TextGrad, DSPy-COPRO, and AdalFlow, and achieves consistent gains across six benchmarks. Our framework is integrated into the AdalFlow library.
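The momentum-sampling idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`bootstrap_accuracy`, `sample_past_prompt`), the softmax weighting, and the temperature parameter are illustrative assumptions. It shows the two ingredients the abstract names: a bootstrapped minibatch estimate of each historical prompt's validation accuracy, and importance-weighted sampling over the prompt history.

```python
import math
import random

def bootstrap_accuracy(per_example_scores, n_boot=20, batch_size=32):
    """Bootstrapped minibatch validation accuracy (illustrative):
    resample minibatches with replacement from per-example 0/1 scores
    and average the minibatch means."""
    means = []
    for _ in range(n_boot):
        minibatch = random.choices(per_example_scores, k=batch_size)
        means.append(sum(minibatch) / batch_size)
    return sum(means) / n_boot

def sample_past_prompt(history, temperature=1.0):
    """Momentum sampling (illustrative): draw one prompt from the
    history, with probability proportional to exp(acc / temperature),
    so past top-performing prompts are revisited more often without
    ever being placed in the LLM's context all at once."""
    weights = [math.exp(entry["acc"] / temperature) for entry in history]
    prompts = [entry["prompt"] for entry in history]
    return random.choices(prompts, weights=weights, k=1)[0]
```

In a full optimizer loop, each new candidate prompt would be scored with `bootstrap_accuracy` and appended to `history`, and `sample_past_prompt` would pick which historical prompt seeds the next textual-gradient update, keeping the input context length fixed regardless of how long the history grows.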

ACM CAIS 2026 Sponsors