
All Accepted Papers

Robust Batch-Level Query Routing for Large Language Models under Cost and Capacity Constraints

Jelena Markovic-Voronov (LinkedIn), Kayhan Behdin (LinkedIn), Yuanda Xu (LinkedIn), Zhengze Zhou (LinkedIn), Zhipeng Wang (LinkedIn), Rahul Mazumder (LinkedIn, MIT)

Topics: System Optimization & Efficiency · Architectural Patterns & Composition

Abstract

We study the problem of routing queries to large language models (LLMs) under cost, GPU, and concurrency constraints. Prior per-query routing methods often fail to control batch-level cost, especially under non-uniform or adversarial batching. To address this, we propose a batch-level, resource-aware routing framework that jointly optimizes model assignments across each batch while respecting cost and capacity limits. We further introduce a robust variant that accounts for uncertainty in predicted LLM performance, along with an offline instance allocation procedure that balances quality and throughput across multiple models. Experiments on two multi-task LLM benchmarks show that robustness improves accuracy by 1–14% over non-robust counterparts (depending on the performance estimator), that batch-level routing outperforms per-query methods by up to 24% under adversarial batching, and that optimized instance allocation yields additional gains of up to 3%, all while strictly respecting cost and GPU resource constraints.
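To make the batch-level idea concrete, here is a minimal, hypothetical sketch of cost-constrained routing: every query starts on the cheapest model, then queries are greedily upgraded by best predicted-quality gain per unit of extra cost until the batch budget is exhausted. The model names, scores, and costs are invented for illustration; the paper's actual framework solves a joint optimization that also handles GPU and concurrency constraints and predictor uncertainty, which this sketch omits.

```python
# Illustrative sketch only: greedy batch-level routing under a cost budget.
# All model names, predicted scores, and per-query costs are hypothetical;
# this is not the paper's method, which jointly optimizes under additional
# GPU and concurrency constraints with a robust performance estimate.

def route_batch(pred_scores, costs, budget):
    """Assign each query in a batch to one model under a total cost budget.

    pred_scores: list (one dict per query) of {model: predicted quality}
    costs:       dict {model: cost per query}
    budget:      maximum total cost for the whole batch
    Returns (assignment, spent): chosen model per query, and total cost.
    """
    cheapest = min(costs, key=costs.get)
    assignment = [cheapest] * len(pred_scores)
    spent = costs[cheapest] * len(pred_scores)

    # Repeatedly upgrade the query with the best quality gain per extra
    # cost, as long as the upgrade still fits within the budget.
    while True:
        best = None
        for i, scores in enumerate(pred_scores):
            current = assignment[i]
            for model, score in scores.items():
                extra = costs[model] - costs[current]
                gain = score - scores[current]
                if extra > 0 and gain > 0 and spent + extra <= budget:
                    ratio = gain / extra
                    if best is None or ratio > best[0]:
                        best = (ratio, i, model, extra)
        if best is None:
            break
        _, i, model, extra = best
        assignment[i] = model
        spent += extra
    return assignment, spent
```

Routing the batch as a whole is what lets the budget bind across queries: an easy query keeps the cheap model so a hard one can afford the expensive model, which a per-query router with a fixed per-query budget cannot express.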

ACM CAIS 2026 Sponsors