Scalable Inference Architectures for Compound AI Systems: A Production Deployment Study
Srikanta Prasad Sondekoppam Vijayashankar (Salesforce India Pvt Ltd), Utkarsh Arora (Salesforce India Pvt Ltd)
Engineering & Operations · Architectural Patterns & Composition
Abstract
Modern enterprise AI applications increasingly rely on compound AI systems—architectures that compose multiple models, retrievers, and tools to accomplish complex tasks. Deploying such systems in production demands inference infrastructure that can efficiently serve concurrent, heterogeneous model invocations while maintaining cost-effectiveness and low latency. This paper presents a production deployment study of a modular, platform-agnostic inference architecture developed at Salesforce to support compound AI use cases including Agentforce (autonomous AI agents) and ApexGuru (AI-powered code analysis). The system integrates serverless execution, dynamic autoscaling, and MLOps pipelines to deliver consistently low-latency inference across multi-component agent workflows. We report production results demonstrating over 50% reduction in tail latency (P95), up to 3.9× throughput improvement, and 30–40% cost savings compared to prior static deployments. We further present a novel analysis of compound-system-specific challenges, including multi-model fan-out overhead, cascading cold-start propagation, and heterogeneous scaling dynamics, that emerge uniquely when serving agentic workloads. Through detailed case studies and operational lessons, we illustrate how the architecture enables compound AI systems to scale model invocations in parallel, handle bursty multi-agent workloads, and support rapid model iteration—capabilities essential for operationalizing agentic AI at enterprise scale.