
All Accepted Demos

Behavioral Fingerprints for LLM Endpoint Stability and Identity

Jonah Leshin (VAIL), Manish Shah (VAIL), Ian Timmis (VAIL), Daniel Kang (University of Illinois at Urbana-Champaign)

Engineering & Operations · Evaluation & Benchmarking

Summary

A black-box monitoring system that detects behavioral changes in LLM endpoints caused by weight updates, quantization, or infrastructure changes via output distribution fingerprinting.

Description

The consistency of AI-native applications depends on the behavioral consistency of the model endpoints that power them. Traditional reliability metrics such as uptime, latency, and throughput do not capture behavioral change: an endpoint can remain "healthy" while its effective model identity shifts due to updates to weights, tokenizers, quantization, inference engines, kernels, caching, routing, or hardware. We introduce Stability Monitor, a black-box stability monitoring system that periodically fingerprints an endpoint by sampling outputs on a fixed prompt set and comparing the resulting output distributions over time. Fingerprints are compared using an energy distance statistic summed across prompts; permutation-test p-values serve as evidence of distribution shift and are aggregated sequentially to detect change events and define stability periods. In controlled validation, Stability Monitor detects changes to model family, version, inference stack, quantization, and behavioral parameters. In real-world monitoring of the same model hosted by multiple providers, we observe substantial provider-to-provider and within-provider differences in stability.
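The comparison step described above, an energy distance statistic summed across prompts with permutation-test p-values, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: it assumes each fingerprint is a list of per-prompt sample sets, and that each sampled output has already been mapped to a numeric embedding vector. The function names (`energy_distance`, `permutation_pvalue`, `compare_fingerprints`) are hypothetical.

```python
import numpy as np


def energy_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Energy distance between two samples of embedding vectors.

    E(X, Y) = 2*E||X - Y|| - E||X - X'|| - E||Y - Y'||,
    estimated with the V-statistic (diagonal included) for brevity.
    """
    def mean_pairwise(a: np.ndarray, b: np.ndarray) -> float:
        # Mean Euclidean distance over all cross pairs (a_i, b_j).
        diffs = a[:, None, :] - b[None, :, :]
        return float(np.linalg.norm(diffs, axis=-1).mean())

    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)


def permutation_pvalue(x: np.ndarray, y: np.ndarray,
                       n_perm: int = 200, seed: int = 0) -> float:
    """p-value for the null that x and y come from the same distribution.

    Pools the two samples, re-splits them at random, and counts how often
    a permuted split yields an energy distance at least as large as observed.
    """
    rng = np.random.default_rng(seed)
    observed = energy_distance(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if energy_distance(pooled[perm[:n]], pooled[perm[n:]]) >= observed:
            count += 1
    # Add-one smoothing keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)


def compare_fingerprints(fp_a, fp_b):
    """Compare two fingerprints: lists of per-prompt embedding arrays.

    Returns the energy distance summed across prompts and the
    per-prompt permutation p-values, which a monitor could then
    aggregate sequentially to flag a change event.
    """
    summed = sum(energy_distance(a, b) for a, b in zip(fp_a, fp_b))
    pvals = [permutation_pvalue(a, b) for a, b in zip(fp_a, fp_b)]
    return summed, pvals
```

A monitor along these lines would sample each prompt repeatedly at each polling interval, embed the outputs, and compare the current fingerprint against a reference fingerprint; small per-prompt p-values accumulating over time would indicate that the endpoint's output distribution has shifted.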

ACM CAIS 2026 Sponsors