Archestra natively exposes Prometheus metrics for comprehensive monitoring of AI agent infrastructure, enabling organizations to track LLM usage, costs, and security events.
Key metrics include:

- `llm_tokens_total` — total tokens consumed across all providers
- `llm_request_duration_seconds` — LLM API call latency
- `http_request_duration_seconds` — gateway HTTP request timing

Metrics are exposed at the `/metrics` endpoint. Configure Prometheus scrape jobs to collect the data.
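A minimal scrape job for this endpoint might look like the sketch below. The job name, target host, and port are placeholder assumptions, not documented Archestra values; adjust them to match your deployment:

```yaml
scrape_configs:
  - job_name: "archestra"                      # hypothetical job name
    metrics_path: /metrics                     # endpoint exposed by Archestra
    scrape_interval: 15s
    static_configs:
      - targets: ["archestra-gateway:8080"]    # placeholder host:port for your gateway
```

Once scraped, the metrics can be queried with standard PromQL, e.g. `sum(rate(llm_tokens_total[5m]))` for token throughput, or histogram queries such as `histogram_quantile(0.95, rate(llm_request_duration_seconds_bucket[5m]))` assuming the duration metrics are Prometheus histograms.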