Complex analytical questions — "Why did revenue decline 12% in Q3? What are the three most likely root causes? What should we test to recover?" — require multiple sequential queries, hypothesis generation, data interpretation, and follow-up investigation. A single NL-to-SQL query cannot answer them, and a human analyst would take hours to days. LLM chatbots also fall short because they lack persistent tool access and multi-step reasoning. The execution risk is real, too: Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027, primarily due to organizational and governance failures.
An agentic analytics system uses an LLM as the reasoning engine to: (1) decompose the user's analytical question into sub-questions, (2) select and invoke tools (NL-to-SQL, chart generation, statistical tests, web search for context), (3) interpret intermediate results, (4) decide whether to continue investigating or surface a conclusion, and (5) produce a synthesized answer with citations to data. ReAct-style loops (Reason + Act) enable the agent to iteratively refine hypotheses. Human-in-the-loop checkpoints at high-stakes decision points prevent full automation of judgment-intensive conclusions. Unlike single-turn NL-to-SQL, agents maintain context across multiple reasoning steps.
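The decompose-act-observe cycle described above can be sketched as a minimal ReAct-style loop. Everything here is illustrative: the scripted policy stands in for a real LLM call, and the tool names and return values are hypothetical, not any specific framework's API.

```python
# Hypothetical ReAct-style (Reason + Act) analytics loop. A real system
# would replace scripted_llm with an LLM API call and TOOLS with
# production tools (NL-to-SQL, chart generation, statistical tests).

TOOLS = {
    "nl_to_sql": lambda q: f"SELECT ... -- rows answering: {q}",
    "stat_test": lambda q: f"t-test on {q}: p=0.03",
}

def scripted_llm(question, history):
    """Stand-in policy: decompose the question, act twice, then conclude."""
    if len(history) == 0:
        return {"thought": "Check revenue by segment",
                "action": "nl_to_sql", "input": "Q3 revenue by segment"}
    if len(history) == 1:
        return {"thought": "Test significance of the segment drop",
                "action": "stat_test", "input": "enterprise segment decline"}
    return {"thought": "Enough evidence to conclude", "action": "finish",
            "input": "Decline driven by enterprise segment (p=0.03)"}

def react_loop(question, llm=scripted_llm, max_steps=5):
    # History of (thought, action, observation) triples is the context
    # the agent maintains across reasoning steps.
    history = []
    for _ in range(max_steps):
        step = llm(question, history)
        if step["action"] == "finish":
            return step["input"], history
        observation = TOOLS[step["action"]](step["input"])
        history.append((step["thought"], step["action"], observation))
    return "max steps reached", history

answer, trace = react_loop("Why did revenue decline 12% in Q3?")
```

The `max_steps` cap and the explicit `finish` action mirror the loop's continue-or-conclude decision; a human-in-the-loop checkpoint would sit between `finish` and surfacing the answer.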
This is the most nascent capability. Hard quantified production data from named companies remains scarce.
Agent orchestration framework (LangGraph / CrewAI / Autogen / Semantic Kernel) + LLM backbone (GPT-4o / Claude / Gemini) + tool library (NL-to-SQL / chart generator / statistical tests) + semantic layer integration + access control and guardrails + human review checkpoint UI.
A governed source of truth for metric definitions that decouples business logic from BI tools, ensuring consistent calculations across dashboards and ML.
Agents require a governed metric layer to avoid hallucinating business definitions during multi-step reasoning.
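One way to picture this dependency: the agent resolves metrics through a governed registry and fails loudly on anything undefined, rather than improvising SQL mid-loop. The registry format and `resolve()` helper below are illustrative, not any product's API.

```python
# Hypothetical governed metric lookup. If a metric or grain is not
# registered, the agent gets an error instead of a hallucinated
# business definition.

METRIC_REGISTRY = {
    "revenue": {
        "sql": "SUM(order_total - refunds)",   # governed definition
        "grain": ["date", "segment"],          # allowed dimensions
        "owner": "finance",
    },
}

def resolve(metric, dimension):
    """Return governed SQL for a metric at a given grain, or raise."""
    spec = METRIC_REGISTRY.get(metric)
    if spec is None:
        raise KeyError(f"metric '{metric}' is not governed")
    if dimension not in spec["grain"]:
        raise ValueError(f"'{metric}' is not defined at grain '{dimension}'")
    return (f"SELECT {dimension}, {spec['sql']} AS {metric} "
            f"FROM facts GROUP BY {dimension}")
```

Calling `resolve("revenue", "segment")` yields SQL built from the finance-owned definition; asking for an unregistered metric raises instead of letting the agent invent one.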
An AI system that converts natural-language business questions into executable SQL, enabling non-technical users to query data warehouses directly.
NL-to-SQL is the core tool agents invoke during analysis loops.
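When NL-to-SQL becomes a tool an autonomous loop can invoke repeatedly, it typically sits behind a guardrail. A minimal sketch, assuming a read-only policy: `generate_sql()` is a stand-in for a real model-backed call, and the keyword blocklist is illustrative, not exhaustive.

```python
# Hypothetical read-only guardrail around an NL-to-SQL tool.
import re

# Illustrative blocklist of mutating statements; a production guard
# would parse the SQL rather than pattern-match it.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|grant)\b", re.I)

def generate_sql(question: str) -> str:
    """Stand-in for a model-backed NL-to-SQL call."""
    return "SELECT segment, SUM(revenue) FROM sales GROUP BY segment"

def safe_nl_to_sql(question: str) -> str:
    """Only let read-only SELECT statements reach the warehouse."""
    sql = generate_sql(question)
    if not sql.lstrip().lower().startswith("select") or FORBIDDEN.search(sql):
        raise PermissionError(f"blocked non-read-only SQL: {sql!r}")
    return sql
```

The same wrapper point is where row-level access control and query cost limits from the guardrail layer would attach.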
End-to-end ML lifecycle automation from experiment tracking through deployment, monitoring, and rollback, anchored by a versioned model registry.
MLOps infrastructure is needed if agents invoke predictive models as part of their analysis workflow.
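A sketch of what that invocation might look like, assuming a versioned registry: the `REGISTRY` dict and `predict_with()` helper are stand-ins for a real model registry (e.g. an MLflow-style name/stage lookup), so that every score the agent cites is pinned to an exact model version.

```python
# Hypothetical agent tool that scores through a versioned model
# registry. The registry contents and lambda model are illustrative.

REGISTRY = {
    ("churn", "production"): {"version": 7, "model": lambda features: 0.42},
}

def predict_with(name, stage, features):
    """Score features with the registered model for (name, stage)."""
    entry = REGISTRY[(name, stage)]
    score = entry["model"](features)
    # Return the version so the agent's final answer can cite it and
    # monitoring/rollback can trace which model produced the number.
    return {"score": score, "model": name, "version": entry["version"]}
```

Routing predictions through the registry (rather than loading model files ad hoc) is what makes the agent's conclusions reproducible after a deployment or rollback.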
Nothing downstream yet.