

Dify is a self-hostable LLMOps platform that combines a visual agent workflow builder, RAG pipeline, multi-provider LLM gateway, and prompt management into one stack. It targets teams building internal AI agents, chatbots, and retrieval-augmented applications without assembling a separate tool for each layer.
The visual workflow editor supports branching logic, loops, variable passing, and tool invocations, producing agents that can be deployed as chat interfaces or embedded via API. The RAG pipeline handles document ingestion, chunking, embedding, and vector search across multiple vector store backends. The gateway layer proxies OpenAI, Anthropic, Google, Azure, AWS Bedrock, Mistral, Ollama, Xinference, LocalAI, and OpenAI-compatible providers.
MCP client and server support lets Dify agents call external MCP tools and expose Dify-built agents as MCP endpoints. A conversation API allows external applications to embed Dify-built agents.
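The conversation API is a plain HTTP interface authenticated with a per-app API key. A minimal sketch in Python using only the standard library, assuming a Dify instance reachable at a placeholder base URL and a placeholder app key (field names follow Dify's chat-messages endpoint; verify against your instance's API docs):

```python
import json
import urllib.request

DIFY_BASE_URL = "http://localhost/v1"  # placeholder; point at your instance
DIFY_APP_KEY = "app-xxxxxxxx"          # placeholder app API key

def build_chat_payload(query: str, user: str, conversation_id: str = "") -> dict:
    """Assemble the request body for the chat-messages endpoint."""
    return {
        "inputs": {},                        # app-defined input variables, if any
        "query": query,                      # the end-user message
        "response_mode": "blocking",         # or "streaming" for SSE chunks
        "conversation_id": conversation_id,  # empty string starts a new conversation
        "user": user,                        # stable identifier for the end user
    }

def send_chat_message(query: str, user: str) -> dict:
    """POST a message to a Dify chat app and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{DIFY_BASE_URL}/chat-messages",
        data=json.dumps(build_chat_payload(query, user)).encode(),
        headers={
            "Authorization": f"Bearer {DIFY_APP_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Passing the `conversation_id` returned by a previous response continues that conversation; leaving it empty starts a fresh one, which is how an external application threads multi-turn chats through an embedded Dify agent.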
Best suited to product teams building AI-powered features that combine RAG, tool use, and multi-step workflows without wiring the components together manually. Dify's scope is deliberately broad; teams that want only a focused LLM gateway typically use a dedicated gateway instead.
Dify uses the Dify Open Source License, a modified Apache 2.0 with a non-compete clause that prohibits using the software to build a competing multi-tenant SaaS product. Internal enterprise use, single-tenant customer deployments, and most modifications are permitted. The license is source-available but not OSI-approved.
Docker Compose reference stack with Postgres, Redis, and a vector database (Weaviate, Qdrant, Milvus, or pgvector). Production deployments typically add separate workers for RAG indexing. Helm charts are available for Kubernetes.
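A typical self-hosted bring-up follows the pattern below; treat it as a sketch of the Compose workflow, not exact commands (the repository layout and `.env` variable names should be checked against the project's deployment docs):

```shell
# Fetch the source, which ships the reference Compose stack
git clone https://github.com/langgenius/dify.git
cd dify/docker

# Copy the example environment file, then edit it to pick the
# vector store backend (e.g. Weaviate, Qdrant, Milvus, or pgvector)
# and set secrets before first boot
cp .env.example .env

# Start Postgres, Redis, the vector database, and the Dify services
docker compose up -d
```

Production deployments usually scale the RAG indexing workers separately from the web-facing services, which is where the Helm charts for Kubernetes come in.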
LLMOps platform with visual agent builder, RAG, and multi-provider gateway