

Open WebUI is a self-hostable web interface for large language models, originally built as a frontend for Ollama and now a full multi-user chat platform. It runs as a Docker container and supports direct connections to OpenAI, Anthropic, Ollama, and any OpenAI-compatible endpoint, with authentication, workspaces, RAG, and MCP tool integration built in.
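Since the platform speaks the OpenAI wire format in both directions, a client request to a self-hosted instance looks like any OpenAI-compatible call. A minimal sketch of building such a request follows; the base URL, API key, and model name are placeholders, and the `/api/chat/completions` path is the OpenAI-style convention rather than a guarantee for every version.

```python
import json

# Sketch: assemble a chat-completion request for a self-hosted
# Open WebUI instance. All concrete values below are placeholders.
def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for an OpenAI-compatible chat call."""
    url = f"{base_url.rstrip('/')}/api/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",   # per-user API key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:3000", "sk-example", "llama3.2", "Hello!"
)
```

The tuple can then be passed to any HTTP client; nothing here is specific to one library.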
Users sign in through a web UI, select a model from the providers the admin has configured, and chat. Conversations are saved per-user with tagging and search. Documents uploaded to a workspace are chunked, embedded, and made available to the LLM through RAG. Models can be grouped into shared catalogs with per-role access controls.
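The chunk step of that ingest flow can be sketched as fixed-size windows with overlap, so sentences straddling a boundary appear in both neighboring chunks. The sizes here are arbitrary illustrations; Open WebUI's actual chunker and its defaults may differ.

```python
# Illustrative chunker for a RAG ingest pipeline: fixed-size chunks
# with overlap so context is not lost at chunk boundaries.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk would then be embedded and written to the vector store, keyed to the workspace document it came from, so retrieval can cite its source.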
Authentication options include email/password sign-in, OAuth (GitHub, Google, Microsoft), and generic OIDC for providers like Keycloak, Zitadel, or Authentik. Workspaces let teams share prompts, models, and tools across groups of users. The MCP client lets a conversation invoke tools from registered MCP servers inline.
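Inline tool invocation follows the usual pattern: when the model emits a tool call, the client looks the tool up among those registered and feeds the result back into the conversation as a tool message. The registry shape and names below are illustrative, not Open WebUI's internal API.

```python
# Sketch of inline tool dispatch: a registry of named tool handlers
# and a function that runs one call and wraps the result as a
# conversation message. All names here are hypothetical.
from typing import Callable

TOOL_REGISTRY: dict[str, Callable[[dict], str]] = {}

def register_tool(name: str):
    def wrap(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return wrap

@register_tool("get_time")
def get_time(args: dict) -> str:
    return "2024-01-01T00:00:00Z"  # stubbed result for illustration

def dispatch(tool_call: dict) -> dict:
    """Run a tool call and return it as a 'tool' role message."""
    fn = TOOL_REGISTRY[tool_call["name"]]
    return {
        "role": "tool",
        "name": tool_call["name"],
        "content": fn(tool_call.get("arguments", {})),
    }
```

In a real MCP setup the registry would be populated by querying each registered server for its tool list rather than by decorators.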
Two extension mechanisms ship with the product: Functions, Python plugins (pipes, filters, and actions) that run inside the server process, and Pipelines, a separate server for heavier or long-running workflows. A community marketplace provides user-built functions and tools.
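An Open WebUI Function of the pipe variety is a small Python class with a `pipe` method that receives the chat request body and returns the response. The sketch below follows that documented class-with-a-method shape, but exact hooks and fields vary by version; treat it as illustrative.

```python
# Hedged sketch of a pipe-style Open WebUI Function. The body dict
# follows the OpenAI chat format; the pipe's return value becomes
# the assistant reply. Details may differ between versions.
class Pipe:
    def __init__(self):
        self.name = "echo_pipe"  # appears as a selectable model in the UI

    def pipe(self, body: dict) -> str:
        # Pull the last user message out of the OpenAI-style payload
        messages = body.get("messages", [])
        last = messages[-1]["content"] if messages else ""
        return f"echo: {last}"
```

Because a pipe is surfaced as a model, users select it from the same dropdown as any provider-backed model.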
Deployment is a single Docker container backed by SQLite for small installs or Postgres for multi-user production, and runs on Docker, Docker Swarm, or Kubernetes. Redis is optional for rate limiting and background jobs. RAG uses ChromaDB by default, with Qdrant, Milvus, or pgvector available as swap-in replacements.
Licensed under BSD-3-Clause. Open WebUI Enterprise is a paid support and consulting offering from the project; feature development happens in the open-source repository.