
Open WebUI is a self-hosted, extensible web UI for large language models. Originally launched as "Ollama WebUI" in 2023, it was later renamed and broadened to work with any OpenAI-compatible backend, local or cloud-hosted, and has become one of the most popular open-source ChatGPT alternatives for teams running local LLMs. The project is community-maintained and ships with a plugin system for prompts, tools, and Python function pipelines.
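The Python function pipelines mentioned above are plain classes dropped into the UI. A minimal sketch of a "filter" function follows; the `Filter` class with `inlet`/`outlet` hooks matches the general shape Open WebUI's Functions use, but treat the exact method names and the `body` schema as assumptions to verify against the current docs.

```python
# Sketch of an Open WebUI "filter" function (assumed interface: a Filter
# class whose inlet() hook edits the request body before it reaches the
# model, and whose outlet() hook edits the response afterwards).
class Filter:
    def inlet(self, body: dict) -> dict:
        """Prepend a system message before the request is sent to the model."""
        messages = body.setdefault("messages", [])
        if not any(m.get("role") == "system" for m in messages):
            messages.insert(0, {"role": "system", "content": "Answer concisely."})
        return body

    def outlet(self, body: dict) -> dict:
        """Tag each assistant message after the model responds."""
        for message in body.get("messages", []):
            if message.get("role") == "assistant":
                message["content"] = message["content"].rstrip() + "\n\n[filtered]"
        return body
```

Pasted into the Functions editor and enabled for a workspace, a filter like this runs transparently on every chat in that workspace.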
Open WebUI (github.com/open-webui/open-webui, 50K+ stars; originally MIT licensed, later moved to its own BSD-3-based license with branding terms) runs as a Docker container and provides a full chat UI with multi-user accounts, conversation history, RAG over uploaded documents, image generation, web search, voice input, function/tool calling, and MCP client integration. It targets local-first AI stacks (Ollama, LM Studio, LocalAI) but also works with any OpenAI-compatible API, and with providers such as Anthropic, Azure, Groq, and Mistral via community connectors.
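A typical single-host deployment pairs the container with a local Ollama instance, as in the compose file below. The image tag and the `OLLAMA_BASE_URL` / `OPENAI_API_BASE_URL` variables follow Open WebUI's documented configuration, but verify names and ports against the current release before relying on them.

```yaml
# docker-compose.yml — Open WebUI alongside a local Ollama instance.
# Env var names follow Open WebUI's documented settings; verify against
# the current docs before deploying.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      # Optional: also attach any OpenAI-compatible cloud endpoint.
      - OPENAI_API_BASE_URL=https://api.openai.com/v1
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```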
The project is maintained by Timothy J. Baek (GitHub: tjbck) and a growing contributor community. It remains community-governed, with funding coming from GitHub Sponsors and Open Collective.
Open WebUI competes with LibreChat in the self-hosted multi-provider chat space, and with commercial tools like ChatGPT Team and Claude Projects. Its differentiators are deeper local-first integration (native Ollama support out of the box) and richer built-in RAG.
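What "OpenAI-compatible" means in practice: a backend only has to accept the `/v1/chat/completions` request shape, so the same client code can target Ollama, a cloud provider, or Open WebUI itself by swapping the base URL. A stdlib-only sketch (the local Ollama URL and model name are an assumed setup; the snippet builds the request but never sends it):

```python
import json
from urllib import request

def chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build (but do not send) a /v1/chat/completions request — the wire
    format every OpenAI-compatible backend, Ollama included, accepts."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Pointing the helper at Ollama's default local port (assumed setup) or
# at a cloud provider only changes base_url and model.
req = chat_request("http://localhost:11434", "llama3.2", "Hello")
```

This uniformity is what lets Open WebUI treat local and hosted models through one code path, and it is why "any OpenAI-compatible API" covers so many providers.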