Memobase supports any OpenAI-compatible LLM provider as its backend. This tutorial demonstrates how to use Ollama to run a local LLM for both the Memobase server and your chat application.
First, install Ollama and confirm the CLI is available:

```bash
ollama -v
```

Then pull the model you want to serve locally, for example `qwen2.5:7b`:

```bash
ollama pull qwen2.5:7b
```
Next, point the Memobase server at Ollama by editing its `config.yaml` file, which controls which LLM backend the server uses.
Because the Memobase server runs inside Docker, set the LLM base URL to `host.docker.internal` rather than `localhost` to allow it to access the Ollama server running on your local machine at port 11434.
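A minimal `config.yaml` might look like the sketch below. The field names (`llm_api_key`, `llm_base_url`, `best_llm_model`) are assumptions for illustration; check your Memobase server version for the authoritative schema.

```yaml
# Sketch only — field names are assumptions; consult the Memobase docs.
llm_api_key: ollama                                  # Ollama ignores the key, but a value must be set
llm_base_url: http://host.docker.internal:11434/v1   # Ollama's OpenAI-compatible endpoint, from inside Docker
best_llm_model: qwen2.5:7b                           # the model pulled earlier
```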
Finally, include a `user_id` in your chat requests to trigger the memory functionality.
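To make that last step concrete, here is a minimal sketch of the request shape. The helper function is hypothetical and the field placement is an assumption — the tutorial only specifies that a `user_id` must accompany the request; Memobase's own SDK may wrap this differently.

```python
import json

def build_chat_request(model: str, messages: list, user_id: str) -> dict:
    """Build an OpenAI-compatible chat payload carrying the extra
    user_id field that activates memory handling.
    (Hypothetical helper, for illustration only.)"""
    return {"model": model, "messages": messages, "user_id": user_id}

payload = build_chat_request(
    model="qwen2.5:7b",
    messages=[{"role": "user", "content": "Hi, remember that I like tea."}],
    user_id="user-123",
)
print(json.dumps(payload, indent=2))
```

You would send this body to the Memobase server's chat endpoint (or pass `user_id` through whichever client wrapper you use); a request without a `user_id` is treated as an ordinary, memory-less completion.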