Ready to give your AI a memory boost? Here’s how to get started.

Memobase Client

Step 1: Get Prepped

pip install memobase
You’ll also need your project URL and API key; you get these when you set up your backend (self-hosted or cloud, covered below). Keep them handy.
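If you'd rather not hard-code those credentials, here's a minimal sketch that reads them from environment variables; the names MEMOBASE_PROJECT_URL and MEMOBASE_API_KEY are hypothetical, so use whatever your deployment defines:

import os

# Hypothetical env var names -- substitute your own.
YOUR_PROJECT_URL = os.environ["MEMOBASE_PROJECT_URL"]
YOUR_API_KEY = os.environ["MEMOBASE_API_KEY"]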

Step 2: Connect to the Backend

from memobase import MemoBaseClient

client = MemoBaseClient(
    project_url=YOUR_PROJECT_URL,
    api_key=YOUR_API_KEY,
)
assert client.ping()  # verify the backend is reachable and the key is valid

Step 3: User Management

Create, read, update, and delete users. The snippet below covers create and read; update and delete are sketched right after.
uid = client.add_user({"name": "Gustavo"})
u = client.get_user(uid)
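The client also exposes the other two CRUD operations. The method names update_user and delete_user below follow the SDK's naming, but treat this as a sketch and verify them against your installed version:

client.update_user(uid, {"name": "Gus"})  # replace the user's data
client.delete_user(uid)  # removes the user -- skip this if you're following along, later steps reuse uid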

Step 4: Insert Messages

Now that you have a user, let’s give them some memories.
from memobase import ChatBlob
# Wrap a chat exchange in a ChatBlob, Memobase's container for message data.
b = ChatBlob(messages=[
    {"role": "user", "content": "Hi, I'm here again"},
    {"role": "assistant", "content": "Hi, Gus! How can I help you?"}
])
u = client.get_user(uid)
bid = u.insert(b)  # returns the id of the stored blob
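If you need the raw data back later, the user handle can fetch a blob by id; u.get(bid) here is assumed from the SDK's pattern, so double-check it in your version:

b = u.get(bid)     # fetch the stored ChatBlob by id (assumed getter)
print(b.messages)  # the original message list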

Step 5: Memory Operations

This is where the magic happens. Memobase extracts and stores memories for each user.
  1. Flush: user data is held in a buffer; when the buffer grows too large or sits idle, it gets processed automatically. You can also trigger this manually (a sketch of closing the loop after a chat turn follows step 3).
u.flush() # async
u.flush(sync=True) # sync
  2. Get the user's personalized context:
context = u.context()  # a string summarizing the user's memories
  3. Pack the context into your system prompt:
from openai import OpenAI

openai_client = OpenAI(api_key=YOUR_OPENAI_API_KEY)  # your OpenAI key, not the Memobase key

SYSTEM_PROMPT = f"""You're a helpful assistant.
Your job is to ...

Below is the user's memory:
{context}"""


response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Who'am I?s"}
    ]
)

print(response.choices[0].message.content)
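To close the loop, write the new exchange back into Memobase and, after a flush, inspect what was extracted. This sketch reuses the calls from steps 4 and 5; u.profile() is assumed to return the user's extracted memory entries, so verify it against your SDK version:

# Feed the latest exchange back so the next flush updates the user's memory.
new_blob = ChatBlob(messages=[
    {"role": "user", "content": "Who am I?"},
    {"role": "assistant", "content": response.choices[0].message.content},
])
u.insert(new_blob)
u.flush(sync=True)  # process the buffer now rather than waiting

# Assumed accessor for the extracted profile -- check your SDK version.
print(u.profile())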

Memobase Backend

We offer an open-source solution with a Docker backend, so you can launch your own instance. A single docker-compose command brings the backend up.
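A minimal launch sketch, assuming the repository lives at github.com/memodb-io/memobase; the compose file's location and any required .env setup may differ by version, so follow the repo's README:

git clone https://github.com/memodb-io/memobase
cd memobase/src/server   # assumed location of the compose file -- check the repo
docker-compose up -d     # start the backend in the background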

Memobase Cloud

We also offer a hosted cloud service with a free tier and a polished dashboard.

[Screenshot: the Memobase Cloud dashboard]