*Diagram: OpenAI API with Memory Integration*
You will need your `project_url` and `api_key` after setting up your backend.
Add a `user_id` to your standard API call, and the client will automatically handle the memory context. If no `user_id` is passed, the client functions exactly like the original OpenAI client.
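The optional `user_id` dispatch can be pictured with a short pure-Python sketch. The stub client and function names below are illustrative stand-ins, not Memobase's actual implementation:

```python
# Sketch of the user_id dispatch: without a user_id the call passes straight
# through; with one, memory context is injected first. Illustrative only.

def passthrough_client(messages):
    """Stub for the wrapped OpenAI client; reports how many messages it saw."""
    return f"saw {len(messages)} messages"

def create(messages, user_id=None, memory=None):
    if user_id is None:
        # No user_id: behaves exactly like the original client.
        return passthrough_client(messages)
    # With a user_id: prepend that user's remembered context.
    context = (memory or {}).get(user_id, [])
    return passthrough_client(context + messages)

plain = create([{"role": "user", "content": "hello"}])
with_mem = create(
    [{"role": "user", "content": "hello"}],
    user_id="u1",
    memory={"u1": [{"role": "system", "content": "The user's name is Gus."}]},
)
```

Without a `user_id`, the stub sees one message; with one, the injected context makes it two.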
Memory updates are buffered rather than applied instantly. To process a user's buffer immediately, call the `flush` method.
The `openai_memory` function wraps the OpenAI client with two key actions:

1. **Before the request**: retrieve the user's memory context and inject it into the prompt.
2. **After the response**: store the latest exchange in the user's memory buffer.
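The two actions above can be sketched in pure Python. A toy LLM function and an in-memory dict stand in for the real client and the Memobase backend; all names here are illustrative:

```python
# Sketch of the wrap-around behavior: inject memory context before the
# request, record the exchange in a buffer afterwards. Illustrative only.

memory_buffer = {}  # user_id -> list of stored messages

def toy_llm(messages):
    """Stand-in for the wrapped OpenAI call."""
    return {"role": "assistant", "content": f"ok ({len(messages)} msgs seen)"}

def chat_with_memory(messages, user_id):
    # 1. Before the request: inject the user's memory context.
    context = memory_buffer.get(user_id, [])
    reply = toy_llm(context + messages)
    # 2. After the response: store the latest exchange in the buffer.
    memory_buffer.setdefault(user_id, []).extend([messages[-1], reply])
    return reply

first = chat_with_memory([{"role": "user", "content": "Hi, I'm Gus"}], "u1")
second = chat_with_memory([{"role": "user", "content": "What's my name?"}], "u1")
```

On the second call the toy LLM sees three messages: the two remembered from the first exchange plus the new question.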
Even if you send a longer message history (for example, one ending with `Your name is Gus.`), Memobase will only store the last exchange, i.e., the final user message and the assistant's reply.
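The last-exchange rule can be sketched as follows; the helper below is illustrative, not Memobase's API:

```python
# Sketch: even given a multi-turn history, only the final user message and
# the assistant reply are kept. Illustrative helper, not Memobase's API.

def last_exchange(messages, assistant_reply):
    """Return only what last-exchange storage would keep."""
    last_user = [m for m in messages if m["role"] == "user"][-1]
    return [last_user, {"role": "assistant", "content": assistant_reply}]

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
    {"role": "user", "content": "Your name is Gus."},
]
stored = last_exchange(history, "Got it, I'm Gus.")
```

Everything before the final user turn is dropped from storage, however long the history is.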
You can pass additional arguments to `openai_memory` to customize its behavior:
- `max_context_size`: Controls the maximum token size of the injected memory context. Defaults to `1000`.
- `additional_memory_prompt`: Provides a meta-prompt to guide the LLM on how to use the memory.
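The combined effect of these two options can be sketched as a prompt-assembly step. The token counting below is naive whitespace splitting, not real token accounting, and the function is illustrative:

```python
# Sketch of how the two options might shape the injected prompt:
# max_context_size caps the memory context, additional_memory_prompt is
# prepended as guidance. Word-count "tokens" are a stand-in; illustrative only.

def build_memory_prompt(memory_lines, max_context_size=1000,
                        additional_memory_prompt=None):
    parts = []
    if additional_memory_prompt:
        parts.append(additional_memory_prompt)
    budget = max_context_size
    for line in memory_lines:
        cost = len(line.split())  # stand-in for a real tokenizer
        if cost > budget:
            break  # stop once the context budget is exhausted
        parts.append(line)
        budget -= cost
    return "\n".join(parts)

prompt = build_memory_prompt(
    ["user likes coffee",
     "user's name is Gus",
     "user lives in Berlin and owns a dog"],
    max_context_size=7,
    additional_memory_prompt="Use the memory to personalize answers.",
)
```

With a budget of 7, the first two memory lines (3 + 4 words) fit and the third is cut off.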
- `client.get_memory_prompt("user_id")`: Returns the current memory prompt that will be injected for a given user.
- `client.flush("user_id")`: Immediately processes the memory buffer for a user. Call this if you need to see memory updates reflected instantly.
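The buffer semantics behind these two methods can be sketched in pure Python. The method names mirror the docs, but the class and its internals are illustrative, not Memobase's implementation:

```python
# Sketch of the buffer semantics: exchanges accumulate in a per-user buffer
# and only become part of the injected memory prompt once flushed.
# Illustrative only, not Memobase's implementation.

class MemoryClientSketch:
    def __init__(self):
        self.buffers = {}   # user_id -> unprocessed exchanges
        self.profiles = {}  # user_id -> processed memory lines

    def record(self, user_id, line):
        self.buffers.setdefault(user_id, []).append(line)

    def flush(self, user_id):
        # Immediately process the user's buffer into their profile.
        pending = self.buffers.pop(user_id, [])
        self.profiles.setdefault(user_id, []).extend(pending)

    def get_memory_prompt(self, user_id):
        # Only processed (flushed) memory is injected.
        lines = self.profiles.get(user_id, [])
        return "-- MEMORY --\n" + "\n".join(lines) if lines else ""

mc = MemoryClientSketch()
mc.record("u1", "user's name is Gus")
before = mc.get_memory_prompt("u1")  # buffered, not yet processed
mc.flush("u1")
after = mc.get_memory_prompt("u1")   # now visible in the prompt
```

Before the flush the prompt is empty; after it, the buffered line appears, which is why calling `flush` makes memory updates visible instantly.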