Get User Personalized Context
Returns a string of the user's personalized context that you can insert directly into your prompt.
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Path Parameters
The ID of the user
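As a rough illustration, a request to this endpoint can be sketched with Python's standard library. The base URL and path below are assumptions for illustration only, not the documented endpoint; check your Memobase deployment for the actual values.

```python
import urllib.parse
import urllib.request

# Assumptions: BASE_URL and the path are illustrative placeholders.
BASE_URL = "https://api.memobase.dev"
user_id = "user_123"   # the user ID path parameter
token = "sk-..."       # your Memobase auth token

# Query parameters are URL-encoded; max_token_size caps the context size.
params = urllib.parse.urlencode({"max_token_size": 1000})
url = f"{BASE_URL}/api/v1/users/{user_id}/context?{params}"

req = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {token}"},  # Bearer auth header
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

The request object is built but not sent, so you can inspect the URL and headers before making a live call.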
Query Parameters
Maximum token size of the returned context.
Rank the preferred topics first to try to keep them during filtering; the default order is by updated time.
Only return profiles with these topics; the default is all topics.
Maximum number of subtopics per topic in the returned context.
Set per-topic subtopic limits as JSON, for example {"topic1": 3, "topic2": 5}. The limits in this parameter override max_subtopic_size.
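A minimal sketch of serializing the per-topic limits for the query string. The parameter name topic_limits is an assumption for illustration; the source only describes the value format.

```python
import json
import urllib.parse

# Per-topic subtopic limits are passed as a JSON string; these override
# the global max_subtopic_size for the listed topics.
limits = {"topic1": 3, "topic2": 5}
params = urllib.parse.urlencode({
    "max_subtopic_size": 4,              # global cap
    "topic_limits": json.dumps(limits),  # assumed parameter name
})
```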
Ratio of profile content to event content in the returned context.
Whether to require an event summary in the returned context.
Pass recent chats to enable context search. Memobase will use these chats to search for relevant events. This is a list of chats in OpenAI message format, for example: [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}].
NOTICE
- It will increase your latency by 0.1–1 seconds, because Memobase uses embeddings to search for relevant profiles and events.
- It will cost Memobase tokens, roughly 100–200 tokens per chat depending on the profile size.
- The profiles in the context will not be searched by chats_str.
- If you also want to search profiles, see the full_profile_and_only_search_event query parameter.
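A sketch of serializing the OpenAI-format messages for the chats_str parameter, under the assumption that the list is passed as a JSON string in the query:

```python
import json
import urllib.parse

# Recent chats in OpenAI message format, serialized to JSON and passed
# as the chats_str query parameter to enable context search.
chats = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
]
params = urllib.parse.urlencode({"chats_str": json.dumps(chats)})
```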
Event similarity threshold for the returned context.
Only include events within this many past days; the default is 180.
Customize the context prompt template:
- use {profile_section} to refer to the profile section
- use {event_section} to refer to the event section
For example:
# Memory
Unless the user has relevant queries, do not actively mention those memories in the conversation.
## User Background:
{profile_section}
## Latest Events:
{event_section}
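Memobase fills the placeholders server-side; str.format illustrates the substitution. The sample profile and event lines are illustrative, not the real output format.

```python
# The template above, with {profile_section} and {event_section}
# placeholders substituted the way Memobase would fill them.
template = (
    "# Memory\n"
    "Unless the user has relevant queries, do not actively mention "
    "those memories in the conversation.\n"
    "## User Background:\n{profile_section}\n"
    "## Latest Events:\n{event_section}"
)
context = template.format(
    profile_section="- name: Alice",                      # illustrative
    event_section="- 2024-01-01: Alice started a new job",  # illustrative
)
```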
If you pass chats_str and set this to False, Memobase will search for relevant profiles and events at the same time.
NOTICE
- It will increase your latency by 2–5 seconds (depending on the profile size), because Memobase uses an LLM and embeddings to search for relevant profiles and events.
- It will cost Memobase tokens, roughly 100–1000 tokens per chat depending on the profile size.
If set to True, Memobase will fill the remaining token window with the rest of the events.
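A sketch of combining chats_str with full_profile_and_only_search_event set to false, which (per the notice above) makes Memobase search profiles and events together. Boolean serialization as a lowercase string is an assumption:

```python
import json
import urllib.parse

# With chats_str present and full_profile_and_only_search_event=false,
# Memobase searches relevant profiles and events at the same time
# (higher latency and token cost, per the notice above).
chats = [{"role": "user", "content": "What did I do last week?"}]
params = urllib.parse.urlencode({
    "chats_str": json.dumps(chats),
    "full_profile_and_only_search_event": "false",  # assumed serialization
})
```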
Response
Successful Response
The response is of type object.