The `context()` API retrieves this information as a structured string, ready to be injected directly into your LLM prompts.
Call the `context()` method on a user object to get this string.
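A minimal sketch of that call. The client setup (`Client`, `get_user`, the connection arguments) is hypothetical scaffolding for illustration; only the `context()` method itself comes from this page.

```python
from memory_sdk import Client  # hypothetical import; use your SDK's real entry point

# Hypothetical client/user setup -- adjust to your deployment.
client = Client(project_url="http://localhost:8019", api_key="secret")
user = client.get_user("user-123")

# context() returns one structured string, ready for prompt injection.
context_str = user.context()
system_prompt = f"Here is what you know about this user:\n{context_str}"
```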
You can bound the size of the returned context with the `max_tokens` parameter.
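For example, continuing the sketch above (the exact token-counting behavior is SDK-defined):

```python
# Limit the profile/event content to roughly 500 tokens.
compact_context = user.context(max_tokens=500)
```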
The `max_tokens` limit applies to the profile and event content, not the final formatted string. If you use a large custom prompt template, the final output may still exceed the limit.
The `context()` API offers several parameters for fine-grained control:
- `prefer_topics`, `only_topics`: Prioritize or exclusively include certain profile topics.
- `max_subtopic_size`: Limit the number of sub-topics returned per topic.
- `profile_event_ratio`: Adjust the balance between profile and event information.
- `time_range_in_days`: Filter events to a specific time window.
- `customize_context_prompt`: Provide a custom template for the final output string.

For the full list of parameters, see the API reference for `get_context`.
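A sketch combining several of these parameters, again using the hypothetical `user` object from the earlier snippet. The topic names and the template placeholders (`{profile_section}`, `{event_section}`) are illustrative assumptions, not documented values.

```python
context_str = user.context(
    max_tokens=750,
    prefer_topics=["interests", "work"],  # illustrative topic names
    max_subtopic_size=5,                  # at most 5 sub-topics per topic
    profile_event_ratio=0.6,              # lean toward profile facts over events
    time_range_in_days=30,                # only events from the last 30 days
    customize_context_prompt=(
        # Assumed placeholder names for the template slots.
        "User profile:\n{profile_section}\n\nRecent events:\n{event_section}"
    ),
)
```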