Manage conversation history and context across multiple turns. Build stateful AI applications with persistent memory.
The API is stateless by default. You maintain conversation history by sending all previous messages with each request.
```python
from mythicdot import MythicDot

client = MythicDot()

# Maintain conversation history
conversation_history = []

def chat(user_message):
    # Add user message to history
    conversation_history.append({
        "role": "user",
        "content": user_message
    })

    # Send request with full history
    response = client.messages.create(
        model="mythic-4",
        max_tokens=1024,
        system="You are a helpful assistant.",
        messages=conversation_history
    )

    # Extract assistant response
    assistant_message = response.content[0].text

    # Add to history
    conversation_history.append({
        "role": "assistant",
        "content": assistant_message
    })

    return assistant_message

# Example conversation
print(chat("What is the capital of France?"))
print(chat("What's the population?"))      # "the population" resolves to Paris via history
print(chat("Tell me about its history"))   # Context maintained
```
**Sliding Window:** Keep only the last N messages. Simple and effective for most chat applications. Risk: may lose important early context.
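A minimal sketch of the sliding-window approach. The function name, window size, and the pair-alignment rule are illustrative, not part of the SDK; the idea is simply to slice the history before each request:

```python
def sliding_window(history, max_messages=10):
    """Keep the most recent messages, dropping any leading assistant
    message so the window always starts on a user turn."""
    if len(history) <= max_messages:
        return history
    trimmed = history[-max_messages:]
    while trimmed and trimmed[0]["role"] == "assistant":
        trimmed = trimmed[1:]
    return trimmed
```

You would then pass `messages=sliding_window(conversation_history)` to the API call instead of the full list.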
**Summarization:** Periodically summarize older messages. Preserves key information while reducing token count. Adds latency.
**Middle Truncation:** Remove middle messages; keep the first (task framing) and the most recent (relevant). Good balance of context and recency.
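Middle truncation is a two-slice operation. A minimal sketch, with illustrative defaults for how many opening and recent messages to keep:

```python
def truncate_middle(history, keep_first=2, keep_last=6):
    """Keep the opening messages (task framing) and the most recent
    messages, dropping everything in between."""
    if len(history) <= keep_first + keep_last:
        return history
    return history[:keep_first] + history[-keep_last:]
```

Note that after truncation the spliced list may place two same-role messages next to each other; depending on the API's role-alternation rules you may need to adjust the boundary as in the sliding-window example.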
**RAG + Vector DB:** Store messages in a vector database and retrieve relevant context dynamically at query time. Best for long-term memory.
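To show the shape of the retrieval step without depending on a real vector database or embedding model, here is a toy in-memory stand-in: the bag-of-words "embedding" and the `MessageStore` class are both illustrative placeholders for a production embedding model and vector store:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts. A real system would
    call a dedicated embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MessageStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.items = []  # (embedding, message) pairs

    def add(self, message):
        self.items.append((embed(message["content"]), message))

    def search(self, query, k=3):
        q = embed(query)
        scored = sorted(self.items, key=lambda it: cosine(q, it[0]),
                        reverse=True)
        return [msg for _, msg in scored[:k]]
```

Before each request you would run `store.search(user_message)` and prepend the retrieved messages to the prompt, rather than sending the full history.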
| Approach | Complexity | Context Quality | Token Efficiency | Best For |
|---|---|---|---|---|
| Full History | Low | Perfect | Low | Short conversations |
| Sliding Window | Low | Good | High | General chat |
| Summarization | Medium | Good | High | Long sessions |
| RAG + Vector DB | High | Excellent | Very High | Persistent memory |
Learn more about managing complex conversation flows.