Stateful conversations with built-in memory. No more managing message arrays: just send your prompt and we handle the rest.
Conversation history is stored and managed automatically on our servers.
No need to manage message arrays. Just pass the previous response's ID and your new input.
Automatic context caching for multi-turn conversations saves tokens.
# You manage conversation history
messages = []
messages.append({"role": "user", "content": "Hi"})
response = client.chat.completions.create(
    model="mythic-4",
    messages=messages
)
messages.append(response.choices[0].message)
messages.append({"role": "user", "content": "How are you?"})
response = client.chat.completions.create(
    model="mythic-4",
    messages=messages  # Send ALL history
)
# We manage conversation history
response = client.responses.create(
    model="mythic-4",
    input="Hi"
)

# Automatically includes context
response = client.responses.create(
    model="mythic-4",
    input="How are you?",
    previous_response_id=response.id
)
from mythicdot import MythicDot

client = MythicDot()

# First message - starts new conversation
response = client.responses.create(
    model="mythic-4",
    input="What's the capital of France?"
)
print(response.output_text)
# "The capital of France is Paris."

# Follow-up - automatically has context
response = client.responses.create(
    model="mythic-4",
    input="What's the population there?",
    previous_response_id=response.id
)
print(response.output_text)
# "Paris has a population of about 2.1 million..."
The previous_response_id links responses together into a conversation. Each response knows about all previous messages in the chain.
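Conceptually, the server rebuilds the full conversation by walking the previous_response_id chain back to the first turn. A minimal in-memory sketch of that linking, in plain Python (the function and variable names here are illustrative, not the real server implementation):

```python
import itertools

_store = {}                # response_id -> stored turn
_ids = itertools.count(1)  # simple sequential ID source

def create_response(input_text, previous_response_id=None):
    """Store a new turn, linked to its predecessor (if any)."""
    response_id = f"resp_{next(_ids)}"
    _store[response_id] = {"input": input_text, "previous": previous_response_id}
    return response_id

def context_for(response_id):
    """Rebuild the conversation by walking the chain backwards."""
    turns = []
    while response_id is not None:
        record = _store[response_id]
        turns.append(record["input"])
        response_id = record["previous"]
    return list(reversed(turns))

first = create_response("Hi")
second = create_response("How are you?", previous_response_id=first)
print(context_for(second))  # ['Hi', 'How are you?']
```

Because each response only stores a pointer to its predecessor, starting a new branch from any earlier response is as cheap as passing a different ID.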
Previous conversation turns are cached, reducing costs by up to 75% on multi-turn conversations.
Define tools once and they persist across the conversation. The model remembers tool outputs.
Attach images, PDFs, or documents. They remain accessible throughout the conversation.
Fetch any previous response by ID. Great for resuming conversations or debugging.
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"}
        }
    }
}]
# Model can call tools across the conversation
response = client.responses.create(
    model="mythic-4",
    input="What's the weather in Tokyo?",
    tools=tools
)

# If tool call requested, provide result
if response.output[0].type == "function_call":
    result = get_weather("Tokyo")  # Your function
    response = client.responses.create(
        model="mythic-4",
        input=[{"type": "function_call_output", "output": result}],
        previous_response_id=response.id
    )
| Parameter | Type | Description |
|---|---|---|
| model | string | Model ID (required) |
| input | string or array | User input or content items (required) |
| previous_response_id | string | ID of previous response for multi-turn |
| instructions | string | System instructions (like system prompt) |
| tools | array | Available functions/tools |
| stream | boolean | Enable streaming (default: false) |
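The table above maps directly onto the request body. A sketch of a full payload as a plain dict, with every parameter filled in (the values are illustrative):

```python
# All parameters from the table; only model and input are required
payload = {
    "model": "mythic-4",                        # model ID (required)
    "input": "Summarize our conversation.",     # string or content-item array
    "previous_response_id": "resp_abc123",      # continue an existing thread
    "instructions": "Answer in one sentence.",  # system-style guidance
    "tools": [],                                # available functions/tools
    "stream": False,                            # default: False
}
print(sorted(payload))
```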
# Get a specific response
response = client.responses.retrieve("resp_abc123")
print(response.output_text)

# Delete a response (and its context)
client.responses.delete("resp_abc123")