✨ New API

Responses API

Stateful conversations with built-in memory. No more managing message arrays: just send your prompt and we handle the rest.

Why Responses API?

🧠

Built-in Memory

Conversation history is stored and managed automatically on our servers.

⚡

Simpler Code

No need to manage message arrays. Just pass a thread ID and new input.

💰

Cost Efficient

Automatic context caching for multi-turn conversations saves tokens.

Chat Completions vs Responses

Chat Completions (Stateless)

# You manage conversation history
messages = []

messages.append({"role": "user", "content": "Hi"})
response = client.chat.completions.create(
    model="mythic-4",
    messages=messages
)
messages.append(response.choices[0].message)

messages.append({"role": "user", "content": "How are you?"})
response = client.chat.completions.create(
    model="mythic-4",
    messages=messages  # Send ALL history
)

Responses API (Stateful)

# We manage conversation history
response = client.responses.create(
    model="mythic-4",
    input="Hi"
)

# Automatically includes context
response = client.responses.create(
    model="mythic-4",
    input="How are you?",
    previous_response_id=response.id
)

How It Works

💬
User Input
→
📚
Load Context
→
🤖
Generate
→
💾
Save Response

Quick Start

Python
from mythicdot import MythicDot

client = MythicDot()

# First message - starts new conversation
response = client.responses.create(
    model="mythic-4",
    input="What's the capital of France?"
)
print(response.output_text)  # "The capital of France is Paris."

# Follow-up - automatically has context
response = client.responses.create(
    model="mythic-4",
    input="What's the population there?",
    previous_response_id=response.id
)
print(response.output_text)  # "Paris has a population of about 2.1 million..."

💡

The previous_response_id links responses together into a conversation. Each response knows about all previous messages in the chain.
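One way to picture this: each response points back at its predecessor, so a conversation is a reverse-linked chain that can be walked to recover the full history. The dict below is a toy illustration of that structure, not the client's real data model.

```python
# Toy model: each response records a pointer to the one before it.
responses = {
    "resp_1": {"previous": None,     "input": "Hi"},
    "resp_2": {"previous": "resp_1", "input": "How are you?"},
    "resp_3": {"previous": "resp_2", "input": "Tell me a joke"},
}

def history(response_id):
    # Walk the previous_response_id chain back to the first turn.
    chain = []
    while response_id is not None:
        chain.append(responses[response_id]["input"])
        response_id = responses[response_id]["previous"]
    return list(reversed(chain))

print(history("resp_3"))   # ['Hi', 'How are you?', 'Tell me a joke']
```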

Features

✓

Automatic Context Caching

Previous conversation turns are cached, reducing costs by up to 75% on multi-turn conversations.

✓

Built-in Tool Calling

Define tools once and they persist across the conversation. The model remembers tool outputs.

✓

File Attachments

Attach images, PDFs, or documents. They remain accessible throughout the conversation.

✓

Response Retrieval

Fetch any previous response by ID. Great for resuming conversations or debugging.
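To see why the caching feature matters, compare token counts. With stateless calls the prompt grows every turn, so total input tokens grow quadratically with turn count; with cached context only the new tokens are sent each turn. A back-of-envelope sketch (the 50-tokens-per-turn figure is an assumption for illustration):

```python
per_turn = 50   # assumed tokens added per turn (user + assistant)
turns = 10

# Stateless: call t resends all t turns of history.
stateless = sum(per_turn * t for t in range(1, turns + 1))

# Stateful with caching: only the new turn is sent each time.
cached = per_turn * turns

print(stateless, cached)   # 2750 500
```

The gap widens as conversations get longer, which is where the multi-turn savings come from.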

With Tools

Python - Function Calling
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"}
        }
    }
}]

# Model can call tools across the conversation
response = client.responses.create(
    model="mythic-4",
    input="What's the weather in Tokyo?",
    tools=tools
)

# If tool call requested, provide result
if response.output[0].type == "function_call":
    result = get_weather("Tokyo")  # Your function
    response = client.responses.create(
        model="mythic-4",
        input=[{"type": "function_call_output", "output": result}],
        previous_response_id=response.id
    )
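Locally, the "provide result" step usually amounts to parsing the model's JSON arguments and routing to the named function. The sketch below assumes the `function_call` item carries `name` and `arguments` fields, mirroring the example above; the exact item shape and `get_weather` are illustrative.

```python
import json

def get_weather(city):
    # Hypothetical tool implementation.
    return f"22°C and sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(call):
    # Parse the model's JSON-encoded arguments and call the named tool.
    args = json.loads(call["arguments"])
    return TOOLS[call["name"]](**args)

call = {"type": "function_call", "name": "get_weather",
        "arguments": '{"city": "Tokyo"}'}
print(dispatch(call))   # 22°C and sunny in Tokyo
```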

API Reference

Parameter             Type            Description
model                 string          Model ID (required)
input                 string | array  User input or content items (required)
previous_response_id  string          ID of the previous response, for multi-turn conversations
instructions          string          System instructions (like a system prompt)
tools                 array           Available functions/tools
stream                boolean         Enable streaming (default: false)
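The table can be summarized as a request shape with two required fields and sensible defaults for the rest. The dataclass below is an illustrative mirror of the parameters, not the client library's actual types.

```python
from dataclasses import dataclass, field
from typing import Optional, Union

@dataclass
class ResponsesRequest:
    # Mirrors the parameter table; only model and input are required.
    model: str
    input: Union[str, list]
    previous_response_id: Optional[str] = None
    instructions: Optional[str] = None
    tools: list = field(default_factory=list)
    stream: bool = False

req = ResponsesRequest(model="mythic-4", input="Hi")
print(req.stream)   # False
```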

Retrieving Responses

Python
# Get a specific response
response = client.responses.retrieve("resp_abc123")
print(response.output_text)

# Delete a response (and its context)
client.responses.delete("resp_abc123")