📏 Context Length

Understand context windows and token limits. Learn how to optimize your prompts for maximum context utilization.

Context Windows by Model

Each model has a maximum context window that includes both input and output tokens.

Mythic-4 Mini: 128K token context window, max output 8K tokens
Mythic-4 Ultra: 1M token context window, max output 32K tokens

Context Breakdown

How Context is Divided (Example: 200K Context)

System prompt: 2K tokens
User input: 182K tokens
Model output: 16K tokens
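As a sketch, the split above reduces to simple token arithmetic (the numbers below are the example values from this breakdown, not fixed limits):

```python
def context_budget(window: int, system: int, max_output: int) -> int:
    """Return the tokens left for user input after reserving space
    for the system prompt and the model's output."""
    remaining = window - system - max_output
    if remaining < 0:
        raise ValueError("system prompt + max output exceed the window")
    return remaining

# Example values from the 200K breakdown above
print(context_budget(200_000, 2_000, 16_000))  # 182000
```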

What Can You Fit?

With 200K tokens, you can process approximately:

📄 ~500 pages of text
📚 ~2 full novels
💻 ~50K lines of code
📧 ~1,000 email threads
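These rough capacities follow from simple per-item token estimates. A sketch, where the tokens-per-item figures are assumptions chosen to match the numbers above (roughly 400 tokens per page, 100K per novel), not official tokenizer measurements:

```python
# Assumed average token cost per item (illustrative, not official figures)
TOKENS_PER = {
    "pages of text": 400,
    "full novels": 100_000,
    "lines of code": 4,
    "email threads": 200,
}

def capacity(window: int) -> dict:
    """Roughly how many of each item fit in a given context window."""
    return {item: window // cost for item, cost in TOKENS_PER.items()}

for item, count in capacity(200_000).items():
    print(f"~{count:,} {item}")
```

Real counts vary with content: dense code or non-English text can tokenize very differently.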

Check Available Context

Python
from mythicdot import MythicDot

client = MythicDot()

# Get model info including context limits
model = client.models.retrieve("mythic-4")
print(f"Model: {model.id}")
print(f"Context window: {model.context_window:,} tokens")
print(f"Max output: {model.max_output_tokens:,} tokens")

# Count tokens before sending
count = client.messages.count_tokens(
    model="mythic-4",
    messages=my_messages,
    system=my_system_prompt,
)

remaining = model.context_window - count.input_tokens - model.max_output_tokens
print(f"Available context: {remaining:,} tokens")

Optimization Tips

📝 Compress System Prompts

Keep system prompts concise. Every token in your system prompt reduces available context for user input.

🗜️ Summarize Long Contexts

For long conversations, periodically summarize older messages to maintain context while reducing token usage.
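One way to sketch this trimming step, assuming messages are plain role/content dicts and a `summarize` helper that is stubbed here (in practice you would likely call the model itself to condense the older turns):

```python
def summarize(messages: list[dict]) -> str:
    """Stub summarizer: in a real app, call the model to condense
    these messages; here we just join truncated snippets."""
    snippets = [m["content"][:40] for m in messages]
    return "Summary of earlier conversation: " + " | ".join(snippets)

def compact_history(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    """Replace all but the most recent messages with one summary message."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = {"role": "user", "content": summarize(older)}
    return [summary] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
print(len(compact_history(history)))  # 5: one summary + 4 recent messages
```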

📊 Use Structured Data

JSON and structured formats are often more token-efficient than verbose prose descriptions.
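For illustration, using word count as a crude proxy for tokens (real tokenizer counts will differ, but the relative gap is similar):

```python
import json

# The same facts as free-form prose vs. compact JSON
prose = ("The user is named Ada Lovelace, she is 36 years old, "
         "and her preferred language is Python.")
structured = json.dumps({"name": "Ada Lovelace", "age": 36, "language": "Python"})

# Crude proxy: fewer words usually means fewer tokens
print(len(prose.split()), "vs", len(structured.split()))
```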

🎯 Relevant Context Only

Use RAG or semantic search to include only the most relevant context, not everything.
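A minimal retrieval sketch, using naive keyword overlap in place of a real embedding-based semantic search (in production you would score chunks with embeddings or a search index):

```python
def score(query: str, chunk: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Keep only the k most relevant chunks to include in the prompt."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

docs = [
    "Billing FAQ: how invoices are issued each month",
    "Context windows limit how many tokens the model can read",
    "Office locations and parking information",
]
print(top_k("what is the model context window token limit", docs, k=1))
```

Only the selected chunks are sent with the prompt, so the rest of the corpus costs no context at all.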

Need More Context?

Our Mythic-4 Ultra model offers 1 million tokens of context.

View All Models →
Token Counting →